http://arxiv.org/abs/2307.04810v1 [physics.class-ph]
In this paper, we undertake to present a self-contained and thorough analysis of the gravitational three body problem, with anticipated application to the Great Inequality of Jupiter and Saturn. The analysis of the three body Lagrangian is very convenient in heliocentric coordinates with Lagrange multipliers, the coordinates being the vector-sides r⃗_i, i=1,2,3 of the triangle that the bodies form. In two dimensions to begin with, the equations of motion are formulated into a dynamical system for the polar angles θ_i, angular momenta ℓ_i and eccentricity vectors e⃗_i. The dynamical system is simplified considerably by a change of variables to certain auxiliary vectors f⃗_i=r̂_i+e⃗_i. We then begin to formulate the Hamiltonian perturbation theory of the problem, now in three dimensions. We first give the geometric definitions for the Delaunay action-angle variables of the two body problem. We express the three body Hamiltonian in terms of Delaunay variables in each sector i=1,2,3, revealing that it is a nearly integrable Hamiltonian. We then present the KAM theory perturbative approach that will be followed in future work, including the modification that will be required because the Hamiltonian is degenerate.
Great Inequality of Jupiter and Saturn I: The Planetary Three Body Problem, Heliocentric development by Lagrange multipliers, Perturbation Theory Formulation
Jonathan Tot
Department of Mathematics and Statistics, Dalhousie University,
Halifax, Nova Scotia, Canada B3H 4R2
mailto:[email protected]@dal.ca
S.R. Valluri
Department of Physics and Astronomy, University of Western Ontario
and Mathematics, King’s University College
London, Ontario, Canada N6A 3K7
mailto:[email protected]@uwo.ca
P.C. Deshmukh
CAMOST, IIT Tirupati and IISER Tirupati,
Tirupati, Andhra Pradesh 517619, India
mailto:[email protected]@iittp.ac.in
August 12, 2023
§ INTRODUCTION
The three-body problem and the Great Inequality of Jupiter and Saturn were much studied in the 18th and 19th centuries. They are part of the history of the development of Celestial Mechanics, inaugurated by the publication of Newton's Principia in 1687 <cit.>, and many of the great names that come down to us from that time were heavily involved, chiefly through the mathematical techniques proposed and analyses completed by Leonhard Euler, Joseph-Louis Lagrange, and finally Pierre-Simon Laplace, in his Théorie de Jupiter et de Saturne, written in 1785 <cit.>. These advances were enabled by increasing observational accuracy during the time of John Flamsteed, the first Astronomer Royal, and by the astronomical forecasts of such figures as Cassini and Halley <cit.>.
Following the grand realization, due to Laplace, that the observed discrepancies of Jupiter and Saturn’s motions from the essentially Keplerian predictions could actually be accounted for by the mutual Newtonian gravitation of the planets, other researchers into the 19th century continued to make improved theories and contributions to the methods, perhaps most notably the work of G.W. Hill <cit.> and the series manipulations of Charles Delaunay <cit.>.
The study of the three body problem, as a mathematical problem in its own right, also began with the widespread acknowledgement of Newtonian universal gravitation. Euler (1767) and Lagrange (1772) both initially found particular periodic solutions, which today are understood to be associated with central configurations of the system <cit.>. Much of the early focus was given to the restricted three body problem, in which one mass is negligible relative to the other two, so that the larger masses form a Keplerian system, and the third body orbits in the gravitational field of the first two. It was Euler who first set the (circular) restricted three body problem in rotating coordinates, and Lagrange who demonstrated the existence of equilibrium points within the rotating frame, today known as the five Lagrange points, which correspond to five orbits for the third mass with the same orbital frequency as the two-body system.
In the late 1800s, Poincaré studied the general three body problem. Famously, his work won the prize competition held in honour of the 60th birthday of Oscar II, King of Sweden and Norway, announced in 1885. Poincaré's work on the three body problem led him to consider what are now known as Poincaré sections and first return maps, and these insights ultimately led to the development of Kolmogorov-Arnold-Moser theory in the mid-20th century <cit.>. In the three-volume Les Méthodes Nouvelles de la Mécanique Céleste (1892-99) <cit.>, Poincaré saw, in the unpredictable nature of three-body problem solutions, the first glimpses of chaotic dynamics, which dominates much of dynamical systems analysis today <cit.>.
In this work we present a self-contained analysis of the planetary three body problem, in which two lighter masses orbit a heavier central body, with particular application to the Sun-Jupiter-Saturn system. Working with modern mathematical notation and physical terminology, one aim of this work is pedagogical in nature, that the subject matter would be more accessible to modern audiences. For readers who are new to the subject, we put classical terminology in italics upon their first occurrence and definition in the text. Jupiter and Saturn’s conjunction in December 2020 gained much attention in the media <cit.>, so popular presentation of this work should serve to foster public engagement in mathematics and the sciences.
Crucial to our analysis is the work of Broucke and Lass <cit.>, who formulate the Lagrangian problem in terms of the three vector-sides of the triangle that the bodies form, using Lagrange multipliers. In Part <ref>, we present this treatment of the three body problem in heliocentric coordinates. In <ref> we show how total energy and total angular momentum are conserved in this scheme. Staying in two dimensions in <ref>, we transform the equations of motion into a system of first order ODEs for the polar angles, angular momenta and eccentricity vectors e⃗_i of the three sectors i=1,2,3 of the model. We also demonstrate how the constraint can be employed to remove the third sector entirely, leaving equations for only i=1,2, corresponding to the planets. Auxiliary vectors f⃗_i=r̂_i+e⃗_i for each sector are introduced in <ref>, which simplify the algebraic form of the dynamical equations considerably. The geometric properties of the auxiliary vectors are explored. We also present alternate forms of these equations in <ref>, in terms of the polar representations (e_i,β_i), (f_i,ψ_i) of the eccentricity and auxiliary vectors, respectively.
In Part <ref> we return to three dimensions, and move toward the perturbational analysis of this problem. In <ref> we begin with the geometric definition and construction of the Delaunay action-angle variables for the two-body problem, with particular focus on the mean anomaly. In <ref> we present Hamilton's equations for the problem, in terms of the perturbing function 𝐑=-λ⃗·(r⃗_1+r⃗_2+r⃗_3), where λ⃗ is the Lagrange multiplier, r⃗_1+r⃗_2+r⃗_3=0 being the constraint. Finally, in <ref> we present the basic approach and the setup of Kolmogorov-Arnold-Moser (KAM) theory for this problem.
PART:
The Planetary Three Body Problem
Let the masses of the light bodies be m_J and m_S respectively, and M the mass of the heavier body. Let μ=m_S/m_J, expected to be 𝒪(1), while ϵ=m_J/M is small, ϵ≪ 1. For Jupiter, Saturn and the Sun these are
M=1.989×10^30 kg, m_J=1.898×10^27 kg, m_S=5.683×10^26 kg;  ϵ=9.54..×10^-4≈10^-3, μ=0.2994..≈0.3
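As a quick numerical sanity check (ours, not part of the original text), the quoted ratios follow directly from these masses; the snippet below also evaluates the coefficient δ=m_S/M_Σ that is introduced later in this Part.

```python
# Sanity check of the quoted mass ratios (values from the text).
M   = 1.989e30   # kg, Sun
m_J = 1.898e27   # kg, Jupiter
m_S = 5.683e26   # kg, Saturn

eps = m_J / M                # ~9.54e-4
mu  = m_S / m_J              # ~0.2994
M_sig = M + m_J + m_S        # total mass M_Sigma
delta = m_S / M_sig          # ~2.854e-4, the delta introduced below

print(f"epsilon = {eps:.4e}, mu = {mu:.4f}, delta = {delta:.4e}")
```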
Let the positions of the bodies in an inertial reference
frame be X⃗ for mass M, and x⃗_J,x⃗_S for m_J,m_S. Then the Lagrangian is
ℒ_in=1/2(MẊ⃗̇^2+m_Jẋ⃗̇_J^2+m_Sẋ⃗̇_S^2)+G(M m_J/|x⃗_J-X⃗|+M m_S/|x⃗_S-X⃗|+m_J m_S/|x⃗_S-x⃗_J|)
We will make a change of variables to: the center of mass and heliocentric coordinates
R⃗ =MX⃗+m_Jx⃗_J+m_Sx⃗_S/M_Σ
r⃗_J =x⃗_J-X⃗
r⃗_S =X⃗-x⃗_S
where M_Σ=M+m_J+m_S is the total mass. With these the Lagrangian becomes
ℒ_in= 1/2M_ΣṘ⃗̇^2+1/2M/M_Σ{ m_Jṙ⃗̇_J^2+m_Sṙ⃗̇_S^2+(m_J m_S/M)|ṙ⃗̇_J+ṙ⃗̇_S|^2}
+m_JM/M_Σ· GM_Σ(r_J^-1+(m_S/m_J) r_S^-1+(m_S/M)/|r⃗_J+r⃗_S|)
We see the center-of-mass coordinate R⃗ is cyclic; its dynamics decouple from the other variables, and so can be ignored. We will consequently drop the first term of (<ref>). Then taking a factor m_J out of the kinetic terms, we can see both remaining
terms are proportional to m̃_J=M/M_Σm_J, which we may regard as a reduced mass of `Jupiter'. Then a reduced and normalized Lagrangian is
𝐋_in=ℒ_in/m̃_J=1/2(ṙ⃗̇_J^2+μṙ⃗̇_S^2+ϵμ|ṙ⃗̇_J+ṙ⃗̇_S|^2)+α(r_J^-1+μ r_S^-1+ϵμ/|r⃗_J+r⃗_S|)
where α=GM_Σ is the gravitational parameter. Often α is taken to be 1. However, we will work in units of Jupiter's average distance and Jupiter's year, so that we should take α=R^3ω^2=4π^2.
At this stage we recognize 1) that the first and second terms of each parentheses, what can be called the `Jupiter' and `Saturn' terms, are just like the terms for the displacement vector of a two body problem, with large central mass at the origin, and 2) that the third terms, proportional to ϵμ, also look like two-body problem terms, but with displacement vector r⃗_SJ=±(r⃗_J+r⃗_S). Taking the minus-sign option, which gives r⃗_SJ=x⃗_S-x⃗_J, we find we are dealing with a Lagrangian for three independent two-body problems, with vectors r⃗_J,r⃗_S and r⃗_SJ that satisfy the condition r⃗_J+r⃗_S+r⃗_SJ=0. Looking back at our definition of these vectors in terms of inertial coordinates X⃗,x⃗_J,x⃗_S, the constraint is satisfied identically:
(x⃗_J-X⃗)+(X⃗-x⃗_S)+(x⃗_S-x⃗_J)≡0
We see that the three vectors are the sides of the triangle that the three bodies form. The system is analogous to three bodies, of relative masses 1,μ and ϵμ with respect to the first mass, all orbiting a central, stationary mass located at the origin, such that the gravitational parameter for each two-body problem is α=GM_Σ. Of particular note is that these three supposed bodies orbiting a central mass do not gravitate to each other. The constraint is maintained by forces that are sourced by Lagrange multipliers. We thus consider the modified Lagrangian
𝐋_λ=∑_i{μ_i(1/2ṙ⃗̇_i^2+α/r_i)+λ⃗·r⃗_i}
where the sum is on i=J,S,SJ or simply i=1,2,3, and μ_i=1,μ,ϵμ. The additional terms are λ⃗·∑_i=1^3r⃗_i, so that the constraint equation is
∑_i=1^3r⃗_i =0
The Euler-Lagrange equations are
μ_ir̈⃗̈_i=-μ_iα r⃗_i/r_i^3+λ⃗ , i=1,2,3.
The dynamical version of the constraint equation is
∑_i=1^3r̈⃗̈_i =0
If, in addition to this, we have initial conditions that satisfy both
∑_i=1^3r⃗_i =0 and ∑_i=1^3ṙ⃗̇_i =0
then the condition (<ref>) will be satisfied for all time. This allows us to solve for the Lagrange multipliers by taking linear combination of the equations (<ref>). We have
0=∑_i=1^3r̈⃗̈_i=-α(∑_i=1^3r̂_i/r_i^2)+(∑_i=1^31/μ_i)λ⃗
and thus
λ⃗=αδ∑_i=1^3r̂_i/r_i^2
where the coefficient δ is the reciprocal of the sum of reciprocal masses
δ=(∑_i=1^31/μ_i)^-1=ϵμ/(1+ϵ+ϵμ)=m_S/M_Σ=m̃_S/M=2.8536..× 10^-4
The equations (<ref>) are
r̈⃗̈_i=A_ijr̂_j/r_j^2 (summation on j)
where the matrix A of coefficients is
A =α([ -(1+ϵ)/(1+ϵ+ϵμ) δ δ; ϵ/(1+ϵ+ϵμ) -(1+ϵμ)/(1+ϵ+ϵμ) ϵ/(1+ϵ+ϵμ); (1+ϵ+ϵμ)^-1 (1+ϵ+ϵμ)^-1 -ϵ(1+μ)/(1+ϵ+ϵμ) ])
=G([ -(M+m_J) m_S m_S; m_J -(M+m_S) m_J; M M -(m_J+m_S) ]).
That each column sums to 0 corresponds to r̈⃗̈_1+r̈⃗̈_2+r̈⃗̈_3=0.
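For illustration, here is a short numerical check (ours, with the SI mass values above) that the mass form of A has columns summing to zero, which is precisely the statement r̈⃗̈_1+r̈⃗̈_2+r̈⃗̈_3=0.

```python
import numpy as np

G = 6.674e-11                                   # m^3 kg^-1 s^-2
M, m_J, m_S = 1.989e30, 1.898e27, 5.683e26      # kg

# Coefficient matrix of r''_i = A_ij rhat_j / r_j^2 in its mass form.
A = G * np.array([[-(M + m_J),  m_S,         m_S         ],
                  [  m_J,      -(M + m_S),   m_J         ],
                  [  M,          M,         -(m_J + m_S) ]])

print(A.sum(axis=0))   # each column sums to zero (up to rounding)
```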
§ CONSTANTS OF MOTION
§.§ Energy
The conjugate linear momenta are
p⃗_i=∇_ṙ⃗̇_i𝐋_λ=μ_iṙ⃗̇_i
and by Legendre transform, the Hamiltonian is
𝐇_λ =∑_ip⃗_i·ṙ⃗̇_i -𝐋_λ
=∑_i{p_i^2/μ_i-(1/2p_i^2/μ_i+μ_iα/r_i+λ⃗·r⃗_i)}
=∑_i{p_i^2/2μ_i-μ_iα/r_i-λ⃗·r⃗_i }
thus when the constraint (<ref>) is satisfied, the following (reduced) energy is conserved
ξ=E/m̃_J=∑_iμ_i(1/2ṙ⃗̇_i^2-α/r_i)=∑_i μ_i ξ_i
where ξ_i=ṙ⃗̇_i^2/2-α/r_i is the specific energy for each sector of the model.
§.§ Angular Momentum
Now we put the dynamical vectors r⃗_i in polar coordinates. Up to this point, we could have been in three dimensions, but for now we work in two dimensions. Each vector is r⃗_i(t)=r_i(t)r̂_i(t), where r̂(θ)=(cosθ, sinθ)^T and r̂_i(t)=r̂(θ_i(t))
ṙ⃗̇_i^2=ṙ_i^2+r_i^2θ̇_i^2
momenta conjugate to θ_i are
h_i=∂ 𝐋_λ/∂θ̇_i=μ_i r_i^2θ̇_i
and the Euler-Lagrange equations are
ḣ_i=∂ 𝐋_λ/∂θ_i=∂/∂θ_i(λ⃗·r⃗_i)
=λ⃗·r_iθ̂_i
=λ⃗·r_iTr̂_i
=λ⃗·Tr⃗_i
where θ̂ is the vector function θ̂(φ)=(-sinφ, cosφ)^T, θ̂_i is the evaluation θ̂(θ_i(t)), and T is the 2×2 matrix T=[[ 0 -1; 1 0 ]], which is ccw-rotation by π/2, so that θ̂=Tr̂. This shows that the total angular momentum satisfies
ḣ=∑_i=1^3ḣ_i=λ⃗·T∑_ir⃗_i=0 by the constraint.
The presence of the rotation matrix T in ḣ shows that conservation of angular momentum is due to the fact that the system as a whole is invariant under global rotation.
Going forward, it will be best to use specific angular momenta
ℓ_i=h_i/μ_i=r_i^2θ̇_i
which gives
ℓ̇_i=(1/μ_i)λ⃗·r_iθ̂_i
Using the Lagrange multiplier (<ref>), this is
ℓ̇_i =αδ/μ_i(∑_jr̂_j/r_j^2)·r_iθ̂_i
∴ℓ̇_i =αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
we shall label these functions ℓ̇_i=τ_i=τ_i(r_j,θ_j,ℓ_j); τ for torque. Notice that these equations are perturbations for i=1,2, since μ_1,2∼𝒪(1), but this is not the case for i=3; μ_3=ϵμ∼𝒪(δ), so the coefficient in the equation for ℓ̇_3 is 𝒪(1).
§.§.§ Angular Momentum in Three Dimensions
In three dimensions, we have the vector (total) angular momentum
h⃗=∑_ir⃗_i×p⃗_i=∑_iμ_ir⃗_i×ṙ⃗̇_i
the time-derivative of which is
ḣ⃗̇ =∑_ir⃗_i×ṗ⃗̇_i
=∑_i r⃗_i×(-μ_iαr̂_i/r_i^2+λ⃗)
=(∑_ir⃗_i)×λ⃗
so angular momentum is conserved by the constraint.
§ RETURNING TO THE EQUATIONS OF MOTION
r̈⃗̈_i=A_ijr̂_j/r_j^2
With r⃗_i(t)=r_i(t)r̂_i(t), we develop these equations in polar coordinates. We will work out the r̂_i- and θ̂_i-components of the i^th equation of (<ref>).
ṙ⃗̇_i=ṙ_ir̂_i+r_iθ̇_iθ̂_i
so ṙ⃗̇_i^2=ṙ_i^2+r_i^2θ̇_i^2 , and
r̈⃗̈_i=(r̈_i-r_iθ̇_i^2)r̂_i+(r_iθ̈_i+2ṙ_iθ̇_i)θ̂_i .
The θ̂_i-component of r̈⃗̈_i is nothing other than ℓ̇_i/r_i; indeed the θ̂_i-components of (<ref>) are just the torque equations (<ref>) derived above. That leaves us with the r̂_i-components
r̈_i-r_iθ̇^2_i=(∑_j=1^3A_ijr̂_j/r_j^2)·r̂_i
with θ̇_i=ℓ_i/r_i^2 and A_ii=α(-1+δ/μ_i), while A_ij=αδ/μ_i for j≠ i, this is
r̈_i=ℓ_i^2/r_i^3+A_ii/r_i^2+αδ/μ_i∑_j≠ icos(θ_j-θ_i)/r_j^2
∴ r̈_i=ℓ_i^2/r_i^3-α r_i^-2+αδ/μ_i∑_j=1^3cos(θ_j-θ_i)/r_j^2
Here, in analogy to the definition we have for τ_i=ℓ̇_i=(αδ/μ_i) r_i∑_jsin(θ_j-θ_i)/r_j^2, we define three functions σ_i as
σ_i =1/μ_iλ⃗·r⃗_i
=αδ/μ_i r_i∑_j=1^3cos(θ_j-θ_i)/r_j^2
then what we have is
r̈_i=ℓ_i^2/r_i^3-α r_i^-2+σ_i/r_i
We define the right-hand sides as functions a_i=a_i(θ,ℓ,r).
§.§ Eccentricity or Laplace-Runge-Lenz Vectors
At this stage, we may formulate our differential equations as a system of 12 first order ODEs
θ̇_̇i̇ =Ω_i=ℓ_i/r_i^2
ℓ̇_i =τ_i=αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
ṙ_i =v_i
v̇_i =a_i=ℓ_i^2/r_i^3-α r_i^-2+σ_i/r_i
Now we introduce a change of variables from (r_i,v_i) to eccentricity vectors, in each sector
e⃗=ṙ⃗̇×ℓ⃗/α-r̂.
This is a normalization of the Laplace-Runge-Lenz vector
A⃗=p⃗×L⃗-m^2α r̂=m^2α e⃗
where p⃗ is linear momentum and L⃗ is the (dimensionful) angular momentum. In the two-body problem, these vectors are constants of motion. The eccentricity vector has magnitude equal to the eccentricity of the Keplerian orbit and points in the direction of periapsis. Working in two dimensions, our specific angular momenta are out of the plane, ℓ⃗=ℓẑ, so that in polar coordinates
e⃗ =(v r̂+r θ̇ θ̂)×(ℓ ẑ/α)-r̂
=(ℓ^2/(α r)-1)r̂-(ℓ v/α) θ̂
(where ℓ/r^2 has been substituted for θ̇). This defines the eccentricity vector in terms of its polar components
e^r=ℓ^2/(α r)-1, e^θ=-ℓ v/α
the reverse change of variables being
r =(ℓ^2/α)/(1+e^r)
v =-αe^θ/ℓ.
Equation (<ref>) is precisely the form of a Keplerian elliptic orbit if ℓ and e⃗ are constant, as in the two-body problem, since
r=(ℓ^2/α)/(1+e^r) =(ℓ^2/α)/(1+e⃗·r̂)
=(ℓ^2/α)/(1+e^xcosθ+e^ysinθ)
=(ℓ^2/α)/(1+ecos(θ-β))
where β is the angle of e⃗ from the positive x-axis, called the longitude of periapsis. Equation (<ref>) thus describes the osculating orbit to the trajectory, which is the elliptic orbit that a body would follow given its instantaneous angular momentum and eccentricity. As a further note, if we take the time-derivative of (<ref>), writing τ for ℓ̇, we find
v=ṙ=1/α(2ℓτ/(1+e^r)-ℓ^2/(1+e^r)^2(ė⃗̇·r̂+(ℓ/r^2)e⃗·θ̂))=2ℓτ/(α(1+e^r))-(α r^2/ℓ^2)ė⃗̇·r̂-αe^θ/ℓ
which reduces to precisely (<ref>) if τ and ė⃗̇ are 0.
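A small sketch (ours, not the authors' code; units with α=4π² as chosen above) of this change of variables: compute (e^r,e^θ) from a planar state and recover r from the osculating relation.

```python
import numpy as np

alpha = 4 * np.pi**2        # Jupiter-distance / Jupiter-year units

def ecc_components(r, v, l):
    """Polar components of the eccentricity vector for radius r,
    radial speed v, and specific angular momentum l."""
    e_r  = l**2 / (alpha * r) - 1.0
    e_th = -l * v / alpha
    return e_r, e_th

r, v, l = 1.05, 0.3, 0.98 * 2 * np.pi
e_r, e_th = ecc_components(r, v, l)
print("eccentricity:", np.hypot(e_r, e_th))
print("r recovered :", (l**2 / alpha) / (1 + e_r))   # equals r identically
```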
§.§ System of Equations using Eccentricity vectors
We can differentiate the definitions (<ref>) and use the system (<ref>-<ref>) to derive first order differential equations for the eccentricity vectors ė⃗̇_i=…(θ,ℓ,e⃗ ). First working in one sector (no indices i), writing τ for ℓ̇, we have
ė⃗̇ =1/α(2ℓτ/r-ℓ^2v/r^2)r̂+(ℓ^2/(α r)-1)(ℓ/r^2)θ̂-(τ v+ℓ r̈) θ̂/α+(ℓ v/α)(ℓ/r^2)r̂
= (2ℓτ/α r)r̂-(ℓ/α)(r̈-ℓ^2/r^3+α r^-2+vτ/ℓ)θ̂.
Substituting v=-α e^θ/ℓ and r̈=a=ℓ^2/r^3-α r^-2+σ/r gives
ė⃗̇ =(2ℓτ/α r)r̂+(τ e^θ/ℓ-ℓσ/α r)θ̂.
Pairing half of the first term with the second θ̂-term,
ė⃗̇ =(ℓτ/α r)r̂+(τ/ℓ)e^θθ̂+(ℓ/α r)(τr̂-σθ̂).
In the first term we substitute ℓ/(α r)=(1+e^r)/ℓ:
ė⃗̇ =(τ/ℓ)(1+e^r)r̂ + (τ/ℓ)e^θθ̂+(ℓ/α r)(τr̂-σθ̂)
=(τ/ℓ)(r̂+e^rr̂+e^θθ̂)+(ℓ/α r)(τr̂-σθ̂).
Thus we find
ė⃗̇ =(τ/ℓ)(e⃗+r̂)+(ℓ/α r)(τ r̂-σ θ̂).
The combination (τ r̂-σ θ̂)/r is, in each sector
(τ_i r̂_i-σ_i θ̂_i)/r_i= (αδ/μ_i)(∑_j=1^3sin(θ_j-θ_i)/r_j^2)(cosθ_i, sinθ_i)^T
-(αδ/μ_i)(∑_j=1^3cos(θ_j-θ_i)/r_j^2)(-sinθ_i, cosθ_i)^T
= (αδ/μ_i) ∑_j=1^3r_j^-2(sinθ_j, -cosθ_j)^T
= -(αδ/μ_i) ∑_j=1^3r_j^-2θ̂_j=-(αδ/μ_i) ∑_j=1^3r_j^-2 Tr̂_j
∴ (τ_i r̂_i-σ_i θ̂_i)/r_i= -Tλ⃗/μ_i
So finally, our differential equations for ė⃗̇_i are
ė⃗̇_i=τ_i/ℓ_i(e⃗_i+r̂_i)-ℓ_i/αμ_i Tλ⃗.
Observe that these equations are ∝ δ/μ_i, so that ė⃗̇_i are 𝒪(δ) for i=1,2, i.e. Jupiter and Saturn, while ė⃗̇_3 is 𝒪(1). The system of twelve first order ODEs for the variables (θ_i,ℓ_i,e⃗_i) is
θ̇_̇i̇ =Ω_i=ℓ_i/r_i^2
ℓ̇_i =τ_i=αδ/μ_ir_i∑_j≠ ir_j^-2sin(θ_j-θ_i)
ė⃗̇_i =τ_i/ℓ_i(e⃗_i+r̂_i)-ℓ_i/αμ_iTλ⃗ for i=1,2,3
where r_i=ℓ_i^2/[α(1+e^r_i)]
Mixing between the sectors enters through the torques τ_i and λ⃗.
§.§ Reduction by the Constraint to Two Sectors
Given that solutions will satisfy the constraint 0=∑_ir⃗_i, we can write down algebraic/trigonometric expressions for the variables of the 3rd sector r⃗_3=-r⃗_1-r⃗_2 in terms of the 1st and 2nd sectors. Substituting these relations into the i=1,2 equations
would leave the system (<ref>-<ref>) for i=1,2 only, and the equations for ℓ̇_i,ė⃗̇_i would all be 𝒪(δ).
In particular, if r⃗_3=-r⃗_1-r⃗_2, then we can write expressions for the radius and trig ratios of the argument of r⃗_3 in terms of those of r⃗_1,r⃗_2. First of all, by the cosine-law
r_3^2=r_1^2+r_2^2+2 r_1r_2cos(θ_2-θ_1)
and then from basic trigonometry, we can express cosθ_3,sinθ_3 as follows
cosθ_3=-(r_1cosθ_1+r_2cosθ_2)/r_3
sinθ_3=-(r_1sinθ_1+r_2sinθ_2)/r_3
These may then be worked into the equations (<ref>,<ref>), still using r_i=α^-1ℓ_i^2/(1+e^r_i) but now only for i=1,2. This results in a system of first order ODEs θ̇_1=Ω_1, θ̇_2=Ω_2, ℓ̇_1=τ_1, ℓ̇_2=τ_2,ė⃗̇_1=v⃗_1,ė⃗̇_2=v⃗_2 where the angular velocities Ω_i are 𝒪(1), but the torques τ_i and eccentricity-velocities v⃗_i are 𝒪(δ), facilitating a multiple-scales analysis.
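The elimination of the third sector can be sketched numerically as follows (a minimal illustration of ours, with arbitrary sample values):

```python
import numpy as np

def third_sector(r1, th1, r2, th2):
    """(r_3, theta_3) from the constraint rvec_3 = -(rvec_1 + rvec_2)."""
    r3 = np.sqrt(r1**2 + r2**2 + 2 * r1 * r2 * np.cos(th2 - th1))  # cosine law
    cos3 = -(r1 * np.cos(th1) + r2 * np.cos(th2)) / r3
    sin3 = -(r1 * np.sin(th1) + r2 * np.sin(th2)) / r3
    return r3, np.arctan2(sin3, cos3)

vec = lambda r, t: r * np.array([np.cos(t), np.sin(t)])
r3, th3 = third_sector(5.2, 0.3, 9.5, 1.1)
print(vec(5.2, 0.3) + vec(9.5, 1.1) + vec(r3, th3))   # ~[0, 0]
```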
§ AUXILIARY VECTORS F⃗=R̂+E⃗
The expressions for the right-hand-sides of (<ref>-<ref>), especially the torques and eccentricity-velocities, in terms of the variables (θ,ℓ,e^r,e^θ), are very large and cumbersome. Written as rational functions with numerators and denominators expanded out, the torques have respectively 16 and 9 terms, and the eccentricity-velocities 222 and 9, not counting terms within 3/2-powers in the denominators.
The prevalence of the combination r̂_i+e⃗_i presents an opportunity for simplification. Observe that the denominator of the expression for the osculating orbit (<ref>) is the r̂_i-component of this vector
r=(ℓ^2/α)/(1+e^r)=(ℓ^2/α)/((r̂+e⃗ )·r̂).
The combination r̂_i+e⃗_i also occurs in the DEs for eccentricity vectors (<ref>). If we change variables from eccentricity vectors e⃗_i to these auxiliary vectors f⃗_i=r̂_i+e⃗_i, the modified equations become
r_i =ℓ_i^2/(α f_i^r)=ℓ_i^2/(α f⃗_i·r̂_i)=ℓ_i^2/(α f_icos(ψ_i-θ_i))
ḟ⃗̇_i =τ_i/ℓ_if⃗_i-ℓ_i/αμ_i(Tλ⃗)+d/dtr̂_i
=τ_i/ℓ_if⃗_i-ℓ_i/αμ_i(Tλ⃗)+Ω_i θ̂_i
where f_i=f⃗_i and ψ_i is the argument of f⃗_i (such that f⃗_i/f_i=r̂(ψ_i)). Let the point be laboured, that given this definition of f⃗_i, the differential equation is
ḟ⃗̇_i=d/dtr̂_i+𝒪(δ).
So to leading order, the vector f⃗_i will be very nearly equal to the unit vector r̂_i(t)=r̂(θ_i(t)). Indeed, the solution is exactly f⃗_i=r̂_i+e⃗_i. So solutions will have f_i∼ 1 and ψ_i∼θ_i for small initial eccentricity, at least for a finite duration after initial conditions.
§.§ Geometric relationship of the Auxiliary vectors
For ellipses with even moderate eccentricity, up to ∼0.5, the vector -f⃗=-r̂-e⃗ points, to leading order, from the position of the planet towards the center of the ellipse. Indeed, the vector rf⃗=r⃗+re⃗ is such that
r⃗-rf⃗=-re⃗,
while the coordinate of the center of the ellipse is -ae⃗, where a is the semi-major axis, as demonstrated in Fig. <ref>. So the degree to which these coincide is the degree to which r and a agree. In terms of eccentricity, semi-major axis and true anomaly ν=θ-β, the relationship is
r=a(1-e^2)/1+ecosν.
At minimum re=ae(1-e), while the maximum value is ae(1+e). Thus the location r⃗-rf⃗ lies within a segment of the major-axis which is a length a e^2 on either side of the ellipse center, as shown in Fig. <ref>. The error of -f⃗ pointing from the location of the planet to the center of the orbit is 𝒪(e^2). Specifically, the difference in angle of -re⃗ vs. -ae⃗ as seen from the position r⃗—in other words, the angle ∠ CPE—is e^2sinνcosν to leading order in e.
Moreover, if we consider the equation
ar̂=-ae⃗+af⃗,
then we see the following geometry: construct a circle around the focus of the ellipse, with radius a. If we continue the vector r⃗ from the focus out to radius a, we reach the point ar̂ on the circle. The position of the ellipse centre from the focus is the first term -ae⃗. Thus we see from (<ref>) that the vector af⃗ points from the centre of the ellipse to the position of the orbit projected radially to radius a, as shown in Fig. <ref>.
§.§ The Dynamical System in terms of the Auxiliary vectors
What might simplify the equations the most is to write the equations for ḟ⃗̇_i in polar form, for the components ḟ_i^r=d/dt(r̂_i·f⃗_i) and ḟ_i^θ=d/dt(θ̂_i·f⃗_i). That is, we have
f⃗_i =f_i^rr̂_i+f_i^θθ̂_i
and ḟ⃗̇_i =ḟ_i^rr̂_i+f_i^r(Ω_iθ̂_i) +ḟ_i^θθ̂_i+f_i^θ(-Ω_ir̂_i)
=(ḟ_i^r-Ω_i f_i^θ)r̂_i+(ḟ_i^θ+Ω_i f_i^r)θ̂_i
The equation (<ref>) for ḟ⃗̇_i becomes
ḟ_i^r =τ_i/ℓ_if_i^r+Ω_i f_i^θ-ℓ_i/αμ_i(r̂_i· Tλ⃗)
ḟ_i^θ =τ_i/ℓ_if_i^θ-Ω_i f_i^r-ℓ_i/αμ_i(θ̂_i· Tλ⃗)+Ω_i.
We know the components of Tλ⃗ from (<ref>)[Multiplying (<ref>) by -T, we can also find λ⃗=μ_i/r_i(σ_ir̂_i+τ_iθ̂_i).]
r̂_i·(-Tλ⃗)=μ_i τ_i/r_i
θ̂_i·(-Tλ⃗)=-μ_i σ_i/r_i
which gives
ḟ_i^r =τ_i/ℓ_if_i^r+ℓ_iτ_i/α r_i+Ω_i f_i^θ
ḟ_i^θ =τ_i/ℓ_if_i^θ-ℓ_iσ_i/α r_i+Ω_i(1-f_i^r) .
§.§ Final substitutions
Things may be made yet more concise. The osculating orbits are given by r_i=ℓ_i^2/α f_i^r, and the prevalent coefficients in (<ref>,<ref>) become
Ω_i =ℓ_i/r_i^2=α^2f_i^r^2/ℓ_i^3
ℓ_i/α r_i =f_i^r/ℓ_i
so that we find
ḟ_i^r =2τ_i/ℓ_if_i^r+α^2f_i^r^2/ℓ_i^3f_i^θ
ḟ_i^θ =(-σ_i/ℓ_i+α^2f_i^r(1-f_i^r)/ℓ_i^3)f_i^r+τ_i/ℓ_if_i^θ.
For completeness, we give the expressions for σ and τ in these terms
τ_i =α^2δ/μ_iℓ_i^2/f_i^r∑_j≠ if_j^r^2/ℓ_j^4sin(θ_j-θ_i)
σ_i =α^2δ/μ_i[f_i^r/ℓ_i^2+ℓ_i^2/f_i^r∑_j≠ if_j^r^2/ℓ_j^4cos(θ_j-θ_i)].
Equations (<ref>,<ref>) are best read in the 3-sector version of this problem (that is, if one does not eliminate r_3 and θ_3). If one wishes to work in only two sectors i=1,2, the substitutions (<ref>,<ref>,<ref>) should be made into (<ref>) and (<ref>).
We make the substitutions (<ref>,<ref>) into (<ref>) and (<ref>) as well, which become
ė⃗̇_i =τ_i/ℓ_i(e⃗_i+r̂_i)+f_i^r/ℓ_i(τ_ir̂_i-σ_iθ̂_i)
ḟ⃗̇_i =ė⃗̇_i+Ω_iθ̂_i
=τ_i/ℓ_if⃗_i+f_i^r/ℓ_i(τ_ir̂_i-σ_iθ̂_i)+α^2f_i^r^2/ℓ_i^3θ̂_i
In full then, the system of equations, in terms of the polar components of the auxiliary vectors f_i^r,f_i^θ are
θ̇_̇i̇ =Ω_i=α^2f_i^r^2/ℓ_i^3
ℓ̇_i =τ_i=(<ref>)
ḟ_i^r =2τ_i/ℓ_if_i^r+α^2f_i^r^2/ℓ_i^3f_i^θ
ḟ_i^θ =(-σ_i/ℓ_i+α^2f_i^r(1-f_i^r)/ℓ_i^3)f_i^r+τ_i/ℓ_if_i^θ.
Numerical solutions of (<ref>-<ref>) for Jupiter's eccentricity vector are shown in Fig. <ref>.
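For concreteness, here is a minimal sketch (ours, not the authors' code) of the right-hand side of this twelve-equation system, with the sector coefficients and α as above. Note that initial data must be chosen to satisfy the constraint ∑_i r⃗_i=0=∑_i ṙ⃗_i for the Lagrange-multiplier closure to be valid; producing such data is omitted here.

```python
import numpy as np

alpha = 4 * np.pi**2
eps, mu = 9.54e-4, 0.2994
mu_i = np.array([1.0, mu, eps * mu])            # sector coefficients mu_1,2,3
delta = eps * mu / (1 + eps + eps * mu)

def rhs(t, y):
    """y = (theta_1..3, l_1..3, f^r_1..3, f^theta_1..3)."""
    th, l, fr, fth = y.reshape(4, 3)
    r  = l**2 / (alpha * fr)                    # osculating radii
    Om = alpha**2 * fr**2 / l**3                # angular velocities Omega_i
    D  = th[None, :] - th[:, None]              # D[i, j] = theta_j - theta_i
    w  = 1.0 / r**2
    tau = (alpha * delta / mu_i) * r * (np.sin(D) * w).sum(axis=1)
    sig = (alpha * delta / mu_i) * r * (np.cos(D) * w).sum(axis=1)
    dfr  = 2 * tau / l * fr + Om * fth
    dfth = (-sig / l + alpha**2 * fr * (1 - fr) / l**3) * fr + tau / l * fth
    return np.concatenate([Om, tau, dfr, dfth])
```

Any standard ODE integrator (e.g. scipy.integrate.solve_ivp) can then propagate y from constraint-satisfying initial data.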
§ OTHER FORMULATIONS
For completeness, we present the following forms of the equations for both ė⃗̇_i and ḟ⃗̇_i. First of all, for ė⃗̇_i in the polar coordinates as above
ė_i^r =2τ_i/ℓ_i(1+e^r_i)+α^2(1+e_i^r)^2/ℓ_i^3e_i^θ
ė_i^θ =-(σ_i/ℓ_i+α^2e_i^r(1+e_i^r)/ℓ_i^3)(1+e_i^r)+τ_i/ℓ_ie_i^θ .
Of course, these are related to (<ref>,<ref>) as the components of e⃗_i,f⃗_i are related by
f_i^r=1+e_i^r, f_i^θ=e_i^θ.
Another option, using either the eccentricity or auxiliary vectors, is to express them in their own polar coordinates; that is,
e⃗_i(t) =e_i(t)r̂(β_i(t))
f⃗_i(t) =f_i(t)r̂(ψ_i(t)).
Here, e_i are the eccentricities (of the osculating orbits) and β_i are the longitudes of periapsis, whereas f_i are the lengths of the auxiliary vectors and ψ_i are the angles which f⃗_i make with the fixed reference (positive x-axis, for instance). These definitions give
ė⃗̇_i=ė_i r̂_β_i+e_i β̇_i θ̂_β_i
ḟ⃗̇_i=ḟ_i r̂_ψ_i+f_i ψ̇_̇i̇ θ̂_ψ_i
where here the notation r̂_× has been used for r̂(×(t)). From these, we derive differential equations for norms (e_i,f_i) and arguments (β_i,ψ_i) by taking r̂_β_i,θ̂_β_i-components[These are derived with the following helpful properties of the r̂,θ̂ functions: r̂(a)·r̂(b)=θ̂(a)·θ̂(b)=cos(a-b), r̂(a)·θ̂(b)=sin(a-b).] of (<ref>) and r̂_ψ_i,θ̂_ψ_i-components of (<ref>)
ė_i =r̂_β_i·ė⃗̇_i=τ_i/ℓ_i[e_i+(1+f_i^r)cos(θ_i-β_i)]+σ_i/ℓ_if_i^rsin(θ_i-β_i)
e_i β̇_i =θ̂_β_i·ė⃗̇_i
= τ_i/ℓ_i(1+f_i^r)sin(θ_i-β_i) -σ_i/ℓ_if_i^rcos(θ_i-β_i)
while the equations for f_i,ψ_i are
ḟ_i=r̂_ψ_i·ḟ⃗̇_i
=τ_i/ℓ_i(f_i+f_i^rcos(ψ_i-θ_i))+f_i^r/ℓ_i(α^2f_i^r/ℓ_i^2-σ_i)sin(ψ_i-θ_i)
f_i ψ̇_i=θ̂_ψ_i·ḟ⃗̇_i
= -τ_if_i^r/ℓ_isin(ψ_i-θ_i) + f_i^r/ℓ_i(α^2f_i^r/ℓ_i^2 - σ_i) cos(ψ_i-θ_i) .
The systems {θ̇_i,ℓ̇_i,ė_i,β̇_i } and {θ̇_i,ℓ̇_i,ḟ_i,ψ̇_i } can be closed by the relations
f_i^r=f_icos(ψ_i-θ_i)=1+e_i^r=1+e_icos(θ_i-β_i).
PART:
Perturbation Theory for the Three Body Problem
§ DELAUNAY VARIABLES - AN INTERPRETATION OF THE MEAN ANOMALY
We will not here reproduce the whole definition and derivation of action-angle variables for the two-body problem, nor the canonical transformation to what are known as the Delaunay variables. Suffice it to say that the Delaunay variables are a set of action-angle variables for the two-body problem, and that in these variables the Hamiltonian depends on none of the conjugate coordinates, revealing that the Hamiltonian is integrable and the dynamics remarkably simple. Excellent references for the material can be found in <cit.>.
The two-body problem, of masses m and M orbiting each other subject to an attractive inverse-square-of-distance force F⃗=-(k/r^3)r⃗, where r⃗ is the relative position of one body to the other, reduces to a “1-body" problem of a mass μ=mM/(m+M), called the reduced mass, moving about a fixed centre under the same inverse-square force, with the displacement of the reduced mass from the centre equal to the relative separation of the original bodies. This problem has the equation of motion μr̈⃗̈=-(k/r^3)r⃗, which may be given by the Lagrangian ℒ=(1/2)μ|ṙ⃗̇|^2+k/r; equivalently the Hamiltonian ℋ=|p⃗|^2/(2μ)-k/r, where the linear momentum is p⃗=∇_ṙ⃗̇ℒ=μṙ⃗̇. The only parameter essential to the problem is the ratio α=k/μ, and in the case of the gravitational two-body problem this is GmM/μ=G(M+m), called the standard gravitational parameter (sometimes `standard' is dropped).
Many things are well known of this problem and its solutions. It is straightforward to confirm that the angular momentum L⃗=μ r⃗×v⃗ is conserved, and that consequently the motion is planar. It is also well known that the orbits {r⃗(t)| t∈ T} are conic sections, including ellipses for negative energies E=μ v^2/2-k/r (with the fixed `centre' at one of the foci of the ellipse), which will be our focus. The physical size of an elliptical orbit is characterized by the semi-major axis a, and it is also well known that the period of elliptical trajectories is given by Kepler's Third Law: the square of an orbital period is proportional to the cube of the length of the semi-major axis
T^-2a^3=α/(4π^2)
or equivalently, a^3ω^2=α, where ω=2π/T=√(α/a^3), the angular frequency associated with period T, is called the mean motion.
Before describing the Delaunay variables, we introduce various `specific' quantities—quantities that are made `massless', dividing by the chosen mass scale. Thus the specific energy ξ=v^2/2-α/r, and specific angular momentum ℓ⃗=r⃗×v⃗, ℓ=|ℓ⃗|. These are both constants of the motion, given by ξ=-α/(2a) and ℓ=√(α a(1-e^2)).
With respect to a fixed orthonormal frame {x̂,ŷ,ẑ}, the Delaunay momenta are: the z-component of angular momentum ℓ^z, the total angular momentum ℓ, and finally a third action j, related to both the energy and semi-major axis, given as j=√(α a)=α/√(-2ξ). For lack of another name, I will refer to this as the impulse of the orbit. The coordinates are as follows[It should be noted that the Delaunay variables completely describe the 3D orientation of the elliptical orbit, as well as the progression of motion within the orbit. The coordinates Φ and η describe the axis of rotation of the orbital plane relative to the reference xy-plane and the position of periapsis within the orbital plane. From the momenta ℓ^z,ℓ,j one can determine a, e and ω—in other words, the size, shape and speed of motion within the orbit. The actual rotation of the orbital plane seems to be missing; that is, the inclination ζ. But since the angular momentum is perpendicular to the orbital plane, we can determine the inclination by cos(ζ)=ℓ^z/ℓ.]. Conjugate to ℓ^z is the longitude of the ascending node, indicated with Φ. This is the angle from the positive x-axis to the ray along which the plane of the orbit intersects the xy-plane with ż>0—this direction is called the ascending node. Conjugate to ℓ is the argument of periapsis, η: the angle from the ascending node to the ray connecting the central body to the position of closest approach along the orbit, or periapsis, measured in the orbital plane. The final angle, conjugate to the impulse j, is the so-called mean anomaly M. Unlike the previous two coordinates, M is not so straightforward to define geometrically. We must first construct the eccentric anomaly, denoted E, by considering a circle of radius a which circumscribes the ellipse, tangent at periapsis and apoapsis (A and B respectively). The construction is shown in Fig. <ref>. The centre C of this circle coincides with the centre of the ellipse. We describe the position P of the orbiting body around the ellipse by the angle subtended at the focus O from periapsis, called the true anomaly and denoted ν
O⃗P⃗=r⃗=r(ν) r̂(ν)≡ r(ν)(cosν,sinν)
r(ν) =a(1-e^2)/1+ecosν.
From P we project to the auxiliary circle perpendicular to the major axis, arriving at a point Q. The eccentric anomaly is the angle about C from periapsis to Q, E=∠ ACQ. Now, the ellipse can be recovered from the circle by a contraction of factor (1-e^2)^1/2 parallel to the minor axis. This reveals that, working in the orbital plane and relative to the centre C, the position P is
C⃗P⃗=(acos E, bsin E),
the axes being aligned with the major and minor axes, and b=a√(1-e^2) is the semi-minor-axis.
Since the separation of the centre and focus is CO=ae, we have the relations
cos E =(1-e^2)cosν/(1+ecosν)+e=(cosν+e)/(1+ecosν)
sin E =√(1-e^2)sinν/(1+ecosν)
tan(E/2) =√((1-e)/(1+e))tan(ν/2)
and the inverse relations
cosν =(cos E-e)/(1-ecos E)
sinν =√(1-e^2)sin E/(1-ecos E).
These relations give the separation of the orbit as
OP=r(E)=a(1-ecos E).
We move to constructing the mean anomaly by employing Kepler's Second Law: we know that equal area is swept out by the orbit in equal times. This can be expressed as the fact that the time-rate-of-change (t.r.o.c.) of the area 𝒜=AOP swept out in the ellipse is constant; in particular
d/dt(2𝒜)=ℓ.
Owing to the aforementioned contraction, the corresponding area in the auxiliary circle, 𝒜̃=AOQ, also grows uniformly with time
d/dt(2𝒜̃)=(1-e^2)^-1/2d/dt(2𝒜)=√(α a)=j.
Thus, we may see the significance of j as (twice) the t.r.o.c. of orbit-area projected to the auxiliary circle, just as angular momentum ℓ is twice the t.r.o.c. of area swept in the orbit itself.
The mean anomaly M is defined as an angle in the auxiliary circle, measured at the center, say to a point X on the circle, M=∠ ACX, such that the resulting sector has the same area as 𝒜̃=AOQ. At a constant radius a, we see that the uniform growth of area 𝒜̃ implies a constant rate-of-change for M
2ACX=a^2M, so a^2Ṁ =d/dt(2𝒜̃)=√(α a)
Ṁ =√(α/a^3)=ω.
The mean and eccentric anomalies can be linked by a straightforward calculation
a^2M=2ACX=2AOQ =2(ACQ-OCQ)
=a^2E-2(ae)(asin E)/2
∴ M =E-esin E .
We thus have the relation
E-esin E=ω(t-τ)
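Although, as remarked next, this relation cannot be inverted in elementary functions, it is routinely inverted numerically; a standard Newton-iteration sketch (ours, not from the paper):

```python
import numpy as np

def eccentric_anomaly(M, e, tol=1e-14, itmax=50):
    """Solve Kepler's equation M = E - e*sin(E) for E, with 0 <= e < 1."""
    E = M + e * np.sin(M)                      # reasonable starting guess
    for _ in range(itmax):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = eccentric_anomaly(1.0, 0.3)
print(E, E - 0.3 * np.sin(E))                  # second value reproduces M = 1.0
```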
where τ is a time when the orbit is at periapsis. A tantalizing equation, but unfortunately this cannot be inverted for the eccentric anomaly in terms of elementary functions of time. If this could be done, we could write the true anomaly explicitly as a function of the mean anomaly, and thus of time. As it is, the relationship is implicit, although of course ν(M;e) is formally a well-defined function
ν(M;e)=2arctan(√(1+e/1-e)tanE(M;e)/2); 0≤ e<1
where E(M;e) is the inverse[It can be seen that this inverse is a well-defined function as follows: for e≤1 the function M(E;e)=E-esin E is increasing except at isolated points. Where M(E;e) is increasing, the inverse E(M;e) is a differentiable function. Even for e=1, on any domain dM/dE≥0, with equality only at E=2kπ, k∈ℤ. This makes E(M;e) a continuous and increasing function, with dE/dM→+∞ for M∈2πℤ.] of (<ref>) for given eccentricity 0≤ e≤ 1. We can see that (<ref>) pairs E=M=kπ for k∈ℤ. If we consider E(M;e)=M+ϑ(M;e), this is ϑ=0 for M=kπ. Furthermore, ϑ(M;e) is 2π-periodic in M, and an odd function. We can thus express ϑ as a sine-Fourier series, with coefficients that are functions of e.
E(M;e)=M+ϑ(M;e)=M+∑_k≥1c_k^E(e)sin(kM)
These Fourier coefficients are c_k^E(e)=2 J_k(ke)/k, where J_k(z) are the Bessel functions of the first kind. We can see that the relationship ν(E) is just the same kind of relationship
ν(E;e)=E+∑_k≥1c_k^ν(e)sin(kE).
Finally, it is straightforward to confirm that composition of such functions is closed—these functions being the sum of the identity function and some odd 2π-periodic function. We conclude that the expression for the true anomaly in terms of the mean anomaly has the same form
ν(M;e)=M+∑_k≥1C_k(e)sin(kM)
this Fourier sine-series is known as the “equation of centre". The coefficients have a remarkable expression in terms of Bessel functions
C_k(e)=2/k{ J_k(ke)+∑_m≥1β^m[J_k-m(ke)+J_k+m(ke)]}, β=(1-√(1-e^2))/e∼ e/2+⋯
These coefficients are of order C_k(e)∼𝒪(e^k); the Taylor series for the first of these functions begin
C_1(e) =2e-1/4e^3+5/96e^5+107/4608e^7+⋯
C_2(e) =5/4e^2-11/24e^4+17/192e^6+⋯
C_3(e) =13/12e^3-43/64e^5+95/512e^7+⋯
C_4(e) =103/96e^4-451/480e^6+⋯
C_5(e) =1097/960e^5-5957/4608e^7+⋯
C_6(e) =1223/960e^6+⋯
C_7(e) =47273/32256e^7+⋯
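These coefficients are easy to cross-check numerically. The sketch below (ours, using SciPy's Bessel functions) compares the closed form for C_1(e) with a direct Fourier projection of ν(M)−M obtained by solving Kepler's equation on a grid.

```python
import numpy as np
from scipy.special import jv

e = 0.2
beta = (1 - np.sqrt(1 - e**2)) / e

def C(k, mmax=60):
    # Bessel-series coefficient of the equation of centre.
    s = jv(k, k * e) + sum(beta**m * (jv(k - m, k * e) + jv(k + m, k * e))
                           for m in range(1, mmax))
    return 2.0 / k * s

M = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
E = M.copy()
for _ in range(60):                            # vectorized Newton iteration
    E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
nu = np.mod(2 * np.arctan(np.sqrt((1 + e) / (1 - e)) * np.tan(E / 2)), 2 * np.pi)
c1_num = 2 * np.mean((nu - M) * np.sin(M))     # Fourier projection onto sin(M)
print(C(1), c1_num)                            # agree to high accuracy
```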
§.§ Auxiliary Circular Orbit
We now discuss two different proposals for what could be considered the “auxiliary circular orbit", auxiliary to a given elliptical orbit. The pedagogical idea is to be able to compare/connect the motion of a body in an elliptic Keplerian orbit to a corresponding position in a related circular orbit. This has the flavour of a “method of images": the (uniformly orbiting) position in the circular orbit is an image of the position in the elliptic orbit, the motion of which is non-uniform, and perhaps more difficult to develop an intuition for. The relationship between the image-position and the elliptical position will ultimately be the geometrical one, based on the definition of the mean anomaly. Indeed, we will take the mean anomaly, as determined in the elliptical orbit, to be the angular position of a body in a circular orbit about the same centre (the focus of the ellipse).
Before proceeding, it should be pointed out that we will think primarily in the orbital plane, or equivalently in two dimensions. In two dimensions Φ and η themselves are undefined (better to say, undefinable), but their sum β=Φ+η is the well defined longitude of periapsis. Delaunay variables in two dimensions are the conjugate pairs ℓ,β and j,M.
Certainly, the obvious option that first presents itself is to take the orbit with eccentricity e^'=0 and radius a^'=a. With j=√(α a), ℓ=√(α a(1-e^2)), ω=√(α/a^3) and ξ=-α/2a, this choice gives ℓ^'=j^'=j, ω^'=ω and ξ^'=ξ. It should be noted that in this scenario we are also taking the same gravitational parameter α^'=α, i.e. the same total mass in the `phantom' circular two body set-up as in the original. This has simplicity to its advantage, and the phantom orbit has the same period as the original.
However, since the definition of M is so linked to Kepler's Second Law and the uniform t.r.o.c. of area-sweep, it would seem desirable to consider an auxiliary circular orbit which has the same angular momentum ℓ^'=ℓ as the original, as well as having the same period, requiring ω^'=ω. With e^'=0, these conditions require not only taking a circular radius different than the semi-major axis of the elliptical orbit, but also considering a change to the gravitational parameter α^'≠α. These conditions are the following
√(α^' a^') =√(α a(1-e^2)), √(α^'/a^'^3)=√(α/a^3)
the solution to which is
a^' =a(1-e^2)^1/4, α^'=α(1-e^2)^3/4.
With equal t.r.o.c. of area as well as equal orbital duration, it follows that the orbits enclose the same area: the elliptical area π ab=π a^2√(1-e^2)=π a'^2 is equal to the area of a circle with radius a^'. We thus see the sense of this auxiliary circular orbit, as the Keplerian orbit with the same period and bounding the same area. In this sense we really can say the orbits are of the same size.
That we have to consider a modified gravitational parameter is to say that we must consider the auxiliary circular orbit as having a different total mass. But this should not be problematic to us. We do well to remember that the frequency of a Keplerian orbit depends principally on the semi-major axis and the total mass of the bodies in the orbit (we will consider Newton's constant G as universal). So to take a circular orbit with a radius different from a given semi-major axis, if we want an orbit with the same frequency, we must take a different total mass. All this is to suggest that we should consider two orbital scenarios—in which one is both double the size in linear dimension and has eight times the total mass of the other—as more similar to each other than orbits that merely have equal total mass or equal semi-major axis.
This consideration would also seem to elevate the stance of the angular momentum ℓ as somehow principal over the impulse j: we have j^'=ℓ^'=ℓ≠ j, ξ^'=ξ(1-e^2)^1/2. Indeed, we are seeing that the angular momentum is more characteristic of the orbit, as it corresponds to the actual area-sweeping rate in the physical orbit, and we preserve that in our `phantom' circular orbit, whereas j is the projected area rate in the a-radius circle that we construct around the ellipse. It is seen that this circle has far less to do with the actual physics than the above proposed auxiliary circular orbit.
§ ALTERNATE MASS PARAMETRIZATION
We have freedom to take a different parametrization of the masses and the coefficients μ_1,2,3. First note, from the mass distribution of the three masses m_J,m_S,M, that any dimensionless ratio of masses depends on two independent mass-ratios. We can form the following
β_1=m_J /M_Σ = 9.54× 10^-4, β_2=m_S/M_Σ = 2.85×10^-4
β_3=M/M_Σ = 0.99876 = 1- 1.2384×10^-3
which are constrained by β_1+β_2+β_3=1 .
Our freedom is in the mass scale m̃ we take to divide the Lagrangian (<ref>), giving the coefficients
μ_1=Mm_J/(M_Σm̃), μ_2=Mm_S/(M_Σm̃), μ_3=m_Jm_S/(M_Σm̃)
If we take m̃=λ m_J for some λ>0 (originally we had λ=M/M_Σ=β_3), then these are
μ_1=β_3/λ, μ_2=β_2β_3/(β_1λ), μ_3=β_2/λ, δ=β_2β_3/λ
We can choose λ such that δ=1-β_3=β_1+β_2=(m_J+m_S)/M_Σ, in which case μ_3=δ/β_3=δ/(1-δ). This is
λ=β_2β_3M_Σ/(m_J+m_S)=β_3m_S/(m_J+m_S)
and this gives
μ_1=(m_J+m_S)/m_S, μ_2=(m_J+m_S)/m_J
which are related by 1/μ_1+1/μ_2=1. In terms of the original parameter μ=m_S/m_J∼ 0.3, these are μ_1=1+μ^-1 and μ_2=1+μ. For the masses of Jupiter and Saturn, these are μ_1=4.340=13/3+6.452×10^-3 and μ_2=1.2994=1.3-5.796×10^-4. This new mass scale is m̃=(m_S/(m_J+m_S))(M/M_Σ)m_J=Mm_Jm_S/(M_Σ(m_J+m_S)). We will redefine ϵ≡μ_3=δ/(1-δ)=(m_J+m_S)/M. Notice that δ always satisfies 0≤δ≤1. For the planetary regime, certainly we would say m_J+m_S≤ M, so that δ≤1/2 and ϵ≤1.
It is instructive to return to the coefficient matrix of the equations (<ref>)
A =G([ -(M+m_J) m_S m_S; m_J -(M+m_S) m_J; M M -(m_J+m_S) ])
=α([ β_2-1 β_2 β_2; β_1 β_1-1 β_1; β_3 β_3 β_3-1 ])
=α([ -1+δ/μ_1 δ/μ_1 δ/μ_1; δ/μ_2 -1+δ/μ_2 δ/μ_2; 1-δ 1-δ -δ ])
§ HAMILTON'S EQUATIONS
We may now proceed to consider the asymptotic analysis of the three body problem, in the planetary case: when two bodies (the planets) orbit a third, and their masses are also much smaller than the third, but not so much that the gravitational attraction between them is negligible. Using the Delaunay variables defined in the three sectors, we have the Hamiltonian
𝐇_λ(J_i,h_i,h^z_i,M_i,η_i,Φ_i,λ⃗) =∑_i=1^3{ - μ_i^3 α^2/(2J_i^2)- λ⃗·r⃗_i}
=∑_i=1^3{ - μ_i^3 α^2/(2J_i^2)}-λ⃗·(r⃗_1+r⃗_2+r⃗_3)
where the Delaunay momenta are the relative variables (as opposed to specific)
J_i=μ_i j_i , h_i=μ_i ℓ_i , h^z_i=μ_i ℓ^z_i .
We notice that, with the exception of the impulses, the Delaunay variables enter into this Hamiltonian via the geometry, in the vector positions r⃗_i. The vector position in the i^th sector is
r⃗_i=J_i^2/αμ_i^2(1-e_icosE_i)r̂_i
The eccentricity is given by h_i^2/J_i^2=1-e_i^2, and the direction vector r̂_i is derived by the following sequence of rotations: we start with the unit vector in a fixed reference, say x̂-direction. This is then rotated ccw about the z-axis by both the true anomaly (related to the eccentric) and the argument of periapsis, ν+η. At this stage we can imagine we have an elliptic orbit with periapsis at argument η and ascending node at the positive x-axis, but no inclination. We need to rotate ccw about the x-axis by the inclination ζ_i. Note that the rotation by ζ_i is given by cosζ_i=h^z_i/h_i and sinζ_i=√(1-h^z_i^2/h_i^2). Finally, we rotate again about the z-axis, bringing the ascending node to longitude Φ_i. Thus r̂_i is
r̂_i=R^z_Φ_iR^x_ζ_iR^z_ν_i+η_i[1,0,0]^T .
Now, the components resulting from the rotation by the true anomaly are given in terms of the eccentric anomaly by
R^z_ν_i[1,0,0]^T=[(cos E_i-e_i)/(1-e_icos E_i),√(1-e_i^2)sin E_i/(1-e_icos E_i),0]^T .
So we can see the i^th position vector is
r⃗_i=J_i^2/αμ_i^2 R^z_Φ_iR^x_ζ_iR^z_η_i[cos E_i-e_i,√(1-e_i^2)sin E_i,0]^T
The components of the remaining rotation matrix are
R^z_Φ R^x_ζ R^z_η=(
[ cosΦcosη-sinΦcosζsinη -cosΦsinη-sinΦcosζcosη sinΦsinζ; sinΦcosη+cosΦcosζsinη cosΦcosζcosη-sinΦsinη -cosΦsinζ; sinζsinη sinζcosη cosζ ])
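A short sketch (ours) assembling the position vector from these elements, with the in-plane coordinates written directly in the eccentric anomaly:

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def position(J, mu_i, alpha, e, E, eta, zeta, Phi):
    a = J**2 / (alpha * mu_i**2)                        # semi-major axis
    u = np.array([np.cos(E) - e,                        # in-plane coordinates
                  np.sqrt(1 - e**2) * np.sin(E), 0.0])  # relative to the focus
    return a * Rz(Phi) @ Rx(zeta) @ Rz(eta) @ u

r = position(2*np.pi, 1.0, 4*np.pi**2, 0.3, 1.0, 0.5, 0.2, 1.2)
print(np.linalg.norm(r), 1 - 0.3*np.cos(1.0))           # both give a(1 - e cos E)
```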
From the Hamiltonian (<ref>), Hamilton's equation are
Ṁ_i =∂𝐇_λ/∂ J_i=μ_i^3α^2 J_i^-3+∂𝐑/∂ J_i ,   J̇_i =μ_i j̇_i=-∂𝐑/∂ M_i
η̇_i =∂𝐑/∂ h_i ,   ḣ_i =μ_iℓ̇_i=-∂𝐑/∂η_i
Φ̇_i =∂𝐑/∂ h^z_i ,   ḣ^z_i =μ_iℓ̇^z_i=-∂𝐑/∂Φ_i
0 =-∇_λ⃗𝐇_λ=S⃗
for i=1,2,3, where 𝐑=-λ⃗·S⃗ is the function by which the Hamiltonian is perturbed and S⃗=r⃗_1+r⃗_2+r⃗_3 is the constraint. We know that when the equations are satisfied subject to the constraint, λ⃗ takes the values (<ref>) as determined in the first section, and thus also determined by the geometry (<ref>,<ref>). Now, we must be careful with the partial derivatives of the perturbation. At first blush, we should regard both λ⃗ and S⃗ as functions of the coordinates. If χ is one of the canonical variables, then the derivative of the perturbation with respect to χ is
-∂𝐑/∂χ=(∂λ⃗/∂χ)·S⃗ + λ⃗·(∂S⃗/∂χ)
In particular, we compute the partial derivative ∂_χS⃗ without regard to the constraint, but from the functional form of the constraint, S⃗=r⃗_1+r⃗_2+r⃗_3. If χ=χ_i is a variable of the i^th sector, then this is ∂_χ_ir⃗_i. We might similarly determine the derivative ∂_χ_iλ⃗ by combining (<ref>,<ref>,<ref>), but at this point we may evaluate (<ref>) subject to the constraint S⃗=0, so that the first term vanishes. Thus
∂𝐑/∂χ_i = -λ⃗·∂r⃗_i/∂χ_i.
For some of the derivatives we have the following (suppressing indices i), using c_ζ=cosζ and s_ζ=sinζ:
dM =(1-ecos E) dE
∂_M =(1-ecos E)^-1∂_E
∂_J = ∂_J|_e + (∂ e/∂ J) ∂_e = ∂_J|_e + (1-e^2)/(eJ) ∂_e
∂_h = (∂ e/∂ h) ∂_e + (∂ c_ζ/∂ h) ∂_c_ζ + (∂ s_ζ/∂ h) ∂_s_ζ = -(1-e^2)/(eh) ∂_e - (c_ζ/h) ∂_c_ζ + (c_ζ^2/(s_ζ h)) ∂_s_ζ
∂_h^z = (∂ c_ζ/∂ h^z) ∂_c_ζ + (∂ s_ζ/∂ h^z) ∂_s_ζ = (c_ζ/h^z) ∂_c_ζ - (c_ζ^2/(s_ζ h^z)) ∂_s_ζ.
These will be instrumental to formulating the KAM theory for this perturbational problem.
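As a sanity check, the first of these chain-rule coefficients can be verified by finite differences (a throwaway sketch of ours, with arbitrary values):

```python
import numpy as np

J, h = 1.7, 1.2
e = lambda J: np.sqrt(1 - h**2 / J**2)     # eccentricity at fixed h

dJ = 1e-6
fd = (e(J + dJ) - e(J - dJ)) / (2 * dJ)    # numerical de/dJ
print(fd, (1 - e(J)**2) / (e(J) * J))      # matches (1 - e^2)/(eJ)
```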
§ SETUP FOR KAM THEORY
KAM Theory
is a perturbative approach for nearly integrable Hamiltonians, which are a perturbation away from depending on only the momenta J̅∈ℝ^n
𝐇(J̅,θ̅;δ)=𝐇_0(J̅)+𝐇̂(J̅,θ̅;δ)∼𝐇_0(J̅)+∑_k≥1δ^k𝐇_k(J̅,θ̅).
One seeks a nearly-identical canonical transformation 𝒞:(J̅,θ̅)↦(J̃,θ̃), given by a generating function
Ψ(J̃,θ̅;δ) =J̃·θ̅ + Ψ̂(J̃,θ̅;δ)
∼J̃·θ̅ + ∑_k≥1δ^kΨ_k(J̃,θ̅),
which gives
J̅=∇_θ̅Ψ =J̃+∇_θ̅Ψ̂(J̃,θ̅;δ)
∼J̃ + ∑_k≥1δ^k ∇_θ̅Ψ_k
θ̃=∇_J̃Ψ =θ̅+∇_J̃Ψ̂(J̃,θ̅;δ)
∼θ̅ + ∑_k≥1δ^k ∇_J̃Ψ_k.
The goal of this transformation is that the Hamiltonian, in terms of the new coordinates, depends only on the new momenta
𝐇∘𝒞^-1 (J̃,θ̃;δ)=𝐇̃(J̃;δ)
Practically speaking, if we only do this to so many terms, say truncating 𝐇̂=∑_k=1^Nδ^k𝐇_k and Ψ̂=∑_k=1^Nδ^kΨ_k, then the transformed Hamiltonian only depends on the new coordinates θ̃ at order N+1 in δ
𝐇̃(J̃,θ̃;δ)=𝐇̃_N(J̃;δ)+𝒪(δ^N+1)(J̃,θ̃;δ).
We will need to decompose the perturbations into the average over angle-variables θ̅
⟨𝐇⟩(J̅;δ) =(2π)^-n∫𝐇(J̅,θ̅;δ) d^nθ̅
⟨𝐇⟩_k(J̅) =(2π)^-n∫𝐇_k(J̅,θ̅) d^nθ̅
and the remainders 𝐅=𝐇-⟨𝐇⟩, 𝐅_k=𝐇_k-⟨𝐇⟩_k.
Starting with just one order in δ, N=1, writing Ω̅_0(J̅)=∇_J̅𝐇_0 for the frequency functions of the unperturbed Hamiltonian, we find
𝐇(J̅,θ̅;δ) ∼𝐇_0(J̃+∇_θ̅Ψ̂)+δ 𝐇_1(J̃+∇_θ̅Ψ̂,θ̅)+⋯
∼𝐇_0(J̃+δ ∇_θ̅Ψ_1+⋯)
+δ ⟨𝐇⟩_1(J̃+δ ∇_θ̅Ψ_1+⋯)
+δ 𝐅_1(J̃+δ∇_θ̅Ψ_1+⋯,θ̅)+⋯
∼𝐇_0(J̃)+δ Ω̅_0(J̃)·∇_θ̅Ψ_1+δ ⟨𝐇⟩_1(J̃)+δ 𝐅_1(J̃,θ̅)+𝒪(δ^2)
Thus we have
𝐇̃_1(J̃;δ)=𝐇_0(J̃)+δ ⟨𝐇⟩_1(J̃).
That is, the functional form of the transformed Hamiltonian, as a function of the new momenta, is the sum of the unperturbed Hamiltonian and the average of the perturbation over all angle variables, to leading order. This is achieved by matching the remaining terms at first order
Ω̅_0(J̃)·∇_θ̅Ψ_1(J̃,θ̅)+𝐅_1(J̃,θ̅)=0
Expanding Ψ_1, 𝐅_1 in Fourier multi-series in θ̅, with coefficients Ψ_1^k̅(J̃), F_1^k̅(J̃), (<ref>) becomes (suppressing dependence on J̃)
∑_k̅∈ℤ^n\{0}{(iΩ̅_0·k̅ Ψ_1^k̅+F_1^k̅)e^ik̅·θ̅}=0
So the solution is
Ψ_1(J̃,θ̅)=i∑_k̅∈𝒮_1F_1^k̅(J̃)/k̅·Ω̅_0(J̃) e^ik̅·θ̅
where 𝒮_1⊂ℤ^n is the set of multi-indices for which 𝐅_1 has non-zero Fourier coefficient. Here we finally see a problem: that if ever the frequency vector is orthogonal to one of these integer multi-indices k̅·Ω̅_0=0, then the solution (<ref>) breaks down. This is the problem of resonance, and it requires a modification called resonant perturbation theory. We have a special case of this problem: for our Hamiltonian (<ref>), the unperturbed terms
-μ_1^3α^2/(2J_1^2)-μ_2^3α^2/(2J_2^2)
do not depend on the third impulse J_3 nor on any of the angular momenta, which enter the Hamiltonian only through the perturbing function 𝐑. This results in the corresponding components of the frequency vector being identically zero. In other words, our Hamiltonian is degenerate, and the remedy is called degenerate perturbation theory. What we need to do is not only decompose the perturbation into its average-over-all-angles and a remainder, but further decompose that remainder into its average over the mean anomalies—the angles whose momenta the 0th-order Hamiltonian depends on (it will be seen that M_3 can be included, and the integrable term for the third sector can be included with (<ref>))—and the remainder from that. If 𝐑=-λ⃗·S⃗=⟨𝐑⟩+𝐅
⟨𝐑⟩(J_i,h_i,h^z_i)=(2π)^-9∫𝐑 d^3M d^3η d^3Φ
𝐅(J_i,h_i,h^z_i,M_i,η_i,Φ_i)=𝐑-⟨𝐑⟩.
Then we further decompose 𝐅= 𝐅_M+𝐅̃
𝐅_M(J_i,h_i,h^z_i,η_i,Φ_i)=(2π)^-3∫𝐅 d^3M
𝐅̃(J_i,h_i,h^z_i,M_i,η_i,Φ_i)=𝐅-𝐅_M.
So 𝐑=⟨𝐑⟩+𝐅_M+𝐅̃.
Then we seek a canonical transformation to a new Hamiltonian that doesn't depend on the mean anomalies. This will be elaborated in future work.
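To make the first-order step concrete, the toy sketch below (ours; a nonresonant two-torus with assumed frequencies Ω̄_0=(1,√2)) solves the homological equation Ω̄_0·∇_θ̅Ψ_1+𝐅_1=0 in Fourier space, exactly as in (<ref>):

```python
import numpy as np

n = 64
Om = np.array([1.0, np.sqrt(2)])                 # nonresonant frequency vector
t = 2 * np.pi * np.arange(n) / n
T1, T2 = np.meshgrid(t, t, indexing="ij")
F = np.cos(T1 + T2) + 0.5 * np.sin(2*T1 - T2)    # zero-mean perturbation F_1

Fk = np.fft.fft2(F)
k = np.fft.fftfreq(n, d=1.0/n)                   # integer wavenumbers
K1, K2 = np.meshgrid(k, k, indexing="ij")
dot = K1 * Om[0] + K2 * Om[1]                    # kbar . Omega
safe = np.where(dot == 0, 1.0, dot)
Psik = np.where(dot != 0, 1j * Fk / safe, 0.0)   # Psi_1^k = i F_1^k/(k.Omega)
Psi = np.real(np.fft.ifft2(Psik))

# Verify Omega . grad Psi_1 = -F_1 (spectral differentiation).
d1 = np.real(np.fft.ifft2(1j * K1 * np.fft.fft2(Psi)))
d2 = np.real(np.fft.ifft2(1j * K2 * np.fft.fft2(Psi)))
print(np.max(np.abs(Om[0]*d1 + Om[1]*d2 + F)))   # ~1e-12
```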
§ CONCLUSIONS
In this work, we have presented a self-contained analysis of the gravitational three-body problem in the planetary scenario. This is done in heliocentric coordinates, with Lagrange multipliers used to handle the form of the Lagrangian elegantly. First working in two dimensions, we develop the equations of motion as a first-order dynamical system in the longitudes θ_i, angular momenta ℓ_i and eccentricity vectors e⃗_i. Auxiliary vectors f⃗_i=r̂_i+e⃗_i greatly simplify the equations algebraically, especially in terms of the components f_i^r,f_i^θ.
We then give the definition and construction of the Delaunay action-angle variables for the two body problem. We present a novel conceptualization of the mean anomaly of an eccentric orbit as the true anomaly of a `phantom' or `image' body, also orbiting the central mass, but in a circular orbit with radius reduced from the semi-major axis, a^'=a(1-e^2)^1/4, as well as reduced gravitational parameter α^'=α(1-e^2)^3/4. These trajectories bound equal areas within the orbital plane, and the orbits have the same orbital frequency and angular momentum, though not the same energy. We then develop the Hamiltonian for the problem, and investigate the geometry of the orbital positions r⃗_i in terms of the elements, in order to express the functional form of the Hamiltonian perturbation 𝐑=-λ⃗·(r⃗_1+r⃗_2+r⃗_3). Hamilton's equations are derived (<ref>-<ref>), and derivatives with respect to Delaunay variables are written down in terms of derivatives with respect to the other elements E,e, and c_ζ,s_ζ. Finally, we considered the Hamiltonian perturbation theory of the problem, and learned that we need to proceed via a careful approach, seeing that our Hamiltonian is degenerate—not varying with all of the Delaunay momenta when δ=0.
There is much room for further work. The detailed KAM theory analysis for this specific problem can now begin. Of particular interest is the Great Inequality of Jupiter and Saturn: to identify the term or terms that correspond to this perturbation that is unexpectedly large in both period and amplitude. The equations in two dimensions, in terms of the auxiliary vectors, are interesting in their own right, and might be amenable to a multiple-scales asymptotic approach, as may be the Hamiltonian equations themselves, KAM theory aside. Furthermore, the general-relativistic corrections to this work are of great interest. These would presumably be approached via the post-Newtonian formalism.
99
principia Newton, Isaac. Philosophiae Naturalis Principia Mathematica. 1687, London.
laplace Laplace, Pierre-Simon. Théorie de Jupiter et de Saturne. Memoire de l’Academie des Sciences de Paris 1788, 33-160.
wilson Wilson, Curtis. The Great Inequality of Jupiter and Saturn: From Kepler to Laplace. 1985, Springer-Verlag.
hill1 Hill, G.W. Notes on the Theories of Jupiter and Saturn. The Analyst 1881 8(2), 33-40.
hill2 Hill, G.W. On the Extension of Delaunay’s Method in the Lunar Theory to the General Problem of Planetary Motion. Trans. Amer. Math. Soc. 1900 1(2), 205-242.
quarles Musielak, Z E and Quarles, B. The Three-Body Problem. Rep. Prog. Phys. 2014 77 065901.
kam Arnold, V.I. Proof of a Theorem by A.N. Kolmogorov on the invariance of quasi-periodic motions under small perturbations of the Hamiltonian. Russ. Math. Survey 1963 18, 13-40.
poincare Poincaré, Henri. Méthodes Nouvelles de la Mécanique Céleste, vol 1-3. 1892-99, Paris: Gauthier-Villars.
chaos Feldman, David. Chaos and Dynamical Systems. 2019, Princeton Univ. Press. ISBN: 9780691161525
brouke_lass Broucke, R. and Lass, H. A Note on Relative Motion in the General Three-Body Problem. Celestial Mechanics 1973 8(1), 5-10.
news1 Hunt, Katie and Strickland, Ashley. “Jupiter and Saturn's 'great conjunction' captured in stunning images.” CTVNews.ca, Dec. 22, 2020. <https://www.ctvnews.ca/sci-tech/jupiter-and-saturn-s-great-conjunction-captured-in-stunning-images-1.5241665>
news2 Byrd, Deborah and McClure, Bruce. “All you need to know: 2020’s great conjunction of Jupiter and Saturn.” EarthSky.org, Dec. 21, 2020. <https://earthsky.org/astronomy-essentials/great-jupiter-saturn-conjunction-dec-21-2020>
celletti Celletti, Alessandra. Perturbation Theory in Celestial Mechanics. 2007, obtained from <https://web.ma.utexas.edu/mp_arc/c/07/07-303.pdf>
morbidelli Morbidelli, Alessandro. Modern Celestial Mechanics. 2011, obtained from <https://www-n.oca.eu/morby/celmech.pdf>
elements Seidelmann, K.P., ed. Explanatory Supplement to the Astronomical Almanac. 1992 University Science Books, Mill Valley, California.
http://arxiv.org/abs/2307.04952v1 [cs.CV]
Compact Twice Fusion Network for Edge Detection
[1] Yachuan Li, [email protected]
[1] Zongmin Li, [email protected]
[2,5] Xavier Soria P., [email protected]
[1] Chaozhi Yang, [email protected]
[1] Qian Xiao, [email protected]
[1] Yun Bai, [email protected]
[3] Hua Li, [email protected]
[4] Xiangdong Wang, [email protected]
[1] College of Computer Science and Technology, China University of Petroleum (East China), Changjiang West Road, Qingdao, 266500, Shandong, China
[2] Faculty of Educational Science, Humanities, and Technology, National University of Chimborazo, Av. Eloy Alfaro, Riobamba, 060110, Chimborazo, Ecuador
[3] Institute of Computing Technology, Chinese Academy of Sciences, South Road Zhongguancun, Beijing, 100190, China
[4] Physical Education Institute, Jimei University, Yinjiang Rd, Xiamen, 361021, Fujian, China
[5] CIDIS, ESPOL Polytechnic University, Campus Gustavo Galindo, Guayaquil, 090112, Guayas, Ecuador
The significance of multi-scale features has been gradually recognized by the edge detection community. However, the fusion of multi-scale features increases the complexity of the model, which is not friendly to practical application. In this work, we propose a Compact Twice Fusion Network (CTFN) to fully integrate multi-scale features while maintaining the compactness of the model. CTFN includes two lightweight multi-scale feature fusion modules: a Semantic Enhancement Module (SEM) that can utilize the semantic information contained in coarse-scale features to guide the learning of fine-scale features, and a Pseudo Pixel-level Weighting (PPW) module that aggregates the complementary merits of multi-scale features by assigning weights to all features.
Notwithstanding all this, the interference of texture noise makes the correct classification of some pixels still a challenge. For these hard samples, we propose a novel loss function, coined Dynamic Focal Loss, which reshapes the standard cross-entropy loss and dynamically adjusts the weights to correct the distribution of hard samples. We evaluate our method on three datasets, i.e., BSDS500, NYUDv2, and BIPEDv2. Compared with state-of-the-art methods, CTFN achieves competitive accuracy with fewer parameters and lower computational cost.
Apart from the backbone, CTFN requires only 0.1M additional parameters, which reduces its computational cost to just 60% of that of other state-of-the-art methods.
The codes are available at https://github.com/Li-yachuan/CTFN-pytorch-masterhttps://github.com/Li-yachuan/CTFN-pytorch-master.
§ INTRODUCTION
The purpose of edge detection is to extract object boundaries and salient edges from natural images, preserving the key information while ignoring insignificant details. It is therefore considered a fundamental task in computer vision and plays an important role in higher-level tasks such as saliency detection <cit.>, semantic segmentation <cit.>, and depth map prediction <cit.>.
Edge detection, which divides pixels into edge and non-edge, is a sub-task of semantic segmentation, so pixel classification is the essential question in edge detection. Pioneering works accomplished this with local features such as brightness, gradient, and color. The lack of global information limited the performance of edge detection methods until the advent of Holistically-nested Edge Detection (HED) <cit.>. As the pioneer of contemporary edge detection, HED first introduced the deep supervision mechanism to edge detection and learns multi-scale predictions holistically. On this basis, a series of excellent methods have been produced <cit.>.
Although the performance of edge detection has been significantly improved, these methods suffer from two major issues. First, the number of model parameters increases sharply: with the further exploitation of multi-scale features, the model parameters grow greatly, which fails to meet the demands of downstream tasks. In most situations, spending a mass of space and computing resources on a small accuracy improvement does not make sense. Second, hard samples receive insufficient attention. Hard samples refer to pixels whose classification probability differs significantly from the ground truth, that is, pixels prone to misclassification. As shown in Fig. <ref>, the confidence of the edges in the red box decreases due to texture interference, while the textures in the green box are mistaken for edges. These are two typical examples of hard samples. Hard samples determine the ceiling of detection accuracy, so extra attention should be paid to them.
Aiming to fully exploit multi-scale feature fusion while avoiding sharply increased parameters, we introduce a Compact Twice Fusion Network (CTFN) for edge detection, in which higher-quality edges are obtained by fusing the multi-scale features twice. A lightweight Semantic Enhancement Module (SEM) is introduced in the first feature fusion, where high-level semantic information is used to enlarge the receptive field of fine-scale features, thereby improving the discrimination of the fine-scale branches. However, SEM is a cascade structure based on FPN <cit.>, in which high-level semantic information gradually attenuates during transmission, so a second feature fusion is required to aggregate information from all scales. In the second feature fusion, we introduce a Pseudo Pixel-level Weighting (PPW) module, which sets the weights of multi-scale features according to their context information and further reduces module complexity by decomposing the weights into spatial weights and channel weights.
To further enhance the attention paid to hard samples, we propose Dynamic Focal Loss (DFL). DFL reshapes the standard cross-entropy loss and dynamically adjusts the weight of the loss assigned to hard samples. Increasing the weights of hard samples is an effective way to optimize them, but effectively identifying hard samples becomes a new problem. Focal Loss <cit.> discriminates hard samples by the gap between the model output and the ground truth. However, due to the existence of randomly initialized modules, the output of the model is chaotic at the initial stage of training, so it is unreasonable to identify hard samples by the early output. Therefore, DFL distrusts the model output at first and dynamically increases the confidence margin to reduce the adverse effects caused by the early chaos of the model.
The main contributions of the paper are summarized below:
* We systematically analyze existing deep-learning based edge detection methods and identify two pressing problems.
* We propose a Compact Twice Fusion Network (CTFN) that fully fuses multi-scale features while maintaining model compactness. CTFN utilizes only 0.1M additional parameters beyond the backbone, resulting in a 60% reduction in computational cost compared to state-of-the-art methods.
* For hard samples in edge detection, we introduce Focal Loss for the first time and propose a Dynamic Focal Loss to solve the problem of chaotic output in the early training stage of Focal Loss.
* Extensive experiments are conducted on BSDS500, NYUDv2, and BIPEDv2 datasets to demonstrate the effectiveness and robustness of our method.
§ RELATED WORK
The origin of edge detection can be traced back to the last century. Early pioneering methods <cit.> mainly focus on local cues, which prevents them from distinguishing between texture and edge.
In recent years, deep learning based methods have come to dominate, and edge detection has entered the era of deep learning. We review the recent development of deep learning-based edge detection methods in terms of model structure and loss function.
§.§ Model structure
Holistically-nested Edge Detection (HED) <cit.> introduced the deep supervision mechanism to edge detection and learns multi-scale predictions holistically. Inspired by the great success of HED, most recent methods <cit.> are committed to end-to-end learning and the enrichment of global features. Multi-scale information in this kind of method is almost independent, and the final edge maps are obtained only by taking a weighted sum of the multi-scale information, as shown in Fig. <ref>. Such methods can be called HED-based methods. It is worth mentioning that although BDCN <cit.> appears to perform two feature fusions, the gradient of the first feature fusion is truncated, so we still regard it as a HED-based method. The success of HED-based methods is undoubted, but the lack of semantic information in fine-scale features, caused by the independence of the multi-scale features, remains an open problem worthy of study.
For the aforementioned problem, Wang et al. <cit.> introduced a new fusion method, in which semantic information from the coarse scale is utilized to facilitate fine-scale feature learning. This kind of method has been further developed in recent years <cit.>. Since the structure of these methods is similar to U-Net <cit.>, they can be termed UNet-based methods, as shown in Fig. <ref>. UNet-based methods pay more attention to fine-scale features, while coarse-scale semantic information gradually decays during feature fusion.
A natural idea is to merge the two structures to fuse the features further. These merged structures are what we refer to as Multiple Feature Fusion (MFF) methods, as illustrated in Fig. <ref>. FCL <cit.> fuses multi-scale features twice: in the first fusion, an LSTM is introduced to address the attenuation of high-level semantic information during downward fusion; in the second fusion, FCL designs a pixel-level weighting module that assigns a weight to each feature to refine the fusion process. BAN <cit.> repeatedly fuses multi-scale features in both the bottom-up and top-down directions to achieve fully fused multi-scale features.
MFF methods significantly improve edge detection accuracy. However, the feature fusion dramatically increases model complexity, making these methods less suitable for downstream tasks and real-time computing.
Inspired by the multiple feature fusion methods <cit.>, we propose CTFN, which retains the structure of multiple feature fusion while removing unnecessary high-cost modules.
CTFN not only ensures the full fusion of multi-scale features but also reduces the number of parameters and the computational cost associated with multiple feature fusion.
§.§ Loss function
Weighted Cross-Entropy was employed to supervise the learning of the network in HED <cit.>. Building on HED, RCF <cit.> filters out samples with disputed ground truth, leading to the most popular loss function.
Since edge pixels are relatively scarce compared with non-edge pixels, stabilizing the backpropagation gradient requires significantly larger weights for edge pixels.
This causes the problem of blurry edges, i.e., a wide transition region between edge and non-edge, as shown in Fig. <ref>(b). Therefore, subsequent works seek to obtain crisp edges by optimizing the loss function.
Deng et al. <cit.> believe that Weighted Cross-Entropy prevents the generation of crisp edges and replace it with the dice coefficient and cross-entropy. In another work <cit.>, they further consider the structural differences between the output and the ground truth through the structural similarity index (SSIM) <cit.>. Huan et al. <cit.> directly divided the image into three parts, namely edge pixels, confusing pixels, and non-edge pixels, and optimized them separately.
These loss functions give a further boost to edge detection, but they all ignore the problem of hard samples mentioned above. To this end, we propose a novel Dynamic Focal Loss.
§ METHOD
Our innovation can be divided into two parts: network architecture and loss function. In this section, we describe each in detail.
§.§ Compact Twice Fusion Network
The overall architecture of our proposed CTFN is shown in Fig. <ref>; it contains three main stages: the backbone, the first feature fusion, and the second feature fusion. Multi-scale features are obtained through the backbone, and edges are then generated through the two feature fusions.
§.§.§ Backbone
To be fair to existing methods <cit.>, the backbone is based on VGG16 <cit.> as well. After removing the last pooling layer and the fully connected layers, the 13 convolutional layers of VGG16 are divided into 5 blocks by the remaining pooling layers. The dilations in the 5th block are set to 2 to enlarge the receptive fields. The backbone is used to generate multi-scale features, which is a prerequisite for the two feature fusions; it is also the module with the most parameters.
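To make this construction concrete, the following is a minimal PyTorch sketch of the described backbone surgery; the exact stage splitting and the padding adjustment that keeps the 3×3 dilated convolutions size-preserving are our assumptions, not the authors' released code.

```python
import torch.nn as nn
import torchvision

def build_ctfn_backbone():
    """VGG16 with the last pooling and the FC layers removed; the 13 convs
    are grouped into 5 blocks by the remaining pooling layers, and the
    convolutions of the 5th block use dilation 2."""
    features = torchvision.models.vgg16(weights=None).features  # load ImageNet weights in practice
    blocks, current = [], []
    for layer in features:
        if isinstance(layer, nn.MaxPool2d):
            blocks.append(nn.Sequential(*current))
            current = [layer]            # each pooling layer opens the next block
        else:
            current.append(layer)
    # `current` now holds only the last pooling layer, which is dropped
    for layer in blocks[-1]:             # 5th block: dilate its 3x3 convs
        if isinstance(layer, nn.Conv2d):
            layer.dilation, layer.padding = (2, 2), (2, 2)
    return nn.ModuleList(blocks)         # one side output per block
```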
§.§.§ First Feature Fusion
In edge detection, identifying the differences between edges and textures is critical but challenging due to their similar appearance. Often, the differentiating factor is semantic information, which is closely tied to the receptive field of the feature: the larger the receptive field, the more semantic information can be captured, thereby enhancing the preservation of edges and the suppression of textures. Consequently, increasing the receptive field is key to improving feature quality, particularly for the fine-scale branches, which have smaller receptive fields than the other scales.
To address this issue, we introduce a Semantic Enhancement Module (SEM) into the first feature fusion stage. The SEM specifically targets the fine branch features and works to increase their receptive fields, which improves their judgment ability. By enhancing the receptive field of the fine branch features, we can better capture relevant semantic information while minimizing the risk of introducing noise. This strategy allows us to achieve higher accuracy in edge detection tasks and produce sharper, more visually appealing results.
The detailed process of the first feature fusion is described in Algorithm <ref>.
In the first step, we aim to balance the multi-scale feature channels and reduce computational costs. To achieve this, we use convolution to decrease the number of channels from the original value to 21 while also maintaining a consistent representation of features across scales. In particular, we set the kernel size of convolution to 1 to effectively reduce parameters and ensure computational efficiency. This strategy has been adopted by other works as well, such as <cit.>.
After reducing the number of channels, we use the Group Normalization (GN) Layer <cit.> to normalize the features. This step is important because it prevents vanishing gradients during subsequent feature fusion, which can substantially hinder network training and result in poor convergence.
Finally, the normalized features are combined through matrix addition from top to bottom. This approach leverages the benefits of the different features at multiple scale levels and results in a more robust representation of the input data. By performing the feature fusion in this way, we can increase the network's accuracy and reduce noise and other undesirable artifacts that may compromise the integrity of the outputs. Overall, this series of steps culminates in a powerful deep learning framework that can effectively tackle a wide range of edge detection tasks.
In fact, the effectiveness of using coarse-scale branches to guide the learning of fine-scale branches has been verified by previous works <cit.>, but those works introduce redundant parameters and computations in the process of feature fusion, whereas our proposed SEM retains only the structure of this feature fusion. SEM is an extremely lightweight module that requires minimal additional learnable parameters, typically only a few convolution operations with a kernel size of 1. As a result of this efficient design, SEM not only provides a significant boost to model performance but also yields a compact architecture.
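As a concrete illustration, a minimal PyTorch sketch of SEM is given below; the bilinear upsampling between stages, the stage channel widths, and the number of GN groups are our assumptions (the paper only fixes the 21-channel squeeze, the GN layer, and the top-down addition).

```python
import torch.nn as nn
import torch.nn.functional as F

class SEM(nn.Module):
    """Semantic Enhancement Module: squeeze every stage to 21 channels with
    1x1 convs, normalize with GroupNorm, then add coarse-scale features
    top-down into the fine-scale ones."""

    def __init__(self, in_channels=(64, 128, 256, 512, 512), mid=21, groups=3):
        super().__init__()
        self.squeeze = nn.ModuleList([nn.Conv2d(c, mid, 1) for c in in_channels])
        self.norm = nn.ModuleList([nn.GroupNorm(groups, mid) for _ in in_channels])

    def forward(self, feats):                      # feats: fine -> coarse
        feats = [n(s(f)) for f, s, n in zip(feats, self.squeeze, self.norm)]
        for i in range(len(feats) - 2, -1, -1):    # top-down accumulation
            coarse = F.interpolate(feats[i + 1], size=feats[i].shape[-2:],
                                   mode="bilinear", align_corners=False)
            feats[i] = feats[i] + coarse
        return feats
```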
§.§.§ Second Feature Fusion
In earlier works <cit.>, the weighted sum of multi-scale edges is directly used to compute the final edge. It is inaccurate that all pixels in the same channel share the same weight and are equally important in the fusion. Recent works <cit.> noticed this problem and attempted to assign a different weight to each pixel; however, the additional operations bring increased parameters and decreased inference speed.
In the second feature fusion stage, we propose a Pseudo Pixel-level Weighting (PPW) module that decomposes the weight of each pixel into spatial and channel components, as depicted in Fig. <ref>. To minimize the computation cost, we use a 1×1 convolution to directly compute the product of the channel weights and the multi-scale edges. The spatial weights are computed by a spatial weighting module consisting of three 3×3 convolution layers and a softmax activation, similar to CoFusion <cit.>. Since the spatial weighting module accounts only for spatial weights, fewer channels are required; experimental results show that the spatial weighting of PPW achieves accuracy comparable to CoFusion with only 25% of the channels.
The final prediction P_jk at pixel (j,k) can be calculated from the PPW module input X as follows:

P_jk = PPW(X)_jk = ∑_i=1^L (X_ijk × W_ijk) = ∑_i=1^L (X_ijk × Wc_i × Ws_jk)

Here, L represents the number of multi-scale edges, i indexes them, and Wc and Ws represent the channel weights and spatial weights, respectively. X and P denote the input and output of the PPW module.
The PPW module bears some resemblance to CBAM <cit.>, but there are two primary differences.
First, the two modules serve different purposes: the primary objective of PPW is to weight the fusion of multi-scale features so as to obtain a higher-quality single-channel edge map, whereas CBAM aims to selectively emphasize informative features through its channel attention mechanism.
Second, the implementations differ: in PPW, a 1×1 convolution is used directly to perform the channel weighting, and the channels are reduced as early as possible to decrease module complexity.
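The following is a minimal PyTorch sketch of PPW under these descriptions; the grouped 1×1 convolution for the channel part and the hidden width of the spatial branch (chosen to reflect "25% of the CoFusion channels") are our assumptions.

```python
import torch
import torch.nn as nn

class PPW(nn.Module):
    """Pseudo Pixel-level Weighting: the fusion weight W_ijk is factorized
    into a channel part Wc (a grouped 1x1 conv, one scalar per edge map)
    and a spatial part Ws (three 3x3 convs + softmax over the edge maps)."""

    def __init__(self, num_edges=5, hidden=16):
        super().__init__()
        self.channel = nn.Conv2d(num_edges, num_edges, 1,
                                 groups=num_edges, bias=False)   # Wc_i
        self.spatial = nn.Sequential(
            nn.Conv2d(num_edges, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, num_edges, 3, padding=1))

    def forward(self, x):                       # x: (B, L, H, W) edge maps
        ws = torch.softmax(self.spatial(x), dim=1)              # Ws_jk
        return (self.channel(x) * ws).sum(dim=1, keepdim=True)  # (B, 1, H, W)
```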
§.§ Dynamic focal loss
Edge and non-edge pixels are extremely imbalanced in images, so the Weighted Cross-Entropy loss (WCE) is widely employed in edge detection, formulated as

WCE(p_i, y_i) =
  -α log(p_i)        if y_i = 1
  -β log(1 - p_i)    if y_i = 0
  0                  otherwise,

where p denotes the final edge prediction and y represents the ground truth, with α = |Y_-| / |Y| and β = λ · |Y_+| / |Y|. |Y_+| and |Y_-| denote the numbers of edge and non-edge pixels, respectively, and |Y| = |Y_+| + |Y_-|. Due to the inconsistency of annotations among different annotators, a threshold γ is introduced for the loss computation: pixel i is regarded as an edge (y_i = 1) if its annotation value exceeds γ, as non-edge if the annotation value is 0, and is ignored otherwise. λ balances the weights of edge and non-edge samples.
WCE successfully addresses the imbalance between positive and negative samples in edge detection via balance coefficients, while ignoring the problem of hard samples. Hard samples are pixels that are easily misclassified, and they determine the quality of the edge map.
Hard samples are common in dense prediction tasks. In view of this problem, Lin et al. proposed Focal Loss (FL) <cit.>:

FL(p_i, y_i) =
  -α ω log(p_i)        if y_i = 1
  -β ω log(1 - p_i)    otherwise,

where

ω =
  (1 - p_i)^γ    if y_i = 1
  p_i^γ          otherwise.

Compared with WCE, FL contains a new weighting factor ω, which utilizes the difference between the prediction and the ground truth to weight each sample, with the weight adjusted flexibly through a hyper-parameter γ. The model is thereby guided to pay more attention to hard samples. The effectiveness of FL has been widely demonstrated <cit.>.
However, FL suffers from confusion in the early stage of model training. Because some modules of the model are randomly initialized, the difference between the prediction and the ground truth at the beginning of training cannot truly reflect the distribution of hard samples; at this point, using FL to focus on fake hard samples is detrimental to the learning of real hard samples. We therefore propose Dynamic Focal Loss (DFL) to remedy this situation. DFL is formulated as

DFL(p_i, y_i) =
  -α ω' log(p_i)        if y_i = 1
  -β ω' log(1 - p_i)    if y_i = 0
  0                     otherwise,

where

ω' =
  (μ + ϵ (1 - p_i)^γ) / (ϵ + μ)    if y_i = 1
  (μ + ϵ p_i^γ) / (ϵ + μ)          otherwise.

Here ϵ represents the current epoch and starts from 0, so ω' = 1 (i.e., WCE) at the first epoch and approaches the focal weight as training proceeds. The effects of the hyper-parameters γ and μ are discussed in the ablation study.
Our contribution is based on the fact that the output of the model gradually comes to reflect the correct distribution of the hard samples as training progresses; hence, our confidence margin on the prediction should also rise gradually.
We represent this process by the saturating factor ϵ/(ϵ+μ), which blends the constant WCE weight with the focal weight and grows from 0 towards 1.
The experimental results show that this simple dynamic confidence setting better identifies hard samples and guides the model to converge in a more correct direction.
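A minimal PyTorch sketch of DFL is given below, assuming labels already binarized by the threshold γ above (edge = 1, non-edge = 0, disputed pixels marked otherwise and ignored); the default hyper-parameter values are illustrative only, and the class-balance weights follow the WCE definitions above.

```python
import torch

def dynamic_focal_loss(pred, label, epoch, gamma=2.0, mu=4.0, lam=1.1):
    """pred: edge probabilities in (0, 1); label: {0, 1, ignored}; epoch
    starts from 0, so the loss equals WCE at the first epoch and
    approaches Focal Loss as training proceeds."""
    pos, neg = label.eq(1.0), label.eq(0.0)
    n_pos, n_neg = pos.float().sum(), neg.float().sum()
    alpha = n_neg / (n_pos + n_neg)            # weight of edge pixels
    beta = lam * n_pos / (n_pos + n_neg)       # weight of non-edge pixels
    # dynamic modulation omega': a blend of 1 (WCE) and the focal weight,
    # with blending factor epoch / (epoch + mu)
    w_pos = (mu + epoch * (1.0 - pred) ** gamma) / (epoch + mu)
    w_neg = (mu + epoch * pred ** gamma) / (epoch + mu)
    eps = 1e-6
    loss_pos = -(alpha * w_pos[pos] * torch.log(pred[pos] + eps)).sum()
    loss_neg = -(beta * w_neg[neg] * torch.log(1.0 - pred[neg] + eps)).sum()
    return loss_pos + loss_neg
```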
§ EXPERIMENTS
In this section, we first introduce the datasets and implementation details of the experiment, then compare our method with the State-of-the-art methods, and finally verify the effectiveness of each module in our method through ablation experiments and Visual Analysis.
§.§ Experimental datasets
The performance of our proposed CTFN is validated on three benchmark datasets (BSDS500 <cit.>, NYUDv2 <cit.>, and BIPEDv2 <cit.>) and compared with previous state-of-the-art methods. BSDS500 is the most popular dataset for edge detection, including 200 training images, 100 validation images, and 200 test images, each with 4-9 corresponding annotations. In the experiments, all 300 images of the training and validation sets are used to train the model, which is then evaluated on the test set. NYUDv2 is an indoor-scene semantic segmentation dataset whose edge ground truth is generated from segmentation maps; it contains 1449 groups of carefully annotated RGB and depth images, each group with one annotation. We use 795 images to train the model and evaluate it on the remaining images. BIPEDv2 was recently proposed by Soria et al. <cit.> as the second version of <cit.>. This dataset contains 250 carefully annotated high-resolution Barcelona street-view images: 200 for training and validation and 50 for testing. For a fair comparison, we use the same data augmentation as RCF <cit.> for BSDS500 and NYUDv2, and the same as DexiNed <cit.> for BIPEDv2.
§.§ Implementation Details.
Our method is implemented with the PyTorch library. All experiments are conducted on an NVIDIA GeForce 2080Ti GPU with 11GB of memory. The backbone of CTFN is initialized on ImageNet and the remaining modules are randomly initialized. The threshold γ is set to 0.3 for BSDS500; γ is not needed for NYUDv2 and BIPEDv2, whose ground truth consists of binary annotations. λ is set to 1.1 for BSDS500 and BIPEDv2, and to 1.2 for NYUDv2. The SGD optimizer is adopted. A standard Non-Maximum Suppression (NMS) is performed to produce the final edge maps before the quantitative evaluation. The F-measure is utilized as the quality metric of the generated edge maps:

F-measure = 2 · P · R / (P + R),

where P represents precision and R represents recall.
Due to local correlation, the edges obtained by deep learning-based methods are actually edge probability maps, even after NMS processing, so a threshold must be chosen to binarize them.
There are two options for this threshold: finding the optimal threshold for each image (OIS), or using the same optimal threshold for the whole dataset (ODS).
The maximum tolerance allowed for correct matches between edge predictions and ground truth annotations is set to 0.0075 for BSDS500 and BIPEDv2, 0.011 for NYUDv2. More experimental details can be referred to previous work <cit.>.
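To clarify the two protocols, a small NumPy sketch is given below; it assumes per-image precision/recall already computed at a shared grid of thresholds and approximates ODS by averaging per-image F-measures (the standard benchmark instead aggregates match counts, so this is a simplification).

```python
import numpy as np

def f_measure(p, r, eps=1e-12):
    return 2.0 * p * r / (p + r + eps)

def ods_ois(precision, recall):
    """precision/recall: arrays of shape (num_images, num_thresholds).
    OIS: best threshold per image; ODS: one threshold for the dataset."""
    f = f_measure(precision, recall)
    ois = f.max(axis=1).mean()     # optimal image scale
    ods = f.mean(axis=0).max()     # optimal dataset scale (approximation)
    return ods, ois
```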
§.§ Comparison with the State-of-the-art Methods
§.§.§ Performance on BSDS500
We compare CTFN with recent deep learning based edge detection methods on BSDS500; the results are summarized in Table <ref>. For fairness, all methods are based on VGG16. In terms of accuracy, CTFN, BAN, and FCL are significantly better than the other methods; CTFN and BAN lead in ODS and OIS, respectively, and both are slightly better than FCL. In terms of the number of parameters, CTFN, RCF, and CATS use fewer than the other methods, and P' reflects this advantage more intuitively: counting the parameters apart from the VGG16 backbone, CTFN uses 1/6 of BDCN, 1/9 of BAN, and 1/18 of FCL. In terms of computation, the FLOPs of CTFN are only 0.6G more than RCF, ranking second; they are 25% less than BDCN and nearly half of FCL and BAN. In short, CTFN's accuracy is comparable to that of state-of-the-art methods, while its number of parameters and computational cost are far superior. Additionally, BAN utilizes several tricks such as a combined loss function, edge map merging <cit.>, and two-stage training, while our CTFN is trained in an end-to-end manner.
A larger-scale accuracy comparison is shown in Table <ref>, where CTFN is compared with prior edge detection methods, including both traditional and deep learning ones. As shown in Table <ref>, CTFN reports an ODS of 0.817 on BSDS500, around 2% higher than the baseline RCF, and outperforms most existing HED-based and UNet-based methods by a large margin. Precision-Recall curves are presented in Fig. <ref>; when the recall rate is below 0.8, CTFN maintains a significant advantage in precision. In addition, we visualize the prediction results on BSDS500 images in Fig. <ref>; visually, CTFN is significantly superior to the other methods.
§.§.§ Performance on NYUDv2 and BIPEDv2
NYUDv2 has three types of inputs, i.e., RGB, HHA, and RGB-HHA. Following previous works <cit.>, we perform experiments on all three types of data; the RGB-HHA results are obtained by averaging the edges detected on RGB and HHA. Table <ref> compares our method with several recent approaches: ours outperforms the baseline RCF by 0.4% on RGB-HHA. A comparison of edge detection results on NYUDv2 is shown in Fig. <ref>; from left to right are the input images, the ground truth, and the results of RCF, BDCN, and CTFN. The edges generated by RCF contain many textures that belong to the background, while BDCN loses many edges. The results of CTFN are significantly better than those of RCF and BDCN.
We also record the evaluation results on BIPEDv2 and compare them with other methods in Table <ref>. Again, CTFN achieves promising results, with accuracy only 0.1% lower than the best method. We show some qualitative results in Fig. <ref>; from left to right are the input images, the ground truth, and the results of RCF and CTFN.
Model size does not change when testing on different datasets, and the FLOPs ratio between models is likewise fixed; therefore, we only report the accuracy of each method on NYUDv2 and BIPEDv2, and refer to Table <ref> for model size and FLOPs.
§.§ Ablation Study
The main innovations of CTFN are the Semantic Enhancement Module (SEM) in the first feature fusion stage, the Pseudo Pixel-level Weighting (PPW) module in the second feature fusion stage, and the Dynamic Focal Loss (DFL). In this section, we verify the effectiveness of each module separately, as shown in Table <ref>. The models are trained on the BSDS500 train-val set and evaluated on the test set.
We first explore the impact of PPW; the experimental results are summarized in Group 1 of Table <ref>. Default denotes the weighted summation most commonly used in previous edge detection methods <cit.>, CoFusion denotes the context-aware fusion block proposed in CATS <cit.>, and CoFusion-l denotes CoFusion with the same number of channels as PPW. Compared with weighted summation, both CoFusion and PPW lead to 0.3% higher ODS, while PPW takes 1/4 of CoFusion's parameters thanks to fewer channels. Since the task is decomposed into channel and spatial parts in PPW, each subtask is simpler and requires fewer parameters; with the same number of channels, PPW outperforms CoFusion-l. In summary, the default weighted summation uses channel weighting, CoFusion uses mixed channel-spatial weighting, and PPW uses separate channel and spatial weighting; the experimental results show that PPW is the best choice.
As shown in Group 2 of Table <ref>, we compare the impact of different loss functions. WCE stands for Weighted Cross-Entropy loss and FL for Focal Loss. FL-pre means using WCE in the first epoch and then FL, a widely used practice that avoids confusion in the early stage of training. DFL stands for Dynamic Focal Loss. We observe that the accuracy of DFL is significantly higher than WCE and FL, and the performance of FL-pre is slightly worse than DFL. In fact, FL-pre is a special case of DFL in which the hyper-parameter μ→0^+; we can simulate this case by setting μ to 10^-9 in Eq. (<ref>).
We further test the impact of the hyper-parameters γ and μ; the results are summarized in Table <ref>. In Eq. (<ref>), when γ=0, ω' is always equal to 1 irrespective of μ, so DFL degenerates into Weighted Cross-Entropy. Through a series of simple trials, we observe that as the values of γ and μ increase, the accuracy first increases and then decreases. This evaluation is conducted to verify the effectiveness of DFL, so a more exhaustive exploration has not been done, even though it might lead to further accuracy improvements.
We verify the effectiveness of SEM in Group 3 of Table <ref>. SEM improves the ODS of the model by 0.5%, showing that it effectively improves model performance. The visualization results in Fig. <ref> further validate this conclusion: the effect of SEM is more noticeable for the fine-scale branches, which contain more textures when SEM is not used.
§.§ Visual Analysis
We visualize the weight ω of Focal Loss, defined in Eq. (<ref>), in Fig. <ref>. We observe that in the early training stage, the output of the model is disordered, leading to similar and chaotic weights ω_1, so hard samples do not receive enough attention. The situation is quite different in the stable training stage: the misclassified negative samples near the edges and the erroneously detected textures in the background receive greater weights, which contributes to better edge map generation. Comparing Fig. <ref> and Fig. <ref>, the error of ω in the early training stage is relatively large, so the confidence margin of ω should start low and gradually increase as training proceeds, which is precisely the principle of Dynamic Focal Loss.
Another point of note is that the transition from edge to non-edge regions is softer in the results of CTFN. Due to the imbalanced positive and negative sample weights in the Weighted Cross-Entropy loss, existing methods suffer from blurry edges <cit.>. As shown in Fig. <ref>, the outputs of these methods are far from single-pixel edges. Even though the edges are thinned to single-pixel width by the subsequent Non-Maximum Suppression (NMS), blurry edges lead to localization ambiguity and are detrimental to the final accuracy. In contrast, the DFL used in CTFN assigns larger weights to the non-edge regions near the edges, owing to the large difference between these non-edge predictions and the ground truth, as shown in Fig. <ref>.
We visualize the results of CTFN and a typical WCE-based method, BDCN <cit.>, in Fig. <ref>. Compared with the ground truth on the left, the results of the two models seem to differ little. However, when we show their details in Fig. <ref>, we observe that the edges of BDCN are thicker, which leads to larger localization ambiguity after NMS, as shown in Fig. <ref>. This contrast can be observed more clearly after concatenating them along the channel dimension, as shown in Fig. <ref>: the result of CTFN (purple) deviates less from the ground truth (cyan) than that of BDCN (yellow).
Therefore, we can conclude that CTFN can effectively alleviate the problem of edge localization ambiguity.
§ CONCLUSION
In this paper, we review existing deep-learning based edge detection methods and propose a new Compact Twice Fusion Network (CTFN), in which the edge detection model is divided into three parts: the backbone, the first feature fusion stage, and the second feature fusion stage. We propose two lightweight modules, SEM and PPW, to fuse multi-scale features, and further introduce a Dynamic Focal Loss to focus on the hard samples in images. Experimental results on multiple datasets verify the effectiveness of CTFN; compared with state-of-the-art methods, CTFN achieves competitive accuracy with higher efficiency.
Limitation. For a fair comparison, our method still uses VGG-16 as the backbone, which accounts for more than 95% of the model parameters and thus limits further compression of the model; moreover, the feature extraction ability of VGG-16 can hardly meet the needs of edge detection. In addition, compared with WCE, DFL adds two hyper-parameters that should differ across datasets, which makes it harder to transfer our method to different datasets. Therefore, in future work, we will explore more efficient backbones and design a Dynamic Focal Loss with adaptive hyper-parameters.
*Acknowledgments
This work is partly supported by the National Key R&D Program (Grant No. 2019YFF0301800), the National Natural Science Foundation of China (Grant No. 61379106), and the Shandong Provincial Natural Science Foundation (Grant Nos. ZR2013FM036, ZR2015FM011). Xavier Soria was funded by the Air Force Office of Scientific Research under Award FA9550-22-1-0261.
§ DECLARATIONS
* Competing interests
No potential conflicts of interest were identified.
* Availability of data and materials
This work used publicly available datasets for training and validation. The source code is available.
|
http://arxiv.org/abs/2307.04129v1 | 20230709085847 | Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers | [
"Zhiyu Zhu",
"Junhui Hou",
"Dapeng Oliver Wu"
] | cs.CV | [
"cs.CV"
] |
Cross-modal Orthogonal High-rank Augmentation
for RGB-Event Transformer-trackers
Zhiyu Zhu, Junhui Hou, and Dapeng Oliver Wu
Department of Computer Science, City University of Hong Kong
[email protected]; [email protected]; [email protected]
August 12, 2023
This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). In particular, we carefully investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its ability.
Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens to enforce proactive interaction between tokens from different modalities.
To mitigate network oscillations resulting from the masking strategy and to further amplify its positive effect, we then theoretically propose an orthogonal high-rank loss to regularize the attention matrix.
Extensive experiments demonstrate that our plug-and-play training augmentation techniques can significantly boost state-of-the-art one-stream and two-stream trackers in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code will be publicly available.
§ INTRODUCTION
Event cameras asynchronously capture pixel intensity fluctuations with ultra-high temporal resolution, low latency, and high dynamic range, and have therefore attracted increasing attention recently <cit.>. Owing to such admirable advantages, event cameras have been widely adopted in various applications, such as object detection <cit.> and depth/optical flow estimation <cit.>. In particular, the distinctive sensing mechanism makes event cameras a promising choice for object tracking <cit.>.
Despite the many advantages of event-based object tracking in special environments, e.g., low-light, high-speed, and over-exposed scenes, event data lack sufficient visual cues, such as color, texture, and complete contextual appearance, which can be easily captured by RGB data. As a result, event-only vision still suffers from relatively inferior performance in practice. Thus, a more promising direction is to investigate cross-modal object tracking from both RGB and event data, where the merits of the two modalities can be well leveraged to pursue higher performance.
However, the vast distribution gap between RGB and event data poses significant challenges in designing algorithms for modeling cross-modal information.
Most existing pioneering cross-modal trackers rely heavily on elaborate cross-modal fusion modules, which makes it cumbersome to adopt advanced embedding backbones for boosting performance.
In view of the success of Transformer-based tracking algorithms <cit.>, where multi-head attention naturally models the indispensable correlation between template and search regions, we investigate the potential of powerful pre-trained vision Transformers (ViTs) for cross-modal object tracking from both RGB and event data.
However, Transformers pre-trained on RGB data may not be able to fully model the essential feature interaction across RGB and event data, which is crucial for cross-modal object tracking, due to the distribution gap between the two modalities.
To this end, we study plug-and-play training techniques for augmenting the pre-trained Transformer used as the embedding backbone of our RGB-event object tracking framework.
Specifically, to promote the learning of the attention layers across the two modalities, we propose a cross-modal mask modeling strategy, which randomly masks/pops out multi-modal tokens. We anticipate that, in reaction to the absence of a particular modality at certain locations, the network will proactively enhance interactions on the remaining cross-modal tokens. Nevertheless, randomly masking tokens inevitably alters the data distribution and introduces disruptions, impeding network training. To mitigate this negative effect, we further propose a regularization term to guide the training of each attention layer. Based on the observation that the values of the internal attention matrices of a Transformer indicate the degree of cross-modal feature interaction,
we propose to orthogonalize the attention matrix to forcibly promote its rank. Beyond this, we anticipate that such regularization encourages the cross-modal correlation to be evenly and concisely established using multi-domain signatures, rather than being unduly reliant on a specific domain. Finally, we apply the proposed techniques to state-of-the-art one-stream and two-stream Transformer-based tracking frameworks and experimentally demonstrate that their tracking performance is significantly boosted.
In summary, the contributions of this paper are:
* a mask modeling strategy for encouraging the interaction between the cross-modal tokens in a proactive manner;
* theoretical orthogonal high-rank regularization
for suppressing network fluctuations induced by cross-modal masking while amplifying its positive effect;
and
* new state-of-the-art baselines for RGB-event object tracking.
Last but not least, our novel perspectives will potentially bring insights to the field of leveraging pre-trained powerful ViTs to process and analyze cross-modal data.
§ RELATED WORK
§.§ Object Tracking
Recent years have seen remarkable progress in the study of object tracking, primarily due to the widespread success of deep learning <cit.>. Based on the distribution of computational burdens, current methods can generally be divided into two-stream <cit.> and one-stream methods <cit.>. As the earlier invented and relatively mature ones, most offline Siamese-based tracking methods <cit.> fall into the first category: they utilize a delicate embedding backbone to extract semantically rich embeddings and then locate the target via either a direct proposal head <cit.> or an online optimization process <cit.>, also known as deep Siamese trackers and discriminative correlation filters, respectively <cit.>. SiamFC <cit.> first developed a fully-convolutional architecture to fuse template and search embeddings for object tracking. By introducing a single-stage RPN <cit.> detector, SiamRPN <cit.> achieved target tracking by comparing the current-frame features to those from a template. To remove disturbance factors, e.g., padding, SiamRPN++ <cit.> introduced a spatial-aware sampling strategy and further utilized ResNet <cit.> to embed representative features for Siamese-based tracking.
DiMP <cit.> proposed to exploit both target and background appearance for object tracking. KYS <cit.> represents the scene information as dense state vectors and utilizes them to maximize tracking performance. Besides, some spatio-temporal methods also exploit temporal information to achieve robust and effective tracking <cit.>. MDNet <cit.> separated domain-independent from domain-specific information via a CNN-based framework, and RT-MDNet <cit.> further improved it via an RoI-Align strategy that extracts more precise embeddings from the feature maps of targets and candidates. Swin-tracker <cit.> introduced the Swin-Transformer <cit.> to effectively encode semantic information from input images for high-performance visual tracking.
Owing to the extraordinary correlation modeling ability of Transformers, an emerging branch of one-stream methods shows strong potential. OS-track <cit.> unified the embedding and relation modeling processes within a single vanilla ViT <cit.>, achieving admirable performance with reduced computational resources. Meanwhile, SimViT-Track <cit.> proposed a similar approach, which feeds search and template image tokens straight into a ViT backbone and performs regression and classification on the resulting tokens.
In summary, with the success of existing embedding backbones such as ViT <cit.> and Swin-Transformer <cit.>, more intriguing and effective methods have been proposed recently. Although these methods achieve admirable performance, most of them are driven by matching semantically identical parts of the search and template regions viewed as RGB images. As a result, their performance is inextricably tied to imaging characteristics, which can be compromised in specific scenarios such as high-speed and low-light scenes. Hence, it is highly desirable to incorporate multi-modal inputs so that the modalities remedy each other's deficiencies. Moreover, generalizing these methods to such crucially multi-modal data, e.g., event data, necessitates additional effort.
§.§ Event-based Tracking
Owing to their innate characteristics and suitability for object tracking, event cameras have made event-based tracking a progressively prevalent research subject in recent years. Existing approaches can be broadly classified into two categories: model-based and data-driven. By describing the surrounding environment with a photometric 3D map, Bryner et al. <cit.> proposed to track the 6-DOF pose of a camera. To capture the spatio-temporal geometry of event data, Mitrokhin et al. <cit.> utilized a parametric model to compensate for camera motion. Based on a tracking-learning-detection pipeline, Ramesh et al. <cit.> proposed an object tracking algorithm for event cameras, the first learning-based long-term event tracker. Li et al. <cit.> then introduced VGG-Net-16 to encode the appearance of event-stream objects. Inspired by the classic Siamese matching paradigm, Chae et al. <cit.> presented tracking objects via learning an edge-aware similarity in the event domain. Recently, Zhang et al. <cit.> introduced a spiking transformer for encoding the spatio-temporal information of object tracking, and Zhu et al. <cit.> proposed to utilize the inherent motion information of event data to achieve effective object tracking. In summary, although some promising studies provide directive insights for event-based tracking, only a limited number of works have sought complementary information, e.g., semantic information, from RGB data.
§.§ Cross-modal Learning
Fusing embeddings from multiple modalities is a sensible solution for perceiving and recognizing objects robustly and accurately <cit.>. However, learning representative patterns from multiple modalities is still a challenging issue for current machine learning algorithms <cit.>. Wang et al. <cit.> proposed applying data augmentation techniques to boost cross-modal 3D object detection. Liu et al. <cit.> utilized cross-modal feature rectification and fusion models for image segmentation with inputs from multiple modalities. Jaritz et al. <cit.> solved the multi-modal segmentation problem from the perspective of unsupervised domain adaptation. Moreover, Wang et al. <cit.> designed an RGB-T tracking framework by propagating inter-modal patterns and long-term context. Ye et al. <cit.> proposed a cross-modal self-attention module to achieve natural-language-based image segmentation via adaptively capturing informative words and important image regions. Zeng et al. <cit.> proposed to project camera features onto the point set of a LiDAR. In summary, recent works are clearly founded on network architecture, as is evident from their prevalence; although the current advanced Transformer paradigm can adaptively process different modalities, there is still a lack of further investigation and analysis of the internal mechanism.
§ PROPOSED METHOD
§.§ Motivation
Learning the correlation between the template and search regions robustly and precisely is one of the most essential aspects of object tracking. Fortunately, with the recent advances in the multi-head attention mechanism, such correlation can be naturally modeled by Transformer-based frameworks <cit.>.
However, current powerful ViTs are usually pre-trained on RGB data, e.g., ImageNet <cit.>, so they may not be adequately adapted to cross-modal learning; i.e., the full feature interaction between RGB and event data, which is essential for cross-modal object tracking, cannot be well achieved due to the vast distribution gap between the two modalities. Accordingly, the tracking performance may be limited.
Instead of following existing cross-modal research paradigms mainly focused on designing sophisticated cross-modal information fusion networks, we aim to explore plug-and-play training augmentation techniques to mitigate the above-mentioned potential limitation of a pre-trained ViT used as the embedding backbone of an RGB-Event object tracking scheme.
Generally, based on the fundamental premise that different modalities possess their own unique benefits for a cross-modal tracker, token embedding information should be adequately transmitted across the modalities, especially in regions containing target objects, so that each modality can enhance itself using the specific merits of the other. Thus, we propose a mask modeling strategy in Sec. <ref> to enable the network to proactively exploit cross-modal information. Furthermore, we propose a high-rank orthogonalization mechanism in Sec. <ref>, which not only alleviates the network fluctuations induced by the mask modeling strategy but also further boosts cross-modal information interaction.
In what follows, we will detail the proposed techniques adapted to both one-stream and two-stream trackers, as illustrated in Fig. <ref> (b) and Fig. <ref> (c), respectively.
We always use I and E in the subscripts to indicate the RGB and event modalities, and T and S are the tokens of template and search regions, respectively.
§.§ Mask-driven Cross-modal Interaction
Grouping tokens by similarity is one of the most representative steps of the self-attention mechanism of a Transformer <cit.>. However, due to the distribution gap between tokens of different modalities, similarity-driven attention may tend to aggregate information from the identical modality only, hence impeding cross-modal learning.
Thus, how to effectively and efficiently promote cross-modal interactions is critical for maximizing the potential of a pre-trained ViT for RGB-event object tracking.
We propose a cross-modal mask modeling strategy to address this issue in a proactive manner, as shown in Fig. <ref> (a).
As illustrated in Fig. <ref>, the underlying intuition of this strategy is that, by removing patches of different modalities at different locations, we expect the task loss to enforce the network to spontaneously enhance/build cross-modal correlation through the remaining tokens of the other modality. Once this interaction is established, the RGB and event tokens may learn to shrink the distribution gap, carrying such correlation over to the inference phase.
Specifically, we apply random masks to RGB and event data to remove distinct patches.
First, for the one-stream methods, masking elements can be readily accomplished by simply popping out the corresponding tokens, which concurrently lessens the network training burden.
For the two-stream methods, due to the large computational resource consumption of the embedding backbone, we directly average the masked features of RGB and event data at the primary stage, and then feed them into the high-level embedding backbone and relation modeling modules for the object proposal.
Remark. It is worth noting that the motivation and objective of the proposed masking strategy are considerably different from those of the well-known masked image modeling <cit.>. We start from the pursuit of promoting the network to actively utilize cross-modal information; thus, patches at distinct positions across the RGB and event modalities are randomly removed so that every location can still be perceived by the network, but through different modalities. In contrast, masked image modeling pre-trains network weights to comprehend image semantics by feeding just a subset of image patches and reconstructing the unseen area.
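A minimal sketch of the masking step for the one-stream case follows; sharing one mask across the batch and drawing the two drop sets from a single permutation (so they are disjoint by construction) are our simplifications.

```python
import torch

def mask_cross_modal_tokens(rgb_tok, evt_tok, delta_i=0.1, delta_e=0.1):
    """rgb_tok / evt_tok: (B, N, C) token sequences at aligned positions.
    Drops delta_i of the RGB tokens and delta_e of the event tokens at
    distinct positions, so every location stays visible in one modality."""
    B, N, _ = rgb_tok.shape
    n_i, n_e = int(N * delta_i), int(N * delta_e)
    perm = torch.randperm(N, device=rgb_tok.device)
    keep_i = torch.ones(N, dtype=torch.bool, device=rgb_tok.device)
    keep_e = keep_i.clone()
    keep_i[perm[:n_i]] = False              # positions dropped in RGB
    keep_e[perm[n_i:n_i + n_e]] = False     # disjoint positions in events
    # popping the masked tokens also shortens the sequences (one-stream)
    return rgb_tok[:, keep_i], evt_tok[:, keep_e]
```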
Although such a masking strategy used in the training phase is expected to strengthen the network's ability to perceive cross-modal information to some extent, the randomly dropped information potentially results in an unstable training process. Moreover, such disruptions are especially devastating for one-stream algorithms, which must concurrently learn representative embeddings and establish the relationship between the cross-modal template and search tokens (see the experimental demonstration in Sec. <ref>). Thus,
to pull the network out of this predicament, we further propose an orthogonal high-rank regularization in a theoretical manner in the next section.
§.§ Orthogonal High-rank Regularization
To appreciate the multi-head attention mechanism, we take a one-stream tracker <cit.> with the vanilla ViT <cit.> as an example. As illustrated in Fig. <ref> (b), its internal self-attention layers concurrently perceive the RGB and event tokens from both the template and search areas. Depending on which of the k token groups the query and key belong to, we can partition the resulting attention matrix into k^2 blocks (here k=4). Note that the attention values of a given block reflect the degree of interaction between the corresponding tokens.
To mitigate the network disturbance induced by the cross-modal mask modeling strategy and to further amplify its positive effect (i.e., boosting cross-modal learning), we concentrate on the cross-modal zones of the attention matrix, such as M_S_I,S_E and M_S_E,S_I. If the tokens are well embedded and carry highly discriminative features, each token will form a unique correlation with its identical counterpart, so each row or column is orthogonal to the others. Moreover, as the attention elements are non-negative, the corresponding matrix should be of full rank[We refer readers to the Supplementary Material for more details]. Therefore, we propose the following regularization to encourage the desired blocks of the attention matrix to be high-rank:
L(M, τ) = ‖ diag(Σ) − dup(τ) ‖_1,  with  M = U Σ V^⊤,

where τ ∈ ℝ is a pre-defined threshold value; U ∈ ℝ^n×n, Σ ∈ ℝ^n×m, and V ∈ ℝ^m×m are the outputs of the singular value decomposition (SVD) of the block M ∈ ℝ^n×m; diag(·) returns the vector of main diagonal elements of the input matrix; and dup(·) converts an input scalar into a vector by duplicating it. We impose the regularization term onto a set of blocks of the attention matrix {M^(i)}_i=1^N standing for the interaction of cross-modal tokens.
Due to its strong regularization effect, we empirically select the blocks corresponding to image-to-event attention (i.e., M_S_I,T_E and M_S_I,S_E) and the blocks corresponding to event-to-image attention (i.e., M_S_E,T_I and M_S_E,S_I).
Moreover, since computing the SVD of a matrix is time-consuming, at each optimization step we randomly choose a single layer on which to apply this regularization, instead of applying it to every layer.
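A sketch of this regularizer in PyTorch is given below; `blocks_fn`, which slices the selected cross-modal blocks (e.g., M_S_E,S_I) out of a full attention matrix, is a placeholder for model-specific indexing, and the default τ is illustrative only.

```python
import torch

def high_rank_loss(block, tau=1.0):
    """Eq. above: l1 distance between the singular values of a cross-modal
    attention block and the pre-defined threshold tau."""
    s = torch.linalg.svdvals(block)      # differentiable singular values
    return (s - tau).abs().sum()

def regularize_random_layer(attn_maps, blocks_fn, tau=1.0):
    """attn_maps: per-layer attention matrices.  Only one randomly chosen
    layer is regularized per step to amortize the SVD cost."""
    layer = attn_maps[torch.randint(len(attn_maps), (1,)).item()]
    return sum(high_rank_loss(b, tau) for b in blocks_fn(layer))
```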
For the two-stream methods, since the input data from the two modalities are mixed in a preceding embedding backbone, e.g., a Swin-Transformer <cit.>, the resulting attention matrix only consists of two parts, i.e., the search-to-template and template-to-search regions, as illustrated in Fig. <ref> (c).
Under this scenario, we anticipate that discriminative cross-modal tokens will form a unique correlation with the identical object parts across the template and search areas; as shown in the right part of Fig. <ref> (a) and in Fig. <ref> (c), such a relationship likewise makes each row orthogonal to the others.
Thus, we also regularize the regions belonging to the target objects in M_S,T. Specifically, guided by the bounding box information, we first mask the attention weights in the non-target regions of M_S,T, and then apply Eq. (<ref>) to increase the rank of the masked matrix.
§.§ Training
To train a Transformer-based tracker with the proposed plug-and-play augmentation techniques, at each optimization step we first randomly mask/pop out event and image patches with ratios δ_e and δ_i (0<δ<1), respectively. Then, we train the whole network with the following loss function:

L_all = L_task + α L(M, τ),

where L_task denotes the original task loss function, composed of regression and classification branches, and α is a balancing weight for the proposed regularization term.
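Putting the two augmentations together, one optimization step reads schematically as below, reusing the sketches above; `tracker` (assumed to also expose its per-layer attention matrices), `task_loss`, and `cross_modal_blocks` are placeholders.

```python
def training_step(tracker, batch, task_loss, cross_modal_blocks,
                  alpha=1.2, tau=1.0, delta_i=0.1, delta_e=0.1):
    """One step of the loss above: masked inputs, task loss, plus the
    orthogonal high-rank term on one random attention layer."""
    rgb_tok, evt_tok, target = batch
    rgb_tok, evt_tok = mask_cross_modal_tokens(rgb_tok, evt_tok,
                                               delta_i, delta_e)
    pred, attn_maps = tracker(rgb_tok, evt_tok)
    loss = task_loss(pred, target) + alpha * regularize_random_layer(
        attn_maps, cross_modal_blocks, tau)
    loss.backward()
    return loss.detach()
```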
§ EXPERIMENT
Implementation details. We evaluated the proposed plug-and-play training augmentation techniques on both one-stream and two-stream trackers. We set the template and search sizes to 128 and 256, respectively, covering 2× and 4× the regions of the annotations. Moreover, the location and scale jitter factors of the search region are set to 3 and 0.25, respectively (no jitter is applied to the template region). For the one-stream case, we directly adopt the SOTA method named Color-Event Unified Tracking (CEUTrack) <cit.> as our baseline model (ViT-B). During training, we use the same optimizer (AdamW), learning rate scheduler, and task loss function as the original paper. We set the batch size to 24 and the augmentation weight α in Eq. (<ref>) empirically to 1.2. The masking ratios of the two modalities, δ_i and δ_e, are set to 0.1.
For the two-stream case, to the best of our knowledge, no Transformer-based RGB-event tracker is available, so we chose the recent event-cloud-based motion-aware tracker (MonTrack) <cit.> and modified it with the proposal head of a Transformer tracker <cit.> and the backbone of a pre-trained Swin-V2 <cit.> to construct two-stream RGB-event trackers. Moreover, we tested lightweight and heavy backbones, i.e., Swin-V2-Tiny <cit.> and Swin-V2-Base <cit.>, for a comprehensive evaluation; the resulting baselines are named MonTrack-T and MonTrack-B, respectively. To train the whole framework, we utilized the AdamW optimizer <cit.> with a learning rate of 1e-4 for the proposal head and 1e-5 for the backbone, and a weight decay of 1e-4. MonTrack-T and MonTrack-B were trained for 57K and 81K steps, respectively. We empirically set α to 1.0, and the masking ratios of RGB and event data, δ_i and δ_e, to 0.4 and 0.3, respectively.
We refer readers to the Supplementary Material for the detailed network architectures and settings.
Datasets. We employed two large-scale cross-modal RGB-event single object tracking datasets: FE108 <cit.> and COESOT <cit.>. Both datasets were collected by a DAVIS346 with a spatial resolution of 346 × 260, a dynamic range of 120 dB, and a minimum latency of 20 μs. FE108 consists of 108 RGB-event sequences collected indoors with a total length of 1.5 hours, capturing 21 different types of objects. The training split of FE108 consists of 140K RGB-event pairs, with 59K for testing. The ground-truth bounding boxes were annotated by a Vicon motion capture system. Moreover, the COESOT dataset consists of 578,721 RGB-event pairs, split into 827 sequences for training and 527 for testing. These sequences are collected from both indoor and outdoor scenarios and cover 90 classes and 17 attributes. The ground-truth bounding boxes of the COESOT dataset were manually annotated. Note that we adopted the quantitative metrics suggested by each dataset to evaluate the different methods.
§.§ Experimental Results
Results on FE108. As listed in Table <ref>, after being augmented by the proposed techniques during training, both MonTrack-T and MonTrack-B substantially improve both RSR and RPR by more than 3%. Moreover, the larger model, MonTrack-B, yields a greater performance gain; we reason that this effect may be the consequence of promoting thorough cross-modal interaction. Besides, the superiority of the proposed techniques is also demonstrated by the precision and success plots in Fig. <ref>, which exceed SOTA methods by a large margin, i.e., 5.1% in RSR, 8.1% in OP_0.50, 12.1% in OP_0.75, and 3.8% in RPR. Additionally, the higher performance of cross-modal methods compared with event-only and RGB-only methods demonstrates the significance and necessity of using the information of both RGB and event data for object tracking.
Results on COESOT. As shown in Table <ref>, the original Transformer-based cross-modal tracker, i.e., CEUTrack, improves the SR value of the previous SOTA SiamR-CNN by 1.1%. After being augmented with our techniques, i.e., CEUTrack+Ours, the values of SR and PR are further improved by 1.2% and 1.4%, respectively, and its NPR reaches higher than 70%, convincingly validating the effectiveness of the proposed techniques. In addition, we provide the success and precision plots for different attributes in Fig. <ref>, where it can be seen that the proposed augmentations yield general improvements rather than only strengthening certain circumstances. For example, the proposed augmentations achieve 3.4% precision and 2.8% success improvements under the blurring attribute. Notably, CEUTrack+Ours maintains the best performance under the camera motion attribute, while the baseline CEUTrack drops to 7th place.
We also refer readers to the Supplementary Material for the comparisons of network size and inference time.
We also refer readers to the Supplementary Material for the comparisons of the network size and inference time.
§.§ Ablation Study
Visualizations. Fig. <ref> visualizes the internal attention matrix of CEUTrack. The values in each row of the matrix are used as weights to sum the tokens of that row and project them onto the corresponding token. Due to the absence of values in the blocks M_S_I,S_E, M_S_I,T_E, M_T_I,T_E, M_T_I,S_E in Figs. <ref> (a) and (d), little information is projected from the event domain to the RGB domain.
The reason may be that the ViT was pre-trained on ImageNet, which consists of RGB data, making it prefer to process RGB data.
When used as the backbone of an RGB-event object tracker, the pre-trained filters attempt to project event information onto RGB tokens to complete the labor-intensive tasks of information fusion and processing, instead of performing the inverse projection. After being augmented with our techniques during training, the cross-modal interaction is noticeably enhanced, i.e., the matrix blocks that are zero in Figs. <ref> (a) and (d) exhibit attention values, as demonstrated in Figs. <ref> (b) and (e).
Besides, we also visualize the singular values of the matrix blocks related to cross-modal interaction in Figs. <ref> (c) and (f), which validates that these blocks have been pushed far away from low-rank structure after applying the proposed techniques. We refer readers to the Supplementary Material for more results.
Finally, Fig. <ref> shows the queries of the 2^nd, 4^th, and 7^th self-attention layers, where it can be seen that the proposed augmentations narrow the distribution gaps between event and RGB tokens, especially for the 4^th layer.
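The spirit of the orthogonal high-rank regularization and of the singular-value diagnostic above can be illustrated with a short sketch that rewards a flat singular-value spectrum in a cross-modal attention block; the actual loss used in the paper may be formulated differently, and the block indexing here is purely illustrative.

```python
import torch

def high_rank_penalty(attn, rgb_idx, ev_idx, eps=1e-6):
    """Reward a flat singular-value spectrum in a cross-modal attention block.

    attn: (heads, N, N) attention matrix of one layer.
    rgb_idx, ev_idx: LongTensors selecting RGB and event token positions.
    Returns the negative mean entropy of the normalized singular values,
    which decreases as the block moves away from low-rank structure.
    """
    block = attn[:, rgb_idx][:, :, ev_idx]        # RGB -> event sub-block
    s = torch.linalg.svdvals(block)               # batched over heads
    p = s / (s.sum(dim=-1, keepdim=True) + eps)   # normalized spectrum
    entropy = -(p * (p + eps).log()).sum(dim=-1)  # large for flat spectra
    return -entropy.mean()                        # minimize => raise rank
```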
Masking vs. High-rank. We conducted thorough experiments to better understand the relationship and roles of the two proposed augmentation techniques. From Table <ref>, it can be seen that when the two techniques are applied simultaneously, the improvement is much more significant than when only the masking scheme is applied, while the improvement is slight when only the high-rank regularization is applied. These observations validate our claim that the two techniques are complementary.
Effect of the mask size. We experimentally validated the effect of different mask sizes on performance. As shown in Table <ref>, the benefits may be nullified under extremely large or extremely small masks. A possible reason is that the network treats small masks as noise, whereas if the mask is too large, the object may appear in only one modality, which can be detrimental to cross-modal learning.
§.§ Discussion
In view of the impressive performance of the proposed plug-and-play training augmentations, it is worth exploring their potential in other cross-modal scenarios, such as RGB-3D point clouds or even vision-natural language. In addition, as demonstrated in Fig. <ref>, the proposed orthogonal high-rank regularization indeed facilitates the interaction between cross-modal tokens; thus, it would be promising to develop further task-specific regularization terms for other visual-Transformer-based methods.
§ CONCLUSION
In this paper, we introduced plug-and-play training augmentations for Transformer-based RGB-event object tracking. Our augmentations consist of two complementary techniques, cross-modal mask modeling and orthogonal high-rank regularization, which share the objective of enhancing the cross-modal interaction of a ViT pre-trained only on RGB data.
Our extensive experiments demonstrate the effectiveness of the proposed training augmentations: state-of-the-art methods achieve significant improvements in tracking performance after augmentation.
While current Transformers can be scaled up to enormous sizes, relying solely on final objectives to guide the model learning process may be insufficient. We hope our perspectives, findings and analysis
will inspire further research into the internal mechanisms of Transformer-based cross-modal fusion tasks.
|
http://arxiv.org/abs/2307.05628v2 | 20230711063043 | DNAGPT: A Generalized Pre-trained Tool for Versatile DNA Sequence Analysis Tasks | [
"Daoan Zhang",
"Weitong Zhang",
"Bing He",
"Yu Zhao",
"Jianguo Zhang",
"Chenchen Qin",
"Jianhua Yao"
] | q-bio.GN | [
"q-bio.GN",
"cs.LG"
] |
DNAGPT: A Generalized Pre-trained Tool for Versatile DNA Sequence Analysis Tasks

Daoan Zhang^1,2,4, Weitong Zhang^2,3, Bing He^2, Yu Zhao^2, Jianguo Zhang^1, Chenchen Qin^2, Jianhua Yao^2

^1 Southern University of Science and Technology
^2 Tencent AI Lab, Shenzhen, China
^3 City University of Hong Kong
^4 University of Rochester
GPT has been proven capable of extracting general information from language sequences, thereby benefiting all downstream tasks. This motivates us to use pre-trained models to explore the hidden inherent information in DNA sequences. However, data and task requirements in DNA sequence analysis come in different formats, such as generation, prediction, and regression, are complex, and involve different modalities, such as nucleotide sequences and expression levels. Existing BERT-based models are mostly designed for generation tasks and use sequence data as input and output, and thus cannot easily handle various DNA analysis tasks within a single model. Herein, we propose a generalized pre-trained DNA model, DNAGPT, trained on over 200 billion base pairs from all mammals. We enhance the classic GPT model by adding a binary classification task (DNA sequence order) and a numerical regression task (guanine-cytosine content prediction) during pre-training, and by extending the architecture with corresponding embedding layers and encoding heads. We also design a comprehensive token language to encode sequence, number, and task-related information in the same token space. Therefore, DNAGPT can handle versatile DNA analysis tasks and simultaneously process both sequence and numerical data. We have evaluated our model on genomic signals and regions recognition, pseudo genome generation, and mRNA abundance regression tasks. We demonstrate that, benefiting from pre-training, DNAGPT shows superior performance compared with existing models specifically designed for the various downstream tasks.
§ INTRODUCTION
In biological research, the collection, organization, and study of biological data have never ceased <cit.>. As the most fundamental information in biology, gene sequences <cit.> contain rich biological information, especially the large non-coding regions <cit.> that remain unexplored and are particularly worth investigating. The diversity, large volume, and complexity of relationships in biological information make the analysis and understanding of these data difficult.
A single gene among the more than 20,000 genes in the human genome <cit.> can be characterized from different aspects: it can be represented by its nucleotide sequence <cit.>; its expression level in different cells may vary greatly due to factors such as its non-coding regions, cell type, or environment <cit.>; moreover, it can be translated into proteins with different abundance levels under different circumstances <cit.>. Therefore, in DNA sequence research, sequencing data represented by DNA sequences and expression data represented by numerical abundance values are inseparable and indispensable. To this end, in this study, we aim to construct a multitask model that handles both types of data seamlessly and can therefore be generalized to versatile DNA analysis tasks.
Recently, the emergence of foundation models <cit.> has revolutionized natural language understanding <cit.> by pre-training generalized models on large-scale datasets that can be tuned to accommodate various downstream tasks. However, as mentioned above, DNA analysis tasks take various forms that involve both sequence and numerical data as input and output <cit.>, which are difficult to tackle in one language-based model <cit.>. Previous attempts, DNABERT <cit.> as well as Nucleotide Transformers (NT) <cit.>, involved pre-training on genome data followed by fine-tuning on downstream datasets with task-specific heads, separately handling attribute prediction tasks such as genomic signals and regions (GSR) recognition <cit.> and generation tasks such as reconstructing human genetic variants <cit.>. In addition, during pre-training, these models only used DNA sequences and did not consider numerical data, making them unsuitable for tasks that involve numerical input or output, such as the regression of mRNA abundance from the DNA sequence <cit.>. These weaknesses severely limit generalization across tasks and prevent a generalized model that seamlessly integrates DNA-sequence-relevant tasks. Moreover, unifying these intricate and diverse data types and task paradigms can reduce unnecessary algorithm design effort while allowing more tasks to benefit from pre-training, further paving the way for deeper discoveries and insights in DNA sequence analysis.
Constructing such a general pre-trained model requires consideration of two aspects: (1) How can different data types (sequences and numbers) be processed coherently in both the pre-training and testing stages? (2) How can a common pipeline be established for different tasks?
In this paper, we introduce DNAGPT, a generalized pre-trained model for DNA analysis, in which a multi-task pre-training strategy and a novel token language are proposed to answer the above two questions. In addition to the auto-regressive pre-training task of the classic GPT model, we add a binary classification pre-training task (DNA sequence order) and a numerical regression pre-training task (guanine-cytosine content prediction) to help the model better understand DNA sequence data and numerical data. For DNA sequence order prediction, we randomly flip the input DNA sequence and let the model predict whether the flip operation has been performed. For guanine-cytosine (GC) content prediction, we randomly extract a segment of the input sequence and have the model calculate and output the GC content of this segment. We modify the GPT architecture with corresponding embedding layers and encoding heads for both sequence and numerical inputs and outputs, so that they can be processed and trained in the same framework. We also design a comprehensive token language to encode sequence, number, and task-related information in the same token space.
Furthermore, in order to better learn sequence conservation and diversity across species, we utilize reference genomes <cit.> from all mammals for pre-training, with a total data size exceeding 200 billion base pairs (bps).
After pre-training, we tested and evaluated the functionality, capabilities, and performance of DNAGPT on a diverse panel of prediction, regression, and generation tasks. We begin with the GSR recognition task <cit.> to assess the sensitivity of the model to specific sites. The results demonstrate that DNAGPT not only competes with state-of-the-art methods but also accurately identifies pivotal regions within the input sequence. Moreover, DNAGPT achieves better results than conventional methods on the mRNA abundance assessment task <cit.>, taking a mixed input of tensors and DNA sequences and outputting the corresponding mRNA abundance values. We further examine whether DNAGPT can produce pseudo DNA sequences <cit.>; the results on various metrics show that DNAGPT surpasses traditional GAN and RBM models in terms of maintaining certain biological properties and features found in natural genomic sequences.
§ DNAGPT ARCHITECTURE
§.§ Model structure
The backbone of DNAGPT is a transformer-based <cit.> auto-regressive <cit.> decoder with a masked self-attention <cit.> module. To better deal with numerical information, we pre-train on DNA sequences and numerical properties end to end in a single model. The detailed network structure is presented in Figure <ref> c. DNAGPT uses sequence tokens to denote the encoded DNA sequence and number tokens for the encoded numbers. The sampled DNA sequence is first processed into a string of non-overlapping k-mer tokens, then sent into the sequential embedding layer to be encoded as embeddings. The numbers are sent directly into a numerical embedding layer to be encoded as embeddings co-trained with the DNA embeddings. We then concatenate both embeddings and send them into the GPT. The outputs of the GPT are split into two types of embeddings and sent to the classification head to classify different tokens and to the regression head to generate numbers. The structure of these heads is presented in Figure <ref> d. It is worth noting that DNAGPT can handle versatile downstream applications, where only fine-tuning of the original model parameters is needed. This simplifies the model's usage, preserves its generalizability, and lays the foundation for potential zero-shot learning.
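To make the architecture description concrete, here is a minimal PyTorch-style sketch. The layer sizes follow the 12-layer, 768-hidden configuration given later in the Methods, but the arrangement (e.g., appending the number embeddings after the sequence embeddings rather than interleaving them) and all names are our own simplifications, not the released implementation.

```python
import torch
import torch.nn as nn

class DNAGPTSketch(nn.Module):
    """Toy version of the dual-embedding / dual-head structure."""

    def __init__(self, vocab_size, d_model=768, n_layer=12, n_head=12):
        super().__init__()
        self.seq_embed = nn.Embedding(vocab_size, d_model)  # sequence tokens
        self.num_embed = nn.Linear(1, d_model)              # number tokens
        block = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
        self.backbone = nn.TransformerEncoder(block, n_layer)
        self.cls_head = nn.Linear(d_model, vocab_size)      # token logits
        self.reg_head = nn.Linear(d_model, 1)               # number outputs

    def forward(self, tokens, numbers, causal_mask):
        # Embed both input types and feed them jointly through the backbone;
        # the causal mask makes the encoder stack act auto-regressively.
        x = torch.cat([self.seq_embed(tokens),
                       self.num_embed(numbers.unsqueeze(-1))], dim=1)
        h = self.backbone(x, mask=causal_mask)
        return self.cls_head(h), self.reg_head(h).squeeze(-1)
```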
§.§ Design of token language
Currently, most DNA pre-training methods <cit.> simply adopt strategies from natural language and do not consider the characteristics of DNA sequences and specific biological tasks in the model design. A DNA sequence has no organizational structure comparable to natural language, which can be hierarchically divided into paragraphs, sentences, words, and punctuation. We therefore design a hierarchical token language structure for DNA sequences. Non-overlapping k-mers based on bps (base pairs) are first used to generate DNA words. DNA words of variable length are then combined to form DNA sentences, and DNA sentences of variable length are integrated to form DNA paragraphs, which are input into the GPT model.
As shown in Figure <ref> a, the regular input and output tokens are sequence tokens and number tokens, which represent DNA sequences and numbers, respectively. Instruction tokens are used to prompt the model about what the next sequence of output tokens should be. For example, 'Human' 'AATAAA' indicates that we encode a human AATAAA polyadenylation signal, while 'Bovine' 'AATAAA' indicates a bovine AATAAA polyadenylation signal. Similarly, 'M' '0.3155' indicates that we encode a number into the model, and in 'B' 'X', 'B' is the instruction token for binary classification, where the classification tokens 'A' and 'N' indicate 'True' and 'False', respectively. Furthermore, to better express relationships, we use connection tokens to relate two series of tokens, where '+' represents the aggregation of two series of tokens and '=' represents an input-output relation. Specifically, when we want to predict the expression level of mRNA from both a DNA sequence and the mRNA half-life value, we can encode the input as 'Human' 'ATCGTC' '+' 'M' '-0.3484' '=' 'M' '0.9854'. This input indicates that we want the model to combine the information of the 'ATCGTC' sequence and the input number '-0.3484' to output the result number '0.9854'. The reserved tokens include the digits '0' to '9', some unused uppercase letters such as 'K' and 'L', and some special symbols such as '*' and '/'. These reserved tokens can be used to build more specialized tasks for DNA sequence analysis. The complete token list is presented in Figure <ref>.
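As a toy illustration of how such prompts can be assembled, consider the following helper; it and its exact output format are our own simplified rendering of the token language, not the released tokenizer.

```python
def encode_prompt(species, kmers, numbers=None):
    """Assemble an instruction/sequence/number prompt as a token list."""
    tokens = [species] + list(kmers)        # instruction + sequence tokens
    for x in (numbers or []):
        tokens += ['+', 'M', f'{x:.4f}']    # aggregate a number token
    tokens.append('=')                      # request the model's output
    return tokens

# encode_prompt('Human', ['ATCGTC'], [-0.3484])
# -> ['Human', 'ATCGTC', '+', 'M', '-0.3484', '=']
```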
§ MULTI-TASKS PRE-TRAINING
In order to integrate DNA sequence information from multiple species and allow downstream tasks to benefit from cross-species information, we propose three variants of DNAGPT, named DNAGPT-H, DNAGPT-M, and DNAGPT-S-512. All DNAGPT models have 0.1 billion parameters and differ only in sequence length and pre-training data. Specifically, DNAGPT-H has a sequence length of 4096 tokens, equivalent to 24,576 bps, and is pre-trained on the human reference genome; DNAGPT-M also has a sequence length of 4096, with pre-training data from the reference genomes of 9 species; DNAGPT-S-512 has a sequence length of 512 and is pre-trained on the reference genomes of all mammals. The 9-species dataset includes the reference genomes of Arabidopsis_thaliana, Caenorhabditis_elegans, Bos_taurus, Danio_rerio, Drosophila_melanogaster, Escherichia_coli_gca_001721525, Homo_sapiens, Mus_musculus, and Saccharomyces_cerevisiae, with a total of 10 billion bps. For the mammal dataset, we downloaded all mammalian reference genomes from NCBI GenBank; after preprocessing, approximately 200 billion bps of data were sampled for pre-training. We compare the three versions of DNAGPT in the ablation study and describe the data in detail in the supplementary materials. Results reported for the different tasks are from the version of DNAGPT suited to each task, owing to task-specific sequence length limitations. In the GSR classification task, we used all three versions of DNAGPT; for the mRNA prediction and pseudo genome generation tasks, the required input sequence lengths exceed 512, so we used the DNAGPT variants with an input sequence length of 4096.
§.§ Pre-training tasks
We design three pre-training tasks for DNAGPT to fully characterize the DNA sequence and its associated numerical properties, including one standard GPT task and two DNA specific tasks.
Next token prediction
Next token prediction <cit.> is a classical pre-training task in NLP. GPT leverages this technique to predict the next possible token based on the previous tokens. Recently, by adding more parameters and more training data, GPT-3 and GPT-4 have demonstrated remarkable performance on various tasks. In DNAGPT, we also use next token prediction as the fundamental pre-training task.
Guanine-cytosine content prediction
Guanine-cytosine (GC) content plays a crucial role in transcriptome analysis, as it provides essential information about genome structure, such as structural variations <cit.> and transcriptional activity <cit.>. In this task, we encode the GC content as number tokens in DNAGPT, allowing for joint training of numerical and sequence data and enabling DNAGPT to adapt to downstream tasks with numerical data as input and output. Furthermore, we adopt dynamic sequence lengths for the DNA sequences in this task, which allows the model to learn a dynamic receptive field and enables downstream tasks with dynamic sequence lengths as input. We first calculate the GC content of randomly selected sequences in an entirely unsupervised manner; the model should output this value after reading the entire sequence.
Sequence order prediction
The order of a DNA sequence plays an important role in gene expression <cit.> and transcription <cit.>. For instance, sequences such as the TATA box <cit.> and the AATAAA PAS <cit.> often have to maintain a fixed order. We design a self-supervised sequence order prediction task, in which we randomly reverse a sequence and let the model predict whether the sequence has been reversed. This task provides heuristic information for downstream tasks with order-sensitive sequences.
Since GPT models use unidirectional attention <cit.>, they can only infer and generate tokens from left to right. By reversing DNA sequences, our model can infer tokens in both directions from a global perspective, improving its capability on downstream tasks that require predicting preceding contexts.
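Both DNA-specific pre-training targets can be generated on the fly from raw sequence. A minimal sketch (helper names and the segment-sampling details are ours):

```python
import random

def gc_content(seq):
    """Fraction of G and C bases in a DNA string."""
    return (seq.count('G') + seq.count('C')) / len(seq)

def make_pretraining_sample(seq):
    """Return the (possibly reversed) sequence, an order label, and a GC target.

    The order label records whether the sequence was reversed; the GC target
    is computed on a random sub-segment, entirely without supervision.
    """
    reversed_flag = random.random() < 0.5
    if reversed_flag:
        seq = seq[::-1]
    i = random.randrange(0, len(seq) // 2)
    j = random.randrange(i + 1, len(seq))
    return seq, reversed_flag, gc_content(seq[i:j + 1])
```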
§.§ Pre-training Loss
For the calculation of the loss in DNAGPT, we illustrate the model input, output, and ground truth during pre-training in Figure <ref> e. The output of DNAGPT can be DNA tokens and/or number tokens. For the next token prediction and sequence order prediction tasks, the cross-entropy loss is used; for the GC content prediction task, the mean squared error (MSE) loss is used, since numerical tokens are involved. The final loss can be represented as:
Loss = λ× MSE_loss + Cross_entropy_loss
where MSE_loss denotes the MSE loss and Cross_entropy_loss the cross-entropy loss. During pre-training, λ is set to 0.01.
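In code, the combined objective can be sketched as follows (placeholder tensor names; shapes assume batched token logits):

```python
import torch.nn.functional as F

def pretraining_loss(token_logits, token_targets, number_pred, number_target,
                     lam=0.01):
    """Cross entropy on token predictions plus lambda-weighted MSE on numbers.

    token_logits: (B, T, V) logits; token_targets: (B, T) token ids;
    number_pred and number_target: matching float tensors.
    """
    ce = F.cross_entropy(token_logits.flatten(0, 1), token_targets.flatten())
    mse = F.mse_loss(number_pred, number_target)
    return lam * mse + ce
```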
§ GENOMIC SIGNALS AND REGIONS (GSR) RECOGNITION
Recognition of various genomic signals and regions (GSR) from DNA sequences is essential to the understanding of genomes. To address this issue, we fine-tune and evaluate our model on the recognition of polyadenylation signals (PAS) and translation initiation sites (TIS) of different organisms: human, mouse, bovine, and fruit fly. Specifically, we follow the processing procedure of DeepGSR <cit.>. The DNA sequence lengths are set to 603 and 606 for TIS and PAS recognition, respectively. DeepGSR extracted 20,933, 18,693, 12,082, and 27,203 true PAS instances and 28,244, 25,205, 17,558, and 30,283 true TIS instances for human, mouse, bovine, and fruit fly, respectively, which are used as ground truth. DeepGSR then sampled a similar number of non-GSR sequences from the genome sequences and combined them with the true cases. The training, validation, and test sets are divided in the ratio 6:1.5:2.5. Details of the datasets are given in Section <ref>.
§.§ DNAGPT is capable of recognizing GSRs from any species.
The recognition of GSRs can be considered a binary classification task. We evaluate DNAGPT on the recognition of both PAS (the AATAAA variant and all variants) and TIS (with the ATG signal) in the human genome. We present the accuracy in Figure <ref> a, which shows that our model consistently outperforms the previous state-of-the-art methods. We further provide additional metrics in Tables <ref> and <ref> for a more comprehensive evaluation.
Note that GSRNET <cit.> utilizes the embedded features generated by the pre-trained DNABERT model; DNAGPT significantly outperforms this modified DNABERT on all tasks. To verify the generalization of DNAGPT, we further evaluate our model on other organisms, including mouse, fruit fly, and bovine. Experimental results are presented in Figure <ref> b, c, and d, respectively. Our DNAGPT outperforms GSRNET and DeepGSR in most cases, although the latter two were specifically developed for GSR recognition.
§.§ DNAGPT recognizes GSRs based on non-coding regions.
To explore the mechanism behind DNAGPT's ability to recognize GSRs, we visualize the attention map of the final layer of DNAGPT's backbone. The input data are human TIS and PAS (AATAAA) sequences, respectively. As shown in Figure <ref> e, we sample 300 bps before and after the TIS and PAS locations (green areas), which contain both coding and non-coding (yellow) regions. The TIS is located right in front of the coding region. We therefore observe that the primary reason DNAGPT-M can accurately identify the TIS is that most of its attention is focused on the non-coding regions. The same holds for the PAS recognition task. The attention maps in both cases demonstrate that DNAGPT exploits information in non-coding regions to identify GSRs.
§ MRNA EXPRESSION LEVEL PREDICTION
We then investigated whether DNAGPT can extract richer information from DNA sequences by attempting to predict the mRNA expression levels of the corresponding promoters directly from genomic sequence information. Following Xpresso <cit.>, we utilize 18,377 and 21,856 promoters together with the mRNA half-lives in human and mouse, respectively, and hold out 1,000 cases in each species for testing. Cap Analysis Gene Expression (CAGE) is used to refine the annotations. Xpresso utilizes a deep convolutional network to encode both the promoters and the half-lives and to predict the corresponding mRNA expression level.
We use DNAGPT to predict the mRNA abundance under the same settings as Xpresso, as indicated in the last line of Figure <ref> b: we combine the promoter sequences with the mRNA half-lives in a single input sequence to predict the mRNA abundance. We present the r^2 (coefficient of determination) values in Figure <ref> f. DNAGPT improves upon Xpresso from 0.59 to 0.62 for human mRNA abundance prediction and improves the result for mouse from 0.71 to approximately 0.73.
The input format of this task can be regarded as a special form unique to biological sequence analysis. Previously, such tasks could only be solved by specialized models designed by experts. The use of DNAGPT obviates the need to design ever more diverse and complex models, while also providing potential guidance from pre-trained information for downstream tasks.
§ ARTIFICIAL HUMAN GENOMES GENERATION
As the native task of the GPT model, we further investigate DNAGPT's performance on the generation of artificial human genomes (AGs). AGs can be used to protect genetic privacy and reduce the cost of genetic sample collection. Following the work in <cit.>, we fine-tune our DNAGPT on 5,008 haplotypes from the 1000 Genomes data <cit.>, which serve as the real genome sequences, and use DNAGPT to generate 5,000 AGs over a region of 10,000 single nucleotide polymorphisms (SNPs) for further analysis (i.e., 5,000 sequences, each of length 10,000). We compare DNAGPT with GAN and RBM models. The GAN model consists of a generator and a discriminator network, where the output of the generator and the input of the discriminator both have the size of the number of SNPs. For the RBM model, we use the model provided in <cit.>. The training and testing strategies for GAN and RBM remain the same as in <cit.>. We use the real 5,008 haplotypes for the comparisons of all methods (GAN, RBM, DNAGPT).
§.§ Analysis of artificial human genomes
We evaluate DNAGPT and the comparison methods from the following perspectives: principal components (PC) <cit.>, allele frequency (AF) <cit.>, linkage disequilibrium (LD) <cit.>, and pairwise haplotype distances. The evaluation metrics include Wasserstein distances <cit.> and correlations (r^2).
Principal components
We conduct a principal component analysis (PCA) of the AGs generated by GAN, RBM, and DNAGPT. We show the value distributions of the first six principal components using isoline maps in Figure <ref> a. The results show that the distributions of AGs generated by all methods roughly align with those of the real human genomes, while the DNAGPT samples exhibit the distribution most similar to that of the real sequences. We further compute the Wasserstein distance (lower is better) between the distributions of the AGs and the real genome sequences, obtaining 1.753, 3.432, and 1.131 for GAN, RBM, and DNAGPT, respectively.
Allele frequency
Allele frequency analysis is a genetic analysis method used to determine the frequency of different alleles at a gene locus. The allele frequency at a polymorphic site depends on the variation of that site across all cases. In this analysis, we measure the frequency of SNPs within the 5,000 AGs from each method as well as in the 5,008 real haplotypes. As shown in Figure <ref> b, only RBM exhibits a noticeably broader frequency distribution, with a correlation of 0.94 with the frequencies of the real AGs, whereas DNAGPT and GAN perform stably with a correlation of 0.99. We then restrict the comparison to sites with allele frequency less than 0.2. As shown in Figure <ref> c, DNAGPT outperforms GAN (0.94) and RBM (0.83) with a correlation of 0.96, indicating that DNAGPT captures the relevant information even for low-frequency alleles.
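The allele-frequency comparison above reduces to a few lines of array arithmetic. A sketch assuming haplotypes coded as 0/1 numpy arrays (function and argument names are ours):

```python
import numpy as np

def allele_freq_correlation(real, generated, low_freq_cutoff=None):
    """Correlation of per-site allele frequencies between two haplotype sets.

    real, generated: (n_haplotypes, n_snps) 0/1 arrays. If low_freq_cutoff
    is given (e.g. 0.2), only sites below that real frequency are compared.
    """
    f_real = real.mean(axis=0)
    f_gen = generated.mean(axis=0)
    if low_freq_cutoff is not None:
        keep = f_real < low_freq_cutoff
        f_real, f_gen = f_real[keep], f_gen[keep]
    return np.corrcoef(f_real, f_gen)[0, 1]
```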
Linkage disequilibrium
Linkage disequilibrium (LD) is a phenomenon in population genetics defined as the correlation of frequencies of two or more genetic markers (such as alleles or genes). We further analyze the LD for all generated and real sequences. The first and second panels of Figure <ref> a illustrate the difference in LD values between the human genomes generated by GAN and RBM and the real genomes, respectively, while the third panel shows the result for DNAGPT. In these panels, the lighter the color, the more similar the LD heat map is to that of the real genomes. The LD of DNAGPT is slightly weaker than that of the real genomes, while those of GAN and RBM are stronger than for the original genomes. Overall, the heat map performance of DNAGPT is better than those of GAN and RBM, as its colors are lighter. These conclusions are also verified by a comparison of correlation values. We present the correlation distributions in Figure <ref> b. The correlations between the LDs of the real sequences and those generated by GAN and RBM are 0.92 and 0.94, respectively, while DNAGPT achieves 0.98.
Pairwise haplotype distances analysis
Pairwise haplotype distances refer to the genetic distances between different haplotypes within a genome. To calculate these distances, we compare the differences in the alleles at corresponding loci between two haplotypes. In this analysis, we first calculate the pairwise distance distributions within each cluster of generated genomes (GAN vs GAN, RBM vs RBM, DNAGPT vs DNAGPT), defined as within-cluster, and then the pairwise distance distributions between real and generated genomes for each method (GAN vs Real, RBM vs Real, DNAGPT vs Real), defined as between-cluster. We then compute the Wasserstein distances between these two types of distributions and the within-cluster distribution of the real genomes (Real vs Real). We present the within-cluster Wasserstein distances in Figure <ref> c. Among them, the GAN distribution exhibits the largest gap from the real distribution, with a value of 108.15, followed by DNAGPT with 71.04; the genomes generated by RBM show the smallest discrepancy from the real genomes, with a value of 30.21. The between-cluster distances reflect the discrepancy between the pairwise distance distribution of each method's generated genomes and the real genomes. The genomes generated by DNAGPT are the most similar to the real genomes, with a value of 28.63, while RBM performs worst, followed closely by GAN.
§.§ Generation temperature of DNAGPT can influence the quality of generated genomes
When a trained DNAGPT generates a DNA sequence, we can control the randomness of the output by adjusting the generation temperature, which ranges from 0 to infinity: the higher the generation temperature, the more random the generated sequence. In the experiments mentioned earlier, our default generation temperature was 0.8. In this section, we raise the generation temperature to 1.2 to evaluate the performance of DNAGPT at different generation temperatures. The results are shown in Figure <ref> a and b. Figure <ref> a shows the Wasserstein distance and the correlations of allele frequency and linkage disequilibrium with the real distribution, while Figure <ref> b shows the Wasserstein distances of the pairwise haplotype distance distributions (within-cluster and between-cluster). We find that a larger generation temperature leaves the correlations of allele frequency and linkage disequilibrium virtually unchanged while increasing the distance from the real distribution. It also increases the Wasserstein distance of the pairwise haplotype distance distribution, indicating that a larger generation temperature makes the generated DNA sequences more diverse, at the cost of a slightly larger gap from the original distribution. Users can therefore adjust the generation temperature according to their needs, thereby controlling the diversity and authenticity of the generated sequences.
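Temperature enters only at the sampling step, by rescaling the logits before the softmax; a generic sketch:

```python
import torch

def sample_next_token(logits, temperature=0.8):
    """Sample one token id from the final-position logits.

    temperature < 1 sharpens the distribution (more conservative output),
    temperature > 1 flattens it (more diverse output).
    """
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```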
§ COMPARISONS OF DIFFERENT VERSIONS OF DNAGPT
In this section, we compare the three DNAGPT variants on the GSR recognition, mRNA expression level prediction, and artificial human genome generation tasks. We report the results in Figure <ref>.
For the GSR recognition task, we compare the three DNAGPT variants in Figure <ref> c. As the amount of pre-training data increases (human reference genome, reference genomes of 9 species, reference genomes of all mammals), the performance on downstream tasks also improves. The same trend can be observed for the mRNA expression level prediction task: in Figure <ref> d, although DNAGPT-M and DNAGPT-H are neck-and-neck on the human mRNA expression level prediction task, DNAGPT-M performs better than DNAGPT-H on the mouse task.
We further compare DNAGPT-H and DNAGPT-M on the artificial human genome generation task. In Figure <ref> e, the allele frequency correlations for the genomes generated by DNAGPT-M and DNAGPT-H are almost the same, with DNAGPT-M slightly better at 0.96 compared with 0.95 for DNAGPT-H. As for the LD correlations, Figure <ref> f shows that both DNAGPT-M and DNAGPT-H maintain an excellent level of 0.98. We further investigated the LD when considering different distances between SNPs: Figure <ref> g shows that both DNAGPT variants fit the real data distribution better than GAN and RBM, with DNAGPT-M slightly better than DNAGPT-H.
§ DISCUSSION
In summary, we have developed a unidirectional attention model, DNAGPT, for DNA sequence analysis that accommodates various types of downstream tasks across multiple species. We conducted pre-training on reference genomes from as many as 9 different species and introduced joint training of numbers and sequences during pre-training. To better encode the relationships between inputs and outputs for versatile task formats, we designed a set of token languages incorporating sequence, number, and control tokens. For the pre-training tasks, to better capture the uniqueness of DNA sequences, we introduced two tasks in addition to the next-token prediction task in GPT: GC content prediction and sequence order prediction. Finally, we utilized the token language to compile mixed inputs and outputs of DNA sequences and numerical properties.
In the genomic signals and regions recognition task, given a DNA sequence, DNAGPT determines with higher accuracy whether the sequence is a genuine genomic signal or region. Furthermore, DNAGPT can handle joint inputs of DNA sequences and mRNA half-lives to predict mRNA expression levels. In the artificial human genome generation task, the AGs generated by DNAGPT rank highly on a variety of evaluation metrics, indicating that DNAGPT effectively captures the underlying relationships and information within genomes.
While our model provides a general approach to DNA sequence analysis, several limitations should be noted and further investigated; they can be summarized from the following perspectives:
Multi-omics data
Currently, DNAGPT only works with DNA sequences. An extension to multi-omics and spatial-omics data would greatly enhance the applicability of DNAGPT. Joint training and analysis can reveal more latent information and handle a wider variety of downstream tasks, thereby better serving biological research.
Multi-modal data
Whether we are analyzing the genome, the transcriptome, or the proteome, we approach macroscopic phenomena from a microscopic perspective. Hence, we should also incorporate information from multi-modal data, such as pathology tissue images (images) and disease diagnostic reports (text), into a single model, to bring in information from different perspectives for a joint analysis of biological tasks.
Long sequence analysis
As a unique characteristic of biological data, long-sequence analysis tasks are very common in biological research, for example in the distal regulation of genes. How to process inputs and outputs with longer sequence lengths therefore deserves further exploration; this could be alleviated by switching to a memory-efficient model structure such as RWKV <cit.> or RetNet <cit.>.
Efficient adaptation of DNAGPT
Another issue concerns the efficient adaptation of DNAGPT, as users may not have enough resources to fine-tune it. While several methods provide solutions <cit.> for the efficient training of foundation models, these techniques remain to be tested and developed for DNAGPT. Moreover, we will further study zero-shot adaptation, the so-called 'emergent' ability of foundation models, in biology, allowing users to obtain results directly without the need for fine-tuning.
§ METHODS
Pre-training of DNAGPT
For DNAGPT-H, we collected the human reference genome from the Ensembl database <cit.>, with a total of 3 billion bps. During the data sampling stage, we employed a non-overlapping k-mers sampling strategy for the DNA sequence data. While sampling, we removed sequences with an 'N' (denoting "not detected") content ratio greater than 0.05, and performed random flipping with a probability of 0.5. We then encoded each input DNA sequence and the numerical information according to the token language and the pre-training tasks we designed. DNAGPT-H consists of 12 transformer blocks with unidirectional attention, each containing 12 attention heads with a hidden size of 768; the model has 0.1 billion trainable parameters. The learning rate is set to 1e-4 with a cosine decay scheduler, and the weight decay to 1e-2. We use the AdamW optimizer with betas set to (0.9, 0.95) and momentum set to 0.937, and we employ mixed precision for pre-training. The model was pre-trained for 15 epochs; pre-training on 8 Nvidia V100 32GB GPUs takes approximately one day.
For DNAGPT-M, we collected the reference genome information of 9 species from the Ensembl database <cit.>, including arabidopsis_thaliana, caenorhabditis_elegans, bos_taurus, danio_rerio, drosophila_melanogaster, escherichia_coli_gca_001721525, homo_sapiens, mus_musculus, and saccharomyces_cerevisiae. We removed the mitochondrial genomes of most species during preprocessing. After preprocessing, the number of bps in the genome of each species is: arabidopsis_thaliana (119146348 bps), caenorhabditis_elegans (100272607 bps), bos_taurus (2628394923 bps), danio_rerio (1345101833 bps), drosophila_melanogaster (137547960 bps), escherichia_coli_gca_001721525 (5176750 bps), homo_sapiens (3088286401 bps), mus_musculus (2723414844 bps), and saccharomyces_cerevisiae (12071326 bps), for a total of 10159412992 bps. The architecture and training strategies are the same as for DNAGPT-H.
Similar to DNAGPT-M, DNAGPT-S-512 uses the same model and hyperparameters, but the pre-training data are changed from the genomes of 9 species to the reference genomes of all mammals, with a total of approximately 200 billion bps. DNAGPT-S-512 was trained on these data for 2 epochs, which takes approximately one week.
Non-overlapping k-mers tokenization
A k-mer strategy composes k consecutive nucleotides into one token. Previous k-mer methods usually adopt overlapped tokenization: regardless of the value of k, the shift between consecutive samples is always 1, so an N-bp sequence yields (N-k+1) tokens, and, conversely, a model input of N tokens covers only (N+k-1) bps. In the non-overlapping k-mers strategy, the shift is equal to k, so an N-bp sequence yields N/k tokens, improving the efficiency by a factor of k. Equivalently, at a fixed model input of N tokens, the covered base length is extended from (N+k-1) to k × N bps, increasing the effective DNA sequence input length by approximately a factor of k compared with overlapped tokenization.
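For concreteness, a minimal tokenizer covering both conventions (a sketch; the real pipeline additionally handles 'N' filtering and variable k as described above):

```python
def kmer_tokens(seq, k=6, overlap=False):
    """Split a DNA string into k-mer tokens.

    With overlap=True the window shifts by 1 (N - k + 1 tokens for N bps);
    with overlap=False it shifts by k (about N / k tokens for N bps).
    """
    step = 1 if overlap else k
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, step)]

# kmer_tokens('ATCGTCAA', k=4) -> ['ATCG', 'TCAA']
```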
Fine-tuning of DNAGPT
When fine-tuning DNAGPT, we first specify the input sequence information to organize the data and initialize the model, which automatically instantiates suitable encoding heads. For example, for classification and generation tasks, the sequence embedding and classification heads are activated for input and output; for regression tasks and more complex composite tasks, DNAGPT first composes the input into joint embeddings and then selects regression heads for the task output. After the embedding layers and task heads are set, we load the pre-trained weights into the model, discarding the weights of unused heads. We then fine-tune DNAGPT on data from the downstream tasks. We use the same hyperparameters across all downstream tasks: max learning rate, 5 × 10^-5; learning rate scheduler, cosine with warmup; optimizer, AdamW; warmup epochs, 5; weight decay, 1e-1; batch size, 8.
For genomic signals and regions recognition, we use the sequence embedding and classification head; the evaluation metrics are ACC (accuracy), F1 (F1 score), MCC (Matthews correlation coefficient), precision, and recall, and we report the complete results in Table <ref>. For mRNA expression level prediction, both the sequence embedding and the number embedding are invoked to handle the sequence and number inputs, and the regression head predicts the expression level. For artificial human genome generation, only the sequence embedding and classification head are used to handle the input and output sequences. During fine-tuning, we add a stop symbol at the last position of the input sequence. When generating sequences, we remove, in a post-processing step, all sequences that lack the stop symbol or have it at an incorrect position. For the temperature adjustment, we keep the number of training epochs and all other hyperparameters unchanged.
§ SUPPLEMENTARY
§.§ Other results of DNAGPTs on genomic signals and regions recognition
Full results of DNAGPTs on genomic signals and regions recognition
We show in Table <ref> the results of DNAGPT-M on the various GSR recognition datasets, and those of DNAGPT-S-512 in Table <ref>. Both DNAGPT variants demonstrate stable results across GSR recognition datasets from various species, with DNAGPT-S-512 performing best.
Attention maps of DNAGPT-M
We show the attention map of each layer of DNAGPT-M in Figure <ref> a. The input is a PAS (AATAAA) sequence with the PAS site located in the middle of the sequence. We observe that almost all layers focus on the latter half of the sequence, with the shallow and deep layers attending more broadly than the middle layers. We also notice that the attention maps of the shallow layers are smoother than those of the deep layers: although the attention range of the deep layers is as extensive as that of the shallow layers, the deep layers tend to focus on a few specific tokens rather than presenting a smooth profile. This indicates that some regions in the non-coding areas may be more critical for PAS recognition than others. In Figure <ref> b, we display the attention maps of each layer of DNAGPT-M with TIS input. Interestingly, compared with the PAS input, the information attended to in the shallow layers is more consistent, with a notable difference only in layer 1. In the later layers, the attention maps for the TIS input start to focus on tokens at earlier positions, i.e., on non-coding region information. This suggests that the information the model focuses on in the shallow layers is more approximate, while the deep layers can pinpoint the locations of important tokens more precisely.
§.§ All tokens used in DNAGPT
There are 6 categories of tokens in the token language of DNAGPT. The sequence tokens are DNA sequences encoded with the k-mers tokenization strategy. For example, if we utilize 6-mers and only consider the symbols 'A, C, G, T, N', the total number of discrete tokens is 5^6 + 5^5 + 5^4 + 5^3 + 5^2 + 5^1 = 19530. As for the number tokens, we feed the numbers directly into the numerical embedding layer and the regression head to encode and decode them. For binary classification tasks, we utilize 'A' and 'N' to distinguish True from False. The instruction tokens identify the input and output types of a sequence: we assign an instruction token to each species, and we also assign instruction tokens for classification tasks and numerical tokens, which prompt the model to generate the corresponding token types separately. In biological sequences, there is no natural logical relationship between tokens as in natural language; to enable the model to understand the relationships among sequences, we design two connection tokens that specify the relationship between the token series before and after them. Here, '+' represents the fusion of the preceding and succeeding information, and '=' represents a cause-effect relation, with the input before '=' and the output after it. Finally, in order to better adapt to different types of downstream tasks, we also reserve some special tokens.
§.§ Datasets
§.§.§ Genomic signals and regions recognition
The datasets used for genomic signals and regions recognition are cDNA data. We extracted both polyadenylation signals (PAS) and translation initiation sites (TIS) from four genomes. For the Homo sapiens (human) genome, the assembly GRCh37 (also known as hg19) was employed, while the primary assembly GRCm38 was used for the Mus musculus (mouse) genome. The cDNA data for these genomes were sourced from the Mammalian Gene Collection (MGC). For the Bos taurus (bovine) genome, the assembly Bos_taurus_UMD_3.1.1 was utilized, with the cDNA data downloaded from the Ensembl organization. Finally, for the Drosophila melanogaster (fruit fly) genome, Release_6 (annotation release Dmel_Release_6.01) was employed, and the cDNA data were obtained from FlyBase. The sampling method is as follows: we first locate the positions of the GSRs, then extract 300 bps of sequence both before and after each GSR and concatenate them. It is important to note that the GSR motif itself is removed during preprocessing, to ensure that the model recognizes GSRs based solely on the information near the motif rather than on the motif itself. The sampled negative sequences must satisfy the following requirements:
(1) Sequences with the same motifs but not related to polyadenylation and translation processes.
(2) Sequences are sampled from the chromosome whose average GC-content was nearest to the entire genome's average GC-content.
Consequently, negative data for human, mouse, bovine, and fruit fly were extracted from chromosomes 21, 13, 28, and X, respectively.
The numbers of positive samples for each dataset are shown in Table <ref>.
§.§.§ Artificial human genomes generation
For artificial human genome generation, we utilized the 1000 Genomes data <cit.> as the fine-tuning dataset. There are 2,504 individuals (5,008 haplotypes) in the dataset, and the data we used are a dense region of 10,000 SNPs from chromosome 15. For evaluation, the model produced 5,000 SNP sequences for analysis, and all our analyses were conducted on the generated data.
§.§.§ mRNA expression levels prediction
The dataset is composed of human protein-coding gene sequences located upstream and downstream of the transcription start site (TSS). The promoter of the gene is found in the sequence upstream of the TSS, while the exons and introns of the gene are found downstream. The input sequences are sourced from Xpresso <cit.>. In this dataset, the TSS positions were meticulously revised by the authors of Xpresso using Cap Analysis Gene Expression (CAGE) <cit.>, a technique for determining the actual TSS location. The Xpresso dataset consists of 18,377 promoters, divided into 16,377 for training, 1,000 for validation, and 1,000 for testing, as described in Xpresso <cit.>. The maximum length of a promoter's TSS sequence is set to 20,000 base pairs. The default sampling range in Xpresso is from 3,000 to 13,500, whereas DNAGPT can utilize the whole sequence. Additionally, the Xpresso DNA input includes half-life features that provide general information about the gene, such as gene length and the number of introns; the default feature input is an 8-bit array.
|
http://arxiv.org/abs/2307.04742v1 | 20230710175309 | Parallel Tempered Metadynamics: Overcoming potential barriers without surfing or tunneling | [
"Timo Eichhorn",
"Gianluca Fuwa",
"Christian Hoelbling",
"Lukas Varnhorst"
] | hep-lat | [
"hep-lat"
] |
WUB/23-00

Parallel Tempered Metadynamics: Overcoming potential barriers without surfing or tunneling

Timo Eichhorn, Gianluca Fuwa, Christian Hoelbling, Lukas Varnhorst

Department of Physics, University of Wuppertal, Gaußstraße 20, 42119 Wuppertal, Germany
At fine lattice spacings, Markov chain Monte Carlo simulations of QCD and other gauge theories are plagued by slow (topological) modes that give rise to large autocorrelation times. These, in turn, lead to statistical and systematic errors that are difficult to estimate. Here, we demonstrate that for a relevant set of parameters considered, Metadynamics can be used to reduce the autocorrelation times of topological quantities in 4-dimensional SU(3) gauge theory by at least two orders of magnitude compared to conventional update algorithms.
However, compared to local update algorithms and the Hybrid Monte Carlo algorithm, the computational overhead is significant, and the required reweighting procedure may considerably reduce the effective sample size. To deal with the latter problem, we propose modifications to the Metadynamics bias potential and the combination of Metadynamics with parallel tempering. We test the new algorithm in 4-dimensional SU(3) gauge theory and find that it can achieve topological unfreezing without compromising the effective sample size. Preliminary scaling tests in 2-dimensional U(1) gauge theory show that these modifications lead to improvements of more than an order of magnitude compared to standard Metadynamics, and to an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms.
§ INTRODUCTION
In recent years, physical predictions based on lattice simulations have reached sub-percent accuracies <cit.>. With ever-shrinking uncertainties, the need for precise extrapolations to the continuum grows, which in turn necessitates ever finer lattice spacings. Current state-of-the-art methods for simulations of lattice gauge theories either rely on a mixture of heat bath <cit.> and overrelaxation <cit.> algorithms for pure gauge theories, or molecular-dynamics-based algorithms like the Hybrid Monte Carlo algorithm (HMC) <cit.> or variations thereof for simulations including dynamical fermions. For all these algorithms, the computational effort to carry out simulations dramatically increases at fine lattice spacings due to critical slowing down. While the exact behavior depends on a number of factors, such as the update algorithms, the exact discretization of the action, and the choice of boundary conditions, the scaling of the integrated autocorrelation times with the inverse lattice spacing can usually be described by a power law.
In addition to the general diffusive slowing down, topologically non-trivial gauge theories may exhibit topological freezing <cit.>. This effect appears due to the inability of an algorithm to overcome the action barriers between topological sectors, which can lead to extremely long autocorrelation times of topological observables and thus an effective breakdown of ergodicity.
Over the years, several strategies have been developed to deal with this situation. On the most basic level, it has become customary, in large scale simulations, to monitor the topological charge of the configurations in each ensemble, thus avoiding regions of parameter space, which are affected by topological freezing <cit.>. Another possibility to circumvent the problem consists in treating fixed topology as a finite volume effect and either correcting observables for it <cit.>, or increasing the physical volume sufficiently to derive the relevant observables from local fluctuations <cit.>. It is also possible to use open boundary conditions in one lattice direction <cit.>, which invalidates the concept of an integer topological charge for the prize of introducing additional boundary artifacts and a loss of translational symmetry.
Despite the success of these strategies in many relevant situations, the need for a genuine topology changing update algorithm is still great. This is evident from the large number and rather broad spectrum of approaches that are currently being investigated in this direction. Some of these approaches address critical slowing down in general, whereas others focus particularly on topological freezing. These approaches include parallel tempering <cit.>, modified boundary conditions <cit.> and combinations of both <cit.>; multiscale thermalization <cit.>, instanton(-like) updates <cit.>, Metadynamics <cit.>, Fourier acceleration <cit.> and trivializing maps <cit.>, also in combination with machine learning <cit.>. For a recent review, see e.g. <cit.>. Additionally, recent years have seen multitudinous efforts to use generative models to sample configurations <cit.>.
In this work, we propose a new update algorithm, parallel tempered Metadynamics, or PT-MetaD for short, and demonstrate its efficiency in 4-dimensional SU(3) at parameter values, where conventional update algorithms suffer from topological freezing. In its basic variant, which we present here, PT-MetaD consists of two update streams simulating the same physical system. One of the streams is an efficient, conventional algorithm, while the second one includes a bias potential that facilitates tunneling between topological sectors. At regular intervals, swaps between the two streams are suggested, so that the good topological sampling from the second stream carries over to the first one. The algorithm thus combines ideas from parallel tempering <cit.>, Metadynamics <cit.> and multicanonical simulations <cit.>, leading to an efficient sampling of topological sectors while avoiding the problem of small effective sample sizes, which is usually associated with reweighting techniques such as Metadynamics or multicanonical simulations. Additionally, the inclusion of fermions into PT-MetaD is conceptually straightforward.
This paper is organized as follows. We start out by giving a general introduction to Metadynamics in <Ref>. Afterwards, <Ref> describes our simulation setup, including our choice of actions, observables, and update algorithms. Some details on the application of Metadynamics in the context of SU(3) gauge theory are also given. In <Ref>, we present baseline results obtained with conventional update algorithms, including a rough determination of gradient flow scales for the DBW2 action. In <Ref> we present results obtained with pure Metadynamics for 4-dimensional SU(3), and discuss several possible improvements. In <Ref> we introduce parallel tempered Metadynamics and show some scaling tests of the new algorithm in 2-dimensional U(1) gauge theory, as well as exploratory results in 4-dimensional SU(3). Finally, in <Ref>, we conclude with a summary and outlook on the application of the new algorithm to full QCD.
§ METADYNAMICS
Consider a system described by a set of degrees of freedom {U}, where the states are distributed according to the probability density
p(U) = 1/Z e^-S(U),
with the partition function Z defined as
Z = ∫𝒟[U] e^-S(U).
The expectation value of an observable O is defined as
⟨ O ⟩ = ∫𝒟[U] p(U) O(U).
In the context of lattice gauge theories, the integration measure 𝒟[U] is usually the product of Haar measures for each link variable, but more generally 𝒟[U] may be understood as a measure on the configuration space of the system.
Metadynamics <cit.> is an enhanced-sampling method, based on the introduction of a history-dependent bias potential V_t(s(U)). This potential is introduced by replacing the action S(U) with S^M_t(U) = S(U) + V_t(s(U)), where t is the current simulation time. This potential modifies the dynamics of the system and depends on a number of observables s_i(U), with i ∈{1, …, N }, that are referred to as collective variables (CVs). These CVs span a low-dimensional projection of the configuration space of the system, and may generally be arbitrary functions of the underlying degrees of freedom {U}. However, when used in combination with molecular-dynamics-based algorithms such as the Hybrid Monte Carlo algorithm, the CVs need to be differentiable functions of the underlying degrees of freedom. During the course of a simulation, the bias potential is modified in such a way as to drive the system away from regions of configuration space that have been explored previously, eventually converging towards an estimate of the negative free energy as a function of the CVs, up to a constant offset <cit.>. Usually, this is accomplished by constructing the potential from a sum of Gaussians g(s), so that at simulation time t, the potential is given by
V_t(s) = ∑_t' ≤ t∏_i=1^N g(s_i - s_i(t')).
The exact form of the Gaussians is determined by the parameters w and δ s_i:
g(s_i) = w exp(-s_i^2/2 δ s_i^2).
Both parameters affect the convergence behavior of the potential in a similar way: Increasing the height w or the widths δ s_i may accelerate the convergence of the potential during early stages of the simulation, but leads to larger fluctuations around the equilibrium during later stages. Furthermore, the widths δ s_i effectively introduce a smallest scale that can still be resolved in the space spanned by the CVs, which needs to be sufficiently small to capture the relevant details of the potential.
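As an illustration of this build-up, the following self-contained Python sketch deposits Gaussians along a Metropolis trajectory of a single collective variable. The double-well "action" is a toy stand-in for a real gauge system, and all parameter values are illustrative choices of ours, not taken from any production code.

```python
import numpy as np

rng = np.random.default_rng(1)
s_grid = np.linspace(-8.0, 8.0, 801)          # histogram grid for the CV
V = np.zeros_like(s_grid)                     # bias potential V_t(s)
w, ds = 0.05, s_grid[1] - s_grid[0]           # Gaussian height and width

def bias(s):
    """Linear interpolation of V between the two nearest grid points."""
    return np.interp(s, s_grid, V)

# Toy stand-in for the biased update: Metropolis random walk of the CV
# in a double-well action, so the script runs on its own.
action = lambda s: (s**2 - 4.0)**2 / 8.0
s = 0.0
for t in range(20000):
    s_new = s + rng.normal(0.0, 0.3)
    dS = (action(s_new) + bias(s_new)) - (action(s) + bias(s))
    if rng.random() < np.exp(-dS):
        s = s_new
    V += w * np.exp(-(s_grid - s)**2 / (2.0 * ds**2))   # deposit one Gaussian
# After many depositions, V approximates the negative free energy -F(s)
# up to a constant offset, as described above.
```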
If the bias potential has reached a stationary state, i.e., its time-dependence in the region of interest is just an overall additive constant, the modified probability density, which we shall also refer to as target density, is given by
p'(U) = 1/Z' e^-S(U) - V(s(U)),
with the modified partition function
Z' = ∫𝒟[U] e^-S(U) - V(s(U)).
Expectation values with respect to the modified distribution can then be defined in the usual way, i.e., via
⟨ O ⟩' = ∫𝒟[U] p'(U) O(U).
On the other hand, expectation values with respect to the original, unmodified probability density can be written in terms of the new probability distribution with an additional weighting factor. For a dynamic potential, there are different reweighting schemes to achieve this goal <cit.>, but if the potential is static, the weighting factors are directly proportional to the exponential of the bias potential:
⟨ O ⟩ = ∫𝒟[U] p'(U) O(U) e^V(s(U))/∫𝒟[U] p'(U) e^V(s(U)).
The case of a static potential is thus essentially the same as a multicanonical simulation <cit.>.
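In practice, this static-potential reweighting is just a weighted average with weights e^{V}. A minimal sketch (the function name and arguments are ours):

```python
import numpy as np

def reweighted_mean(O, V_of_Q):
    """Unbiased <O> from a run with a static bias potential V, following the
    reweighting formula above: weight w_i = exp(+V(Q_meta,i)) per configuration.
    O and V_of_Q are arrays over the measured configurations."""
    w = np.exp(V_of_Q - V_of_Q.max())   # subtract the maximum for stability
    return np.sum(w * O) / np.sum(w)
```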
In situations where the evolution of the system is hindered by high action barriers separating relevant regions of configuration space, Metadynamics can be helpful in overcoming those barriers, since the introduction of a bias potential modifies the marginal distribution over the set of CVs. For conventional Metadynamics, the bias potential is constructed in such a way that the marginal modified distribution is constant:
p'(s_i) = ∫𝒟[U] p'(U)δ(s_i-s_i(U))=const.
Conversely, for a given original distribution p(s) and a desired target distribution p'(s), the required potential is given by
V(s) = log(p(s)/p'(s)),
since the biased weight is proportional to e^-V(s) times the original one.
However, it should be noted that even if the bias potential completely flattens out the marginal distribution over the CVs, the simulation is still expected to suffer from other (diffusive) sources of critical slowing down as is common for Markov chain Monte Carlo simulations.
§ SIMULATION SETUP AND OBSERVABLES
§.§ Choice of gauge actions
For our simulations of SU(3) gauge theory, we work on a 4-dimensional lattice Λ with periodic boundary conditions. Configurations are generated using the Wilson <cit.> and the DBW2 <cit.> gauge action, both of which belong to a one-parameter family of gauge actions involving standard 1 × 1 plaquettes as well as 1 × 2 planar loops, which may be expressed as
S_g = β/3∑_x ∈Λ( ∑_μ < ν c_0 ( 3 - Re tr[𝒲_μ, ν(x)] )
+ ∑_μ≠ν c_1 ( 3 - Re tr[𝒲_μ, 2ν(x)] ) ).
Here, 𝒲_k μ, l ν(x) refers to a Wilson loop of shape k × l in the μ-ν plane originating at the site x. The coefficients c_0 and c_1 are constrained by the normalization condition c_0 + 8 c_1 = 1 and the positivity condition c_0 > 0, where the latter condition is sufficient to guarantee that the set of configurations with minimal action consists of locally pure gauge configurations <cit.>. For the Wilson action (c_1 = 0), only plaquette terms contribute, whereas the DBW2 action (c_1 = -1.4088) also involves rectangular loops.
It is well known that the critical slowing down of topological modes is more pronounced for improved gauge actions in comparison to the Wilson gauge action <cit.>: A larger negative coefficient c_1 suppresses small dislocations, which are expected to be the usual mechanism mediating transitions between topological sectors on the lattice. Among the most commonly used gauge actions, this effect is most severely felt by the DBW2 action. In previous works <cit.>, local update algorithms were found to be inadequate for exploring different topological sectors in a reasonable time frame. Instead, the authors had to generate thermalized configurations in different topological sectors using the Wilson gauge action, before using these configurations as starting points for simulations with the DBW2 action. Thus, this action allows us to explore parameters where severe critical slowing down is visible, while avoiding very fine lattice spacings and thereby limiting the required computational resources.
§.§ Observables
The observables we consider here are based on various definitions of the topological charge, and Wilson loops of different sizes at different smearing levels. The unrenormalized topological charge is defined using the clover-based definition of the field-strength tensor:
Q_c = 1/32 π^2∑_x ∈Λϵ_αβγδ tr[F^clov_αβ(x) F^clov_γδ(x)].
This field-strength tensor is given by
F^clov_αβ(x) = -i/8(C_αβ(x) - C_βα(x) ),
where the clover term C_αβ(x) is defined as
C_αβ(x) = P_α, β(x) + P_β, -α(x)
+ P_-α, -β(x) + P_-β, α(x),
P_α, β(x) denotes the plaquette:
P_α, β(x) = U_α(x) U_β(x + α̂) U_α^†(x + β̂) U_β^†(x).
Alternatively, the topological charge may also be defined via the plaquette-based definition, here denoted by Q_p:
Q_p = 1/32 π^2∑_x ∈Λϵ_αβγδ tr[F^plaq_α, β(x) F^plaq_γ, δ(x)].
Similar to the clover-based field-strength tensor, F^plaq_α, β(x) is defined as:
F^plaq_αβ(x) = -i/2(P_α, β(x) - P_β, α(x) ).
Note that both Q_c and Q_p formally suffer from 𝒪(a^2) artifacts, although the coefficient is typically smaller for the clover-based definition Q_c. The topological charge is always measured after applying 𝒪(30) steps of stout smearing <cit.> with a smearing parameter ρ = 0.12. To estimate the autocorrelation times of the system, it is also useful to consider the squared topological charge <cit.>. Additionally, we also consider the Wilson gauge action and n × n Wilson loops for n ∈{2, 4, 8} at different smearing levels. We denote these by S_w and 𝒲_n respectively.
§.§ Update algorithms
Throughout this work, we employ a number of different update schemes: To illustrate critical slowing down of conventional update algorithms and to set a baseline for comparison with Metadynamics-based algorithms, we use standard Hybrid Monte Carlo updates with unit length trajectories (1HMC), a single heat bath sweep (1HB), five heat bath sweeps (5HB), and a single heat bath sweep followed by four overrelaxation sweeps (1HB+4OR). The local update algorithms are applied to three distinct SU(2) subgroups during each sweep <cit.>, and the HMC updates use an Omelyan-Mryglod-Folk fourth-order minimum norm integrator <cit.> with a step size of ϵ = 0.2, which leads to acceptance rates above 99% for the parameters used here.
We compare these update schemes to Metadynamics HMC updates with unit length trajectories (MetaD-HMC), and a combination of parallel tempering with Metadynamics (PT-MetaD) which is discussed in more detail in <Ref>.
An important requirement for the successful application of Metadynamics is the identification of appropriate CVs. In our case, the CV should obviously be related to the topological charge. However, it should not always be (close to) integer-valued, but rather reflect the geometry of configuration space with respect to the boundaries between topological sectors. On the other hand, the CV needs to track the topological charge closely enough for the algorithm to be able to resolve and overcome the action barriers between topological sectors. A straightforward approach is to apply only a moderate amount of some kind of smoothing procedure, such as cooling or smearing, to the gauge fields before measuring the topological charge. Since these smoothing procedures involve some kind of spatial averaging, the action will become less local, which complicates the use of local update algorithms. Therefore, we use the HMC algorithm to efficiently update the entire gauge field at the same time, which requires a differentiable smoothing procedure such as stout <cit.> or HEX smearing <cit.>. Due to its simpler implementation compared to HEX smearing, we choose stout smearing here. Previous experience <cit.> seems to indicate that four to five stout smearing steps with a smearing parameter ρ = 0.12 strike a reasonable balance between having a smooth CV and still representing the topological charge accurately.
The force contributed by the topological bias potential may be written in terms of the chain rule:
F_μ, meta(x) = - ∂ V_meta/∂ Q_meta∂ Q_meta/∂ U^(n)_μ_n(x_n)
×∂ U^(n)_μ_n(x_n)/∂ U^(n-1)_μ_n-1(x_n-1)…∂ U^(1)_μ_1(x_1)/∂ U_μ(x)
Here we have introduced the notation V_meta for the bias potential and Q_meta for the CV to clearly distinguish it from other definitions of the topological charge. The first term in the equation, corresponding to the derivative of the bias potential with respect to Q_meta, is trivial, but the latter two terms are more complicated: The derivative of Q_meta with respect to the maximally smeared field U^(n) is given by a sum of staples with clover term insertions, and the final term corresponds to the stout force recursion <cit.> that also appears during the force calculation when using smeared fermions. Note that in machine learning terminology, this operation is essentially a backpropagation <cit.> and may be computed efficiently using reverse mode automatic differentiation. More details on the calculation of the force can be found in <Ref>.
The bias potential is constructed from a sum of one-dimensional Gaussians, as described in <Ref>, and stored as a histogram. Due to the charge conjugation symmetry, we can update the potential symmetrically. Values at each point are reconstructed by linearly interpolating between the two nearest bins, and the derivative is approximated by their finite difference. To limit the evolution of a system to relevant regions of the phase space, it is useful to introduce an additional penalty term to the potential once the absolute value of Q_meta has crossed certain thresholds Q_min and Q_max. If the system has exceeded the threshold, the potential is given by the outermost value of the histogram, plus an additional term that scales quadratically with the distance to the outer limit of the histogram.
Unless mentioned otherwise, we have used the following values as default parameters for the potential: Q_max/min = ±8, n_bins = 800, w = 0.05, while δ Q^2 has always been set equal to the bin width, i.e., (Q_max - Q_min) / n_bins.
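A compact sketch of this storage scheme is given below. It illustrates the linear interpolation, the finite-difference derivative, and the quadratic threshold penalty; the penalty strength k_pen is a free choice of ours, and the Gaussian width is taken equal to the bin width here.

```python
import numpy as np

class BiasPotential:
    """Histogram-based bias potential as described above: linear interpolation
    inside [Q_min, Q_max], quadratic penalty term outside the thresholds."""
    def __init__(self, q_min=-8.0, q_max=8.0, n_bins=800, w=0.05, k_pen=100.0):
        self.edges = np.linspace(q_min, q_max, n_bins + 1)
        self.V = np.zeros(n_bins + 1)
        self.w, self.dq, self.k = w, (q_max - q_min) / n_bins, k_pen

    def __call__(self, q):
        if q < self.edges[0]:      # outermost value plus quadratic penalty
            return self.V[0] + self.k * (self.edges[0] - q)**2
        if q > self.edges[-1]:
            return self.V[-1] + self.k * (q - self.edges[-1])**2
        return np.interp(q, self.edges, self.V)

    def deriv(self, q):
        """Finite difference of the two nearest bins, as in the text."""
        i = np.clip(np.searchsorted(self.edges, q) - 1, 0, len(self.V) - 2)
        return (self.V[i + 1] - self.V[i]) / self.dq

    def update(self, q):
        """Deposit one Gaussian symmetrically in +/- q (charge conjugation)."""
        for qq in (q, -q):
            self.V += self.w * np.exp(-(self.edges - qq)**2 / (2 * self.dq**2))
```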
Note that it is often convenient to build up a bias potential in one or several runs, and then simulate and measure with a static potential generated in the previous runs. In some sense, this can be thought of as a combination of Metadynamics and multicanonical simulation.
§ RESULTS WITH CONVENTIONAL UPDATE ALGORITHMS
To establish a baseline to compare our results to, we have investigated the performance of some conventional update algorithms using the Wilson and DBW2 gauge actions. Furthermore, we have made a rough determination of the gradient flow scales t_0 and w_0 for the DBW2 action. Some preliminary results for the Wilson action were already presented in <cit.>.
§.§ Critical slowing down with Wilson and DBW2 gauge actions
In order to study the scaling of autocorrelations for different update schemes, we have performed a series of simulations with the Wilson gauge action on a range of lattice spacings. The parameters were chosen in such a way as to keep the physical volume approximately constant at around (1.1 fm)^4, using the scale given by the rational fit function in <cit.>, which was based on data from <cit.>. A summary of the simulation parameters can be found in <Ref>.
Since autocorrelation times near second-order phase transitions are expected to be described by a power law, we use the following fit ansatz in an attempt to parameterize the scaling:
τ_int = c (a/r_0)^z
All autocorrelation times and their uncertainties are estimated following the procedure described in <cit.>.
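A rough sketch of such a power-law fit in log-log form is shown below; the numbers are illustrative only and are not the measured data of this work, and the full analysis would also propagate the uncertainties of the autocorrelation times.

```python
import numpy as np

def fit_dynamical_exponent(a_over_r0, tau_int):
    """Least-squares fit of tau_int = c * (a/r0)^z in log-log coordinates."""
    z, log_c = np.polyfit(np.log(a_over_r0), np.log(tau_int), 1)
    return np.exp(log_c), z

# illustrative numbers only
c, z = fit_dynamical_exponent(np.array([0.10, 0.08, 0.06, 0.05]),
                              np.array([20.0, 60.0, 250.0, 640.0]))
print(c, z)
```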
<Ref> shows the scaling of the integrated autocorrelation times of 2 × 2 Wilson loops 𝒲_2 and the square Q_c^2 of the clover-based topological charge with the lattice spacing. Additionally, the figure also includes power law fits to the data and the resulting values for the dynamical critical exponents z(𝒲_2) and z(Q_c^2). Both observables were measured after 31 stout smearing steps with a smearing parameter ρ = 0.12.
While the integrated autocorrelation times of both observables increase towards finer lattice spacings and are adequately described by a power law behavior, the increase is much steeper for the squared topological charge than for the smeared 2 × 2 Wilson loops. Below a crossover point at a ≈ 0.08 fm, the autocorrelation times of the squared topological charge start to dominate. They can be described both by a dynamical critical exponent z ≈ 5 and, alternatively, by an exponential increase, which was first suggested in <cit.>. This behavior is compatible with the observations in <cit.>.
In contrast, the autocorrelation time of Wilson loops is compatible with a much smaller exponent z ≈ 1.2. As can be seen in <Ref>, the critical exponent does not change significantly with the size of the Wilson loop after 31 stout smearing steps. Generally, the integrated autocorrelation times of smeared Wilson loops slightly increase both with the size of the loops and the number of smearing levels. The only exception to this behavior occurs for larger loops, where a few steps of smearing are required to obtain a clean signal and not measure the autocorrelation of the noise instead.
Regarding the different update algorithms, the unit length HMC does show a somewhat better scaling behavior for all observables than the local update algorithms, but it is also about a factor 7 more computationally expensive per update step (see <Ref>).[Since we are ultimately interested in dynamical fermion simulations, we do not consider the more efficient, local HMC variant presented in <cit.>, as it is applicable to pure gauge theories only.] For all local update algorithms considered here, the critical exponents are very similar, but the combination of one heat bath and four overrelaxation steps has the smallest prefactor. It is interesting to note, that this algorithm is also faster by more than a factor 2 than the five-step heat bath update scheme, which does not profit from the inclusion of overrelaxation steps. The single step heat bath without overrelaxation, although numerically cheaper, does have the worst prefactor of the local update algorithms.
Note that the reported numbers differ from those in <cit.> due to a different fit ansatz (in the proceedings, the fit ansatz included an additional constant term).
For the DBW2 action, the problem is more severe. <Ref> shows the time series of the topological charge for two runs using the 1HB+4OR and the 1HMC update scheme. Both simulations were done on a 16^4 lattice at β = 1.25 using the DBW2 action.
Evidently, both update schemes are unable to tunnel between different topological sectors in a reasonable time. Only a single configuration during the 1HB+4OR run and two (successive) configurations during the 1HMC run fulfill the condition |Q_c| > 0.5.
§.§ Scale setting for the DBW2 action
To the best of our knowledge, scales for the DBW2 action in pure gauge theory have only been computed based on simulations with β≤ 1.22 <cit.>, and interpolation formulas are only available based on data with β≤ 1.04 <cit.>. Since here we perform simulations at β = 1.25, we compute approximate values for t_0 <cit.> and w_0 <cit.>, which allows us to estimate our lattice spacings for comparison to the Wilson results. Both scales are based on the density E, which is defined as:
E = 1/4V∑_x ∈Λ F_μν^a(x) F_μν^a(x) = -1/2V∑_x ∈Λ tr[F_μν(x) F_μν(x)].
Similar to the topological charge definitions, we adopt a plaquette- and clover-based definition of the field strength tensor, with the only difference being that the components are also made traceless, and not just anti-hermitian. The gradient flow scales t_0 and w_0 are both defined implicitly:
ℰ(t) = t^2 ⟨E ⟩ |_t = t_0 = 0.3
W(t) = t d/dt ℰ(t) |_t = w_0^2 = 0.3
The flow equation was integrated using the third-order commutator free Runge-Kutta scheme from <cit.> with a step size of ϵ = 0.025. Measurements of the clover-based energy density were performed every 10 integration steps, and t^2 ⟨ E(t) ⟩ was fitted with a cubic spline, which was evaluated with a step size of 0.001. For every value of β, two independent simulations with 100 measurements each were performed on 48 × 32^3 lattices. Every measurement was separated by 200 update sweeps with the previously described 1HB+4OR update scheme, and the initial 2000 updates were discarded as thermalization phase. Our results are displayed in <Ref>.
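The scale extraction itself amounts to locating the crossings of the two implicit definitions above. The following sketch mirrors the described procedure (spline of t^2⟨E⟩, evaluation on a fine grid); the input arrays and names are ours, and no root is returned if the target is never reached.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def flow_scales(t, E_mean, target=0.3):
    """Determine t0 and w0 from flow-time series t and <E>(t).
    Assumes t is strictly increasing and the target is crossed once."""
    cal_E = CubicSpline(t, t**2 * E_mean)        # E(t) = t^2 <E>
    W = lambda x: x * cal_E(x, 1)                # t dE/dt via spline derivative
    grid = np.arange(t[0], t[-1], 0.001)
    t0 = grid[np.argmax(cal_E(grid) >= target)]      # first crossing of 0.3
    w0_sq = grid[np.argmax(W(grid) >= target)]
    return t0, np.sqrt(w0_sq)
```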
Using the physical values from <cit.>, these results imply a physical volume of approximately (0.95 fm)^4 and a temperature of around 207 MeV for the 16^4 lattice from the previous section.
In order to facilitate comparison with other results, we also provide an interpolation of our lattice spacing results. For this purpose, we use a rational fit ansatz with three fit parameters
log(t_0 / a^2) = (8 π^2/33) β (1 + d_1/β + d_2/β^2)/(1 + d_3/β)
that is asymptotically consistent with perturbation theory <cit.> and has a sufficient number of degrees of freedom to describe our data well. For our reference, clover-based t_0 scale setting, this results in a fit with χ^2 / d.o.f.≈ 1.31 and parameters d_1 ≈ 1.0351, d_2 ≈ -1.3763, d_3 ≈ 0.4058, which is displayed in <Ref>.
We want to emphasize that these results are not meant to be an attempt at a precise scale determination, but rather only serve as an approximate estimate. Especially for the finer lattices, the proper sampling of the topological sectors can not be guaranteed, and the comparatively small volumes may introduce non-negligible finite volume effects.
§ RESULTS WITH METADYNAMICS
<Ref> shows the time series of the topological charge from simulations with the HMC and the MetaD-HMC with five and ten stout smearing steps on a 22^4 lattice at β = 6.4035, using the Wilson gauge action.
Both MetaD-HMC runs tunnel multiple times between different topological sectors, whereas the conventional HMC essentially displays a single tunneling event between sectors Q = 0 and Q = 1. A noteworthy difference between the two MetaD-HMC runs is the increase of fluctuations with higher amounts of smearing. If too many smearing steps are used to define the CV, the resulting Q values will generically be closer to integer, so more simulation time is spent in the sector boundary regions. This will eventually drive the system to coarser regions of configuration space. Since these regions do not contribute significantly to expectation values in the path integral, it is desirable to minimize the time that the algorithm spends there. This is directly related to the issue of small effective sample sizes, which we will discuss in more detail in <Ref>.
A similar comparison for the DBW2 action can be seen in <Ref>. Here, two MetaD-HMC runs with four and five stout smearing steps on a 16^4 lattice at β = 1.25 are compared to the 1HMC and 1HB+4OR runs, which were already shown in <Ref>. Both conventional update schemes are confined to the zero sector, whereas the two MetaD-HMC runs explore topological sectors up to Q = 6. More quantitatively, the integrated autocorrelation time of Q_c^2 in the DBW2 runs is estimated to be τ_int(Q_c^2) = 2188 ± 478 for the MetaD-HMC algorithm with 4 smearing steps, whereas lower bounds for the autocorrelation times for the 1HMC and 1HB+4OR update schemes are 4 × 10^5, which implies a difference of more than two orders of magnitude.
To illustrate the role of the CV Q_meta, it may be helpful to compare the time series of Q_meta and Q_c, as shown in <Ref>.
The two observables are clearly correlated, but Q_meta is distributed more evenly between integers.
§.§ Computational overhead and multiple timescale integration
A fair comparison of the different update schemes also needs to take the computational cost of the algorithms into account. <Ref> shows the relative timings for the different update schemes used here, measured for simulations carried out on 16^4 lattices.
While the MetaD-HMC was not optimized for performance, it is still clear that the additional overhead introduced by the computation of the Metadynamics force contribution is significant for pure gauge theory. The relative overhead is especially large compared to local update algorithms, which are already more efficient than the regular HMC. Note, however, that due to its more non-local character, the relative loss in efficiency when switching to Metadynamics from either a local update algorithm or HMC, is already noticeably smaller for the DBW2 gauge action. Since the majority of the computational overhead comes from the Metadynamics force contribution, and the involved scales are different from those relevant for the gauge force, it seems natural to split the integration into multiple timescales in a similar fashion to the Sexton-Weingarten scheme <cit.>: The force contributions from the bias potential are correlated to the topological charge, which is an IR observable, whereas the gauge force is usually dominated by short-range, UV fluctuations. Therefore, it is conceivable that integrating the Metadynamics force contribution on a coarser timescale than the gauge force could significantly decrease the required computational effort, while still being sufficiently accurate to lead to reasonable acceptance rates.
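Schematically, such a two-level splitting can be written as a nested leapfrog, a plain sketch of which is shown below. For readability it uses flattened real-valued fields and momenta; an actual HMC would update group-valued links, and the force functions are placeholders of ours.

```python
def two_level_leapfrog(u, p, eps, n_md, m, f_gauge, f_meta):
    """Sexton-Weingarten-style splitting: the (expensive, IR-dominated)
    Metadynamics force on the coarse level with step eps, the gauge force
    on the fine level with step eps/m. Illustrative sketch only."""
    p = p + 0.5 * eps * f_meta(u)
    for i in range(n_md):
        for _ in range(m):                          # fine gauge-force steps
            p = p + 0.5 * (eps / m) * f_gauge(u)
            u = u + (eps / m) * p
            p = p + 0.5 * (eps / m) * f_gauge(u)
        # merged half-kicks between outer steps, single half-kick at the end
        p = p + (eps if i < n_md - 1 else 0.5 * eps) * f_meta(u)
    return u, p
```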
We have attempted to use combinations of both the Leapfrog and the Omelyan-Mryglod-Folk second-order integrator with the Omelyan-Mryglod-Folk fourth-order minimum norm integrator. Unfortunately, we were unable to achieve a meaningful reduction of Metadynamics force evaluations without encountering integrator instabilities and deteriorating acceptance rates. However, this approach might still be helpful for simulations with dynamical fermions, where it is already common to split the forces into more than two levels.
Even if such a multiple timescale approach should prove to be unsuccessful in reducing the number of Metadynamics force evaluations, we expect the relative overhead of Metadynamics to be much smaller for simulations including dynamical fermions. In previous studies <cit.>, it was found that compared to conventional HMC simulations, simulations with Metadynamics and 20 steps of stout smearing were about three times slower in terms of real time.
§.§ Scaling of the reweighting factor and improvements to the bias potential
Due to the inclusion of the bias potential, expectation values with respect to the original, physical probability density are obtained by reweighting. As with any reweighting procedure, the overlap between the sampled distribution and the distribution of physical interest needs to be sufficiently large for the method to work properly. A common measure to quantify the efficiency of the reweighting procedure is the effective sample size (ESS), defined as
ESS = (∑_i w_i )^2 /∑_i w_i^2,
where w_i is the respective weight associated with each individual configuration. In the case of Metadynamics, this is simply e^V(Q_meta,i). We found the normalized ESS, i.e. the ESS divided by the total number of configurations, to generally be of order 𝒪(10^-2) or lower when simulating in regions of parameter space, where conventional algorithms fail to explore topological sectors other than Q = 0.
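For a static bias potential this quantity is a one-liner; the normalization by the number of configurations is what we quote as normalized ESS. The helper below is our own naming.

```python
import numpy as np

def normalized_ess(V_of_Q):
    """Normalized effective sample size from weights w_i = exp(V(Q_meta,i))."""
    w = np.exp(V_of_Q - V_of_Q.max())
    return np.sum(w)**2 / (np.sum(w**2) * len(w))
```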
Although the low ESS ultimately results from the fact that the bias potential is constructed in such a way as to have a marginal distribution over the CV that is flat, we can nonetheless distinguish two parts of this effect. On the one hand, there is the inevitable flattening of the intersector barriers by the bias potential, which is necessary to facilitate tunneling between adjacent topological sectors. On the other hand, however, the different weights of the different topological sectors are also cancelled by the bias potential. While it is necessary for a topology changing update algorithm to reproduce the intersector barriers faithfully, the leveling of the weights of the different topological sectors is entirely unwanted. It enhances the time that the simulation spends at large values of Q, so that these sectors are overrepresented compared to their true statistical weight. It is therefore conceivable that by retaining only the intersector barrier part of the bias potential, the relative weights of the different topological sectors will be closer to their physical values, and the ESS will increase. In previous tests in 2-dimensional U(1) gauge theory, we found that the bias potentials could be described by a sum of a quadratic and multiple oscillating terms <cit.>:
V(Q) = A Q^2 + ∑_i = 1^N B_isin^2(π f_i Q)
Here, we fit our bias potentials, which are obtained from the 2-dimensional U(1) simulations, to this form. We then obtain a modified bias potential by subtracting the resulting quadratic term from the data. This modification of the bias potential is effective in reducing the oversampling of topological sectors with large |Q|, as evidenced by the larger normalized ESS in <Ref>. The resulting marginal distribution over the topological charge is then no longer expected to be constant, but rather resemble a parabola.
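A sketch of this fit-and-subtract step is given below, using the functional form above with a fixed number of oscillating modes. The starting values, mode count, and the synthetic test data in the last two lines are illustrative assumptions of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def modify_potential(q, V, n_modes=2):
    """Fit V(Q) = A Q^2 + sum_i B_i sin^2(pi f_i Q) and return the potential
    with the quadratic part removed, keeping only the oscillating barriers."""
    def model(Q, A, *BF):
        B, f = BF[:n_modes], BF[n_modes:]
        return A * Q**2 + sum(b * np.sin(np.pi * fi * Q)**2
                              for b, fi in zip(B, f))
    p0 = [0.1] + [1.0] * n_modes + [1.0, 2.0][:n_modes]   # rough start values
    popt, _ = curve_fit(model, q, V, p0=p0)
    return V - popt[0] * q**2

# synthetic example potential, for illustration only
q = np.linspace(-7.0, 7.0, 1401)
V_mod = modify_potential(q, 0.2 * q**2 + 1.5 * np.sin(np.pi * q)**2)
```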
Here and in <Ref> of this work, we perform scaling tests of the proposed improvements in 2-dimensional U(1) gauge theory, where high statistics can be generated more easily than in 4-dimensional SU(3) gauge theory. The action is given by the standard Wilson plaquette action
S_g = β∑_n ∈Λ(1 - Re P_t, x(n) ),
and updates are performed with a single-hit Metropolis algorithm. The topological charge is defined using a geometric, integer-valued definition:
Q = 1/2π Im[∑_n ∈Λlog P_t, x(n) ]
For all Metadynamics updates, we use a field-theoretic definition of the topological charge that is generally not integer-valued:
Q_meta = 1/2π Im[∑_n ∈Λ P_t, x(n) ]
Since the charge distributions obtained from the two definitions already show reasonable agreement without any smearing for the parameters considered here, we can use local update algorithms and directly include the Metadynamics contribution in the staple. A similar idea that encourages tunneling in the Schwinger model by adding a small modification to the action was proposed in <cit.>.
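Both definitions reduce to a few lines of array code in terms of the link angles; the following self-contained sketch (our own conventions: index 0 for t-links, index 1 for x-links) evaluates them for a random configuration.

```python
import numpy as np

def u1_charges(theta):
    """Geometric (integer) and field-theoretic charge of a 2d U(1) config.
    theta[mu, t, x] are the link angles; the plaquette angle is the oriented
    sum around each elementary square."""
    plaq = (theta[0] + np.roll(theta[1], -1, axis=0)
            - np.roll(theta[0], -1, axis=1) - theta[1])
    q_geom = np.sum(np.angle(np.exp(1j * plaq))) / (2 * np.pi)  # Im log P
    q_meta = np.sum(np.sin(plaq)) / (2 * np.pi)                 # Im P
    return q_geom, q_meta

theta = np.random.default_rng(0).uniform(-np.pi, np.pi, (2, 16, 16))
print(u1_charges(theta))   # first value is (numerically) an integer
```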
<Ref> contains the relative ESS and integrated autocorrelation times for different lattice spacings on the same line of constant physics in 2-dimensional U(1) theory. We compare Metadynamics runs using bias potentials obtained directly from previous simulations with Metadynamics runs using potentials that were modified to retain the relative weights of the topological sectors as described above.
We see large improvements for both the ESS and τ_int in the modified case, even for the finest lattices considered.
We expect that the quadratic term is mostly relevant for small volumes and high temperatures. With larger volumes and lower temperatures, the slope should decrease, and with it the importance of correctly capturing this term. On the other hand, the oscillating term is expected to grow more important with finer lattice spacings, as the barriers between the different sectors grow steeper. Thus, the oscillating term needs to be described more and more accurately towards the continuum.
A standard technique to decrease, but not completely eliminate, action barriers is well-tempered Metadynamics <cit.>. In this approach, the height of the added Gaussians w decays with increasing potential. In our tests, we found that this method does increase the ESS, but at the cost of higher autocorrelation times, to the point where any gains from the ESS that would be visible in the uncertainties of observables are nullified. Although it might still have some use in accelerating the build-up process or as a possible intermediate stream for PT-MetaD (see <Ref>), we decided not to explore this option further at this point.
§.§ Accelerating the equilibration/buildup of the bias potential
Another avenue of improvement is accelerating the build-up of the bias potential, for which we again explore two possible ideas. This aspect becomes especially relevant when considering large-scale simulations, where runs are often limited to 𝒪(10^4) update sweeps, and a lengthy buildup phase of the bias potential would render the method infeasible.
The first idea is to exploit the aforementioned well-tempered variant of Metadynamics, by choosing a larger starting value of the Gaussian height w and letting it decay slowly so as to minimize the change in the potential that arises from the decay. While this approach adds another fine-tunable parameter, namely the decay rate, we found that this did indeed significantly cut down on the number of update iterations required to thermalize the potential. A small caveat is that, in order to choose the optimal decay rate, one would need knowledge of the approximate height of the action barriers, which is not always the case.
A way of improving the build-up time without any prior knowledge of the bias potential is to use an enhancement of Metadynamics which is most commonly referred to as multiple walkers Metadynamics <cit.>, where the potential is simultaneously built up by several independent streams in a trivially parallelizable way. To add to this, we make each stream start in a distinct topological sector by the use of instanton configurations, which can easily be constructed in 2-dimensional U(1) gauge theory <cit.>. Namely, an instanton configuration with charge Q is given by
U_t^I(Q; t, x) = exp( -2 π i x Q/N_x N_t),
U_x^I(Q; t, x) = exp( 2 π i t Q/N_tδ_x, N_x).
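These configurations are straightforward to construct in terms of link angles; a short sketch (sites indexed t = 0..N_t-1, x = 0..N_x-1, with the delta placing the x-links on the last column) is given below.

```python
import numpy as np

def u1_instanton(Q, Nt, Nx):
    """Link angles of a charge-Q instanton on an Nt x Nx lattice,
    following the formulas above."""
    theta = np.zeros((2, Nt, Nx))
    theta[0] = -2.0 * np.pi * Q * np.arange(Nx) / (Nx * Nt)   # temporal links
    theta[1, :, -1] = 2.0 * np.pi * Q * np.arange(Nt) / Nt    # x-links, x = Nx-1
    return theta
```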
The parallel and serial build-up are compared in <Ref>, where the potential parameters for each stream are given by Q_max/min = ±7, n_bins = 1400 and w = 0.002.
Since this method is an embarrassingly parallel task, we expect it to easily carry over to higher-dimensional, non-abelian theories with topological properties. In the case of 4-dimensional SU(3) the direct construction of instantons with higher charge is not quite as simple as in 2-dimensional U(1) gauge theory. The construction of lattice instantons with even charge is described in <cit.>, and lattice instantons with odd charge can be constructed by combining multiple instantons with charge Q = 1 <cit.>. Regardless, having exact instantons is not required, since we only need each stream to start in a sector, where it is then very likely to fall into the local minimum of the specified sector.
Independent of the possible improvements mentioned here, a fine-tuning of the standard Metadynamics parameters could also prove to be worthwhile in regard to accelerating the buildup and improving the quality of the bias potential.
§ COMBINING METADYNAMICS WITH PARALLEL TEMPERING
In order to eliminate the problem of small effective sample sizes observed in our Metadynamics simulations due to the required reweighting, we propose to combine Metadynamics with parallel tempering <cit.>. This is done in a spirit similar to the parallel tempering on a line defect proposed by Hasenbusch <cit.>. We introduce two simulation streams: One with a bias potential, and the other without it, while actions S(U) are the same for both streams. Note that since we are working in pure gauge theory, this means the second stream without bias potential can be updated with local update algorithms. After a fixed number of updates have been performed on the two streams, a swap of the two configurations is proposed and subject to a standard Metropolis accept-reject step, with the action difference given by
Δ S^M_t = [S^M_t(U_1) + S(U_2)] - [S^M_t(U_2) + S(U_1)]
= V_t(Q_meta,1) - V_t(Q_meta,2),
where the indices of the quantities denote the number of the stream and V_t is the bias potential in the first stream. It is apparent and important to note that the action difference is simple to compute regardless of what the physical action looks like. Even in simulations where dynamical fermions are present, the contributions from the physical action are always cancelled out by virtue of the two streams having the same action parameters; only the contribution from the Metadynamics bias potential remains.
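The swap step itself is therefore cheap: a single evaluation of the bias potential on each stream's CV. The sketch below (names are ours) shows the corresponding Metropolis step, with the sign convention that the action change of the proposed exchange enters as exp(-ΔS).

```python
import numpy as np

def try_swap(U_meta, U_plain, q_of, V, rng):
    """Metropolis swap between the Metadynamics stream (holding U_meta)
    and the measurement stream (holding U_plain). The physical action
    cancels, so only the bias potential enters, as in the formula above."""
    dS = V(q_of(U_plain)) - V(q_of(U_meta))     # action change if we swap
    if rng.random() < min(1.0, np.exp(-dS)):
        U_meta, U_plain = U_plain, U_meta       # accepted: exchange configs
    return U_meta, U_plain
```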
Since the second stream samples configurations according to the (physical) target distribution, no reweighting is needed and thus the effective sample size is not reduced. Additionally, if the swaps are effective, this stream will inherit the topological sampling from the stream with bias potential and thus also sample topological sectors well. Effectively, the accept-reject step for swap proposals serves as a filter for configurations with vanishing statistical weight, thereby decreasing the statistical uncertainties on all observables weakly correlated to the topological charge. What remains to be seen is, whether the efficiency of the sampling of the topological sectors carries over from the Metadynamics stream to the measurement stream. In this section, we address this question both via a scaling test in 2-dimensional U(1) and with exploratory runs in 4-dimensional SU(3) with the DBW2 gauge action in a region where conventional update algorithms are effectively frozen.
§.§ Scaling tests in 2-dimensional U(1)
We carried out a number of simulations in 2-dimensional U(1) gauge theory for several lattice sizes and couplings with the same parameters as used in the test described in <Ref>. We use the potentials already built in these Metadynamics runs as static bias potentials in a number of parallel tempered Metadynamics runs. For each set of parameters, we carry out one run with the respective unmodified potential and one run with a potential modified as described in <Ref>. In these runs, swaps between the two streams were proposed after each had completed a single update sweep over all lattice sites. The effective sample sizes as well as the resulting autocorrelation times of the topological charge Q can be found in <Ref>. To ensure that actual tunneling occurs, we also monitor the sum of the squared topological charges on both streams. This observable allows us to distinguish the fluctuations in Q originating from true tunneling events, mostly appearing in the stream with bias potential, from repeated swaps between the two streams without tunneling, which might also introduce a fluctuation of Q in the streams without actually overcoming any potential barriers.
<Ref> shows the scaling of the total number of independent configurations, which is given by the quotient of the effective sample size (<Ref>) and the integrated autocorrelation time of the topological susceptibility. The performance of the standard Metropolis algorithm is compared to parallel tempered and standard Metadynamics, with both modified (see <Ref>) and non-modified bias potentials. Clearly, the parallel tempered Metadynamics update schemes perform best for small lattice spacings. Most importantly, the ratio of independent configurations in the sample seems to reach a plateau for finer lattice spacings, which is in stark contrast to conventional Metadynamics. It is also worth noting that the modified bias potential provides better results than the non-modified one. This is consistent with our expectation that large excursions in the topological charge, which produce irrelevant configurations, are curbed by the modified bias potential.
For a more detailed look at the effectiveness of the new algorithm, <Ref> compares the results of parallel tempered Metadynamics with those of standard Metadynamics at our finest lattice, with and without modification of the bias potential, and with the exact solutions <cit.>. First we note that there is no significant difference in the performance between standard and parallel tempered Metadynamics in the topology related observables Q and Q^2, at least in the case of a modified bias potential. This is a very encouraging result, since the topological sampling of parallel tempered Metadynamics can not possibly exceed that of standard Metadynamics, as ultimately it is inherited from there. On the other hand, the inclusion of the irrelevant higher sectors with the unmodified bias potential does increase the error bars, and there is some indication that not all of the topological sector sampling is carried over into the measurement run of parallel tempered Metadynamics. Looking at an observable which is not related to topology, such as the plaquette, reveals that parallel tempered Metadynamics is superior to pure Metadynamics. This is clearly the effect of the better effective sample size and the larger number of independent configurations.
In summary, our scaling tests in 2-dimensional U(1) suggest that parallel tempered Metadynamics with a modified bias potential has a much improved topological sampling, which seems to be almost equivalent to standard Metadynamics, while at the same time not suffering from a reduced effective sample size. There is some indication that the ratio of independent to total configurations does reach a stable plateau in the continuum limit. These results encourage us to perform an exploratory study in pure SU(3) gauge theory in 4 dimensions.
§.§ First results in 4-dimensional SU(3)
For our exploratory study in 4-dimensional SU(3), we turn to the DBW2 gauge action at β=1.25 on a V=16^4 lattice, which we have already used in <Ref>. For our first run, which is depicted in the left panels of <Ref>, we have combined a local 1HB+4OR measurement run with a 4stout MetaD-HMC run that dynamically generates the bias potential. Between swap proposals, updates for the two streams are performed at a ratio of 10 (1HB+4OR) to 1 (MetaD-HMC), which roughly reflects the relative wall clock times between the algorithms. One can see that the measurement run starts exploring other topological sectors almost as soon as the parallel run with active bias potential has gained access to them.
In the later stages of the run, when the bias potential is sufficiently built up to allow the Metadynamics run to enter higher topological sectors, one can see that the swap rate is lowered by the action difference between the topological sectors, leading to an overall swap rate of ∼ 0.063. This effect mirrors the reduction of the effective sample size in pure Metadynamics updates and may be ameliorated by removing the quadratic term in the bias potential, as discussed in <Ref>. In fact, the relevant point is that the action difference between the maxima of the bias potential for different topological sectors reflects the relative weight of these sectors in the path integral and should not be flattened out. Ideally, we want the bias potential to only reproduce the barriers between the sectors, not their relative weights. For a second exploratory parallel Metadynamics run, we therefore opted for a static bias potential of this sort. Lacking data that are precise enough to model the bias potential in detail, as we did in 2-dimensional U(1), we started from the bias potential of a previous Metadynamics run and extracted the high frequency (in the CV) part of the topological barriers, while eliminating the long range part corresponding to the relative weight of the topological sectors. For this purpose, we chose to perform a singular spectrum analysis (SSA) <cit.> and crosschecked the result with a simple, piece-wise subtraction of the Q^2 term between consecutive local maxima. As displayed in <Ref>, both methods result in a similar modified bias potential that seems to reproduce the intersector barriers rather well.
The right panels of <Ref> display the results of the corresponding parallel tempered Metadynamics run. As one can see, large topological charge excursions of the Metadynamics run are now curbed, and the swap acceptance rate has increased to ∼0.25. In addition, the acceptance rate is approximately constant over the entire run, as it should be expected for a static bias potential. We would like to emphasize, that the bias potential we extracted is a rather rough guess. With a larger amount of data, it might be possible to extract a better bias potential, possibly leading to even better acceptance rates. Considering the rather simple ultimate form of the bias potential used, it might also be possible to model it with sufficient accuracy for a good initial guess at other run parameters. We plan to address these points in a future publication.
In any case, these first results clearly show that the parallel tempered Metadynamics algorithm is able to achieve enhanced topological sampling in 4-dimensional SU(3) without the reduction of the effective sample size that is typical for algorithms with a bias potential.
§ CONCLUSION AND OUTLOOK
In this paper, we have demonstrated that Metadynamics can be used to significantly reduce the integrated autocorrelation times of topological quantities in lattice simulations. In simulations of 4-dimensional SU(3) gauge theory with the DBW2 action, we have observed reductions of the autocorrelation times by more than two orders of magnitude. However, the direct application of Metadynamics is not entirely unproblematic: Compared to local update algorithms, there is a large computational overhead due to the costly Metadynamics force evaluations, and the reweighting procedure required to obtain unbiased expectation values can significantly reduce the effective sample size. In order to circumvent this reduction, we have proposed two improvements: The first consists of modifying the bias potential, so that all topological sectors are represented with their correct weight; the second is adding a dedicated measurement stream parallel to the Metadynamics run, which uses a conventional update algorithm. Periodically, swaps between the two streams are suggested and subject to an accept-reject step. The accept-reject step during swap proposals then effectively serves as a filter for configurations with low statistical weight. This parallel tempered Metadynamics algorithm, including both improvements, has been successfully applied to 4-dimensional SU(3) gauge theory. Furthermore, scaling tests in 2-dimensional U(1) gauge theory indicate gains of more than an order of magnitude compared to standard Metadynamics, and an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms. Additionally, we have demonstrated that the buildup of the Metadynamics bias potential may be accelerated by running multiple Metadynamics simulations in parallel.
We believe these results are promising, and plan to study the scaling behavior of the methods tested here in more detail for 4-dimensional SU(3) gauge theory, and eventually in full QCD. Conceptually, there seem to be no obstacles for implementing parallel tempered Metadynamics in full QCD. We also plan to explore possible optimizations for parallel tempered Metadynamics. These include optimizing the bias potential via enhanced buildup and extraction and, possibly, describing it parametrically. Furthermore, it would be interesting to investigate whether adding intermediate runs to a parallel tempered Metadynamics stream could increase performance, despite the additional computational cost.
We thank Philip Rouenhoff for collaboration in early stages of this work. We gratefully acknowledge helpful discussions with Szabolcs Borsanyi, Stephan Dürr, Fabian Frech, Jana Günther, Ruben Kara, Andrey Kotov, and Kalman Szabo. Calculations were performed on a local PC cluster at the University of Wuppertal.
§ METADYNAMICS FORCE
In order to obtain an expression for <Ref>, the algebra-valued derivative of Q_meta with respect to the unsmeared links U_μ^(0) has to be calculated.
Here, we will only focus on the derivative of the clover-based topological charge Q_c with respect to a fully smeared gauge configuration U. For details of the stout-force recursion, we refer to <cit.>.
On the lattice, the following definition holds for a suitably defined lattice field strength tensor:
Q_c = 1/32 π^2∑_n ∈Λϵ_μνρσ tr[F_μν(n) F_ρσ(n)].
The lattice field strength tensor based on the clover term is defined as the sum of four plaquettes:
F_μν(n) = -i/(8a^2)(C_μν(n) - C_νμ(n) ),
where the clover term in turn is defined via:
C_μν(n) = P_μ, ν(n) + P_ν, -μ(n)
+ P_-μ, -ν(n) + P_-ν, μ(n)
For notational purposes, we define the auxiliary variables R_μν(n) = C_μν(n) - C_νμ(n) and drop the specification of the lattice site n unless pertinent to the formula.
What we need for the force is the sum over all eight algebra directions:
T^a ∑_νρσ 4∂_n, α^aϵ_ανρσ tr[R_αν R_ρσ],
where the sum over a is implied.
Using the field strength tensor's symmetry properties, the derivative can be written as a term of the following form:
∑_νρσ∂_n, α^aϵ_ανρσ tr[R_αν R_ρσ]
= ∑_νρσϵ_ανρσ 2 Re tr[ T^a U_α(n) U_ν(n + α) U^†_α(n + ν) U^†_ν(n) R_ρσ(n)
- T^a U_α(n) U^†_ν(n + α - ν) U^†_α(n - ν) R_ρσ(n - ν) U_ν(n - ν)
- T^a U_α(n) U^†_ν(n + α - ν) R_ρσ(n + α - ν) U^†_α(n - ν) U_ν(n - ν)
+ T^a U_α(n) R_ρσ(n + α) U_ν(n + α) U^†_α(n + ν) U^†_ν(n)
- T^a U_α(n) U^†_ν(n + α - ν) U^†_α(n - ν) U_ν(n - ν) R_ρσ(n)
+ T^a U_α(n) U_ν(n + α) U^†_α(n + ν) R_ρσ(n + ν) U^†_ν(n)
- T^a U_α(n) R_ρσ(n + α) U^†_ν(n + α - ν) U^†_α(n - ν) U_ν(n - ν)
+ T^a U_α(n) U_ν(n + α) R_ρσ(n + α + ν) U^†_α(n + ν) U^†_ν(n) ]
= ∑_νρσϵ_ανρσ 2 Re tr[ T^a A_ανρσ]
= 2 Re tr[ T^a A_α]
An expression of the above form can be rewritten using the projector induced by the scalar product of the algebra:
T^a tr[T^a A_α] = -1/2A_α+ 1/6 tr[A_α] 𝟙,
which in our case translates to:
T^a 2 Re tr[T^a A_α] = T^a tr[ T^a A_α+ (T^a A_α)^†]
= T^a tr[ T^a A_α- T^a A_α^†]
= -1/2(A_α- A_α^†)
+ 1/6 tr[ A_α- A_α^†] 𝟙
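The projector identity can be verified numerically; the following short check (our own sketch, using anti-hermitian generators T^a = -i/2 λ^a built from the Gell-Mann matrices, normalized as tr(T^a T^b) = -δ^ab/2) confirms it for an arbitrary complex matrix A.

```python
import numpy as np

l = np.zeros((8, 3, 3), dtype=complex)          # Gell-Mann matrices
l[0, 0, 1] = l[0, 1, 0] = 1
l[1, 0, 1], l[1, 1, 0] = -1j, 1j
l[2, 0, 0], l[2, 1, 1] = 1, -1
l[3, 0, 2] = l[3, 2, 0] = 1
l[4, 0, 2], l[4, 2, 0] = -1j, 1j
l[5, 1, 2] = l[5, 2, 1] = 1
l[6, 1, 2], l[6, 2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = -0.5j * l                                   # anti-hermitian generators

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
lhs = sum(T[a] * np.trace(T[a] @ A) for a in range(8))
rhs = -0.5 * A + np.trace(A) / 6.0 * np.eye(3)
assert np.allclose(lhs, rhs)                    # projector identity holds
```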
Including the factor we lost after defining R_μν, we obtain the derivative of the trace in <Ref>
∑_μνρσT^a ∂_n, α^aϵ_μνρσ tr[F_μν F_ρσ]
= -1/64∑_μνρσ T^a ∂_n, α^aϵ_μνρσ tr[R_μν R_ρσ]
= 1/32( (A_α - A_α^†)
- 1/3 tr[ A_α - A_α^†] 𝟙)
In summary, the algebra-valued derivative of the clover-based topological charge with respect to the gauge link U_α(n) can be written as:
T^a ∂_n, α^a Q_c = 1/32π^2∑_μνρσ T^a ∂_n, α^aϵ_μνρσ tr[F_μν F_ρσ]
= 1/1024π^2( (A_α - A_α^†)
- 1/3 tr[ A_α - A_α^†] 𝟙)
|
http://arxiv.org/abs/2307.04934v1 | 20230710230150 | $\mathcal{A}$-theory: A brane world-volume theory with manifest U-duality | [
"Machiko Hatsuda",
"Ondřej Hulík",
"William D. Linch",
"Warren D. Siegel",
"Di Wang",
"Yu-Ping Wang"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.05573v1 | 20230710083608 | On the first bifurcation of Stokes waves | [
"Vladimir Kozlov"
] | math.AP | [
"math.AP"
] |
We consider Stokes water waves on a flow with vorticity in a two-dimensional channel of finite depth. In the paper <cit.> the existence of subharmonic bifurcations on a branch of Stokes waves was proved. Such bifurcations occur near the first bifurcation in the set of Stokes waves. Moreover, it was shown in that paper that the bifurcating solutions form a connected continuum containing waves of large amplitude. This fact was proved under a certain assumption concerning the second eigenvalue of the Frechet derivative. In this paper we investigate this assumption and present explicit conditions under which it is satisfied.
§ FORMULATION OF THE PROBLEM
Stokes and solitary waves were the main subject of study in the nonlinear water wave theory up to 1980.
In 1980 (see Chen <cit.> and Saffman <cit.>) it was discovered numerically, and in 2000 (see <cit.>) this was supported theoretically for the irrotational flow of infinite depth, that there exist new types of periodic waves with several crests per period (the Stokes wave has only one crest). These waves occur as a result of bifurcation on a branch of Stokes waves when they approach the wave of greatest amplitude.
In my papers <cit.> and <cit.> the existence of subharmonic bifurcations was proved on branches of Stokes waves on flows with vorticity.
The main result in the latest paper <cit.> is proved under a certain assumption on the second eigenvalue of the Frechet derivative. The main goal of this paper is to study this assumption and to give explicit conditions for its validity.
Consider steady surface waves in a two-dimensional channel bounded below by a flat,
rigid bottom and above by a free surface that does not touch the bottom. The surface tension is neglected and the water motion can be rotational.
In appropriate Cartesian coordinates (X, Y ), the bottom coincides with the
X-axis and gravity acts in the negative Y -direction. We choose the frame of reference so that the velocity field is time-independent as well as the free-surface profile
which is supposed to be the graph of Y = ξ(X), x ∈ R, where ξ is a positive and
continuous unknown function. Thus
𝒟=𝒟_ξ = {X∈ R, 0 <Y < ξ(X)}, 𝒮=𝒮_ξ={X∈ R, Y=ξ(X)}
is the water domain and the free surface respectively. We will use the stream function Ψ, which is connected with the velocity vector ( u, v) as u=-Ψ_Y and v=Ψ_X.
We assume that ξ is a positive, periodic function having period Λ>0 and that ξ is even and strongly monotonically decreasing on the interval (0,Λ/2). Since the surface tension is neglected, Ψ and
ξ after a certain scaling satisfy the following free-boundary problem (see for example <cit.>):
ΔΨ+ω(Ψ)=0 in 𝒟,
1/2|∇Ψ|^2+ξ=R on 𝒮,
Ψ=1 on 𝒮,
Ψ=0 for Y=0,
where ω∈ C^1,α, α∈ (0,1), is a vorticity function and R is the Bernoulli constant. We assume that Ψ is even, Λ-periodic in X and
Ψ_Y>0 in 𝒟,
which means that the flow is unidirectional.
The Frechet derivative for the problem is evaluated for example in <cit.>, <cit.>, and the corresponding eigenvalue problem for the Frechet derivative has the form
Δ w+ω'(Ψ)w+μ w=0 in 𝒟,
∂_ν w-ρ w=0 on 𝒮,
w=0 for Y=0,
where ν is the unit outward normal to Y=ξ(X) and
ρ=ρ(X)=(1+Ψ_XΨ_XY+Ψ_YΨ_YY)/(Ψ_Y(Ψ_X^2+Ψ_Y^2)^1/2)|_Y=ξ(X).
The function w in (<ref>) is supposed also to be even and Λ-periodic.
Let us introduce several function spaces. Let α∈ (0,1) and k=0,1,…. The space C^k,α(𝒟) consists of bounded functions in 𝒟 such that the norms in C^k,α(𝒟_a,a+1) are uniformly bounded with respect to a∈ R. Here
𝒟_a,a+1={(X,Y)∈𝒟 : a≤ X≤ a+1}.
The space C^k,α_0,Λ(𝒟) (C^k,α_0,Λ, e(𝒟)) consists of Λ-periodic (Λ-periodic and even) functions, which belong to C^k,α(𝒟) and vanish at Y=0.
Similarly we define the space C^k,α_Λ( R) (C^k,α_Λ, e( R)) consisting of functions in C^k,α( R), which are Λ-periodic (Λ-periodic and even).
We will consider a branch of Stokes water waves depending on a parameter t≥ 0, i.e.
ξ=ξ(X;t), Ψ=Ψ(X,Y;t), Λ=Λ(t).
For each t the functions ξ∈ C^2,α_Λ,e( R) and Ψ∈ C^3,α_Λ,e(𝒟). This branch starts from a uniform stream solution for t=0.
The dependence on t is analytic in the sense explained in Sect. <ref>. The definition of the uniform stream solution together with the dispersion equation, which is required for the existence of the branch of Stokes waves (<ref>), is given in the next section <ref>. The existence of such branches was a subject of many papers. In the case of non-zero vorticity we note the fundamental work <cit.>, where a bifurcation branch for the flow with vorticity was constructed for the first time. In the case with variable period we refer to the papers <cit.> and <cit.>.
The first (lowest) eigenvalue of the problem (<ref>) is always negative and simple, and the second one we denote by μ(t).
Assume that
Assumption There exists t_0>0 such that μ(t)≥ 0 for t∈ (0,t_0) and μ(t)<0 for t∈ (t_0,t_0+ϵ) for a certain positive ϵ.
This assumption describes the first bifurcation point t_0 on the branch (<ref>) in the class of Stokes waves of period Λ(t). It is convenient to separate two types of bifurcations of branches of Stokes waves:
(i) in the class of Λ(t)-periodic solutions (Stokes bifurcation);
(ii) in the class of MΛ(t)-periodic solutions (M-subharmonic bifurcation);
Then the following theorem is proved in <cit.>.
Let Assumption be fulfilled. Then there exist an integer M_0 and pairs (t_M,M), where M is an integer, M>M_0, and t_M>t_0, satisfying
t_M→ t_0 as M→∞,
such that t_M is an M-subharmonic bifurcation point. There are no subharmonic bifurcations for t<t_0.
Moreover in Theorem 9.2, <cit.>, a structure of the set of bifurcating solutions is given. In particular it was shown that the bifurcating solutions build a connected continuum containing large amplitude waves.
The main aim of this paper is to give explicit conditions for validity of Assumption. Our analysis consists of two parts:
(i) analysis of behaviour of μ(t) for small t;
(ii) analysis of μ(t) for large positive t.
For t=0, Λ(0)=Λ_0 and μ(0)=0. Our first goal is to study the functions Λ(t) and μ(t) for small t. One of the results is the following. It is quite straightforward to show that these functions have the following asymptotic representations
μ(t)=μ_2t^2+O(t^3), Λ(t)=Λ_0+Λ_2t^2+O(t^3),
where Λ_0=Λ(0).
It is proved that
μ_2=CΛ_2
with a positive constant C to be evaluated later.
To prove formula (<ref>), first we study the function
λ(t)=Λ_0/Λ(t)=1+λ_2t^2+O(t^3)
and establish the relation
-4λ_2τ_*^2∫_0^dγ(Y;τ_*)^2dY=μ_2∫_0^d γ(Y;τ_*)^2dY/Ψ_Y,
where γ(Y;τ) solves the problem (<ref>).
Since Λ_2=-λ_2Λ_0, the last relation implies (<ref>) with a positive constant C. Thus the sign of μ_2 is the same as of Λ_2 and opposite to the sign of λ_2.
In the irrotational case, i.e. ω=0, we study the dependence of μ_2 on the parameter θ>1 connected with the Froude number F=d^-3/2 by[It follows from (<ref>)]
θ=(1+√(1+8F^-2)/4)^3F^4=(F+√(F^2+8)/4)^3F,
where the right-hand side is monotone with respect to F.
We prove that
μ_2(θ)>0 if and only if θ<θ_0, where θ_0≈ 2.48.
In terms of the Froude number, the eigenvalue μ(t) is positive for small t>0 when
F<F_0, F_0≈ 1.511.
This gives a condition for the validity of the first part of Assumption.
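The value of F_0 can be checked numerically from the relation (<ref>) between θ and F, taking θ_0 = 2.48 from above; the short computation below (a sketch of ours, not part of the proof) reproduces the quoted threshold.

```python
import numpy as np
from scipy.optimize import brentq

theta = lambda F: ((F + np.sqrt(F**2 + 8.0)) / 4.0)**3 * F   # relation above
F0 = brentq(lambda F: theta(F) - 2.48, 1.0, 2.0)
print(F0)   # approximately 1.511, so theta < theta_0 <=> F < F_0
```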
Let us turn to the second part of the above assumption. It is enough to show the appearance of negative eigenvalues of the Frechet derivative as t→∞. According to Corollary 2.2 of <cit.>, there exists a sequence
{t_j}, j=1,…, such that
a) ξ(0,t_j) tends to R as j→∞ (extreme wave), or
b) the waves ξ(·,t_j) tend to a solitary wave as j→∞.
In the case a) the limit configuration is the extreme wave with the angle 120^∘ at the crest (see <cit.>, <cit.>, <cit.> and <cit.>), and the appearance of negative eigenvalues follows from Theorem 3.1, <cit.> and <cit.>.
To show that the option b) is impossible we choose parameters of the problem such that solitary waves are excluded. We will do this by using known upper estimates for the Froude number of solitary waves.
The best known upper estimate for the Froude number of a solitary wave, which follows from <cit.> (see also <cit.> and the Introduction of <cit.>), is the following
F<√(2).
This means that if F>√(2) there are no solitary waves with such a Froude number. Hence every global branch of Stokes waves must approach a Stokes wave of maximal amplitude, which has the angle 120^∘ at the crest. According to Theorem 3.1 of <cit.>, this fact implies the appearance of infinitely many negative eigenvalues of the Frechet derivative as t→∞. This implies the validity of the second part of Assumption. Therefore
1.414<F<1.511 .
Another upper estimate for the Froude number obtained numerically (see <cit.>, <cit.>, <cit.> and Introduction in <cit.>) is F<1,29. Hence
1,29<F<1,511 .
This estimate is supported now by numerics only, but we present it because it can be used for numerical study of subharmonic bifurcations.
§.§ Uniform stream solution, dispersion equation
The uniform stream solution Ψ=U(Y) with the constant depth η =d satisfies the problem
U''+ω(U)=0 ,
U(0)=0, U(d)=1,
1/2U'(d)^2+d=R.
In order to find solutions to this problem we introduce a parameter s=U'(0). We assume that
s>s_0:=√(2max_τ∈ [0,1]Ω(τ)), where
Ω(τ)=∫_0^τω(p)dp.
Then the problem (<ref>) has a solution (U,d) with a strongly monotone function U for
R=ℛ(s):=1/2s^2+d(s)-Ω(1).
The solution is given by
Y=∫_0^Udτ/√(s^2-2Ω(τ)), d=d(s)=∫_0^1dτ/√(s^2-2Ω(τ)).
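For a concrete vorticity these quadratures are straightforward to evaluate; the following sketch (ours, with an assumed ω not taken from the text) computes d(s) and ℛ(s) by the midpoint rule, for s>s_0:

import numpy as np

omega = lambda p: 0.5 * np.cos(np.pi * p)            # assumed sample vorticity
Omega = lambda t: 0.5 * np.sin(np.pi * t) / np.pi    # Omega(t) = int_0^t omega(p) dp

def d_of_s(s, n=4000):
    # midpoint rule for d(s) = int_0^1 dtau / sqrt(s^2 - 2 Omega(tau))
    tau = (np.arange(n) + 0.5) / n
    return np.mean(1.0 / np.sqrt(s**2 - 2.0 * Omega(tau)))

def R_of_s(s):
    return 0.5 * s**2 + d_of_s(s) - Omega(1.0)

Minimizing R_of_s over s>s_0 then yields R_c and the point s_c where the minimum is attained.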
If we consider (<ref>) as the equation with respect to s then it is solvable if R≥ R_c, where
R_c=min_s≥ s_0ℛ(s),
and it has two solutions if
R∈ (R_c,R_0), where
R_0=ℛ(s_0).
We denote by s_c the point where the minimum in (<ref>) is attained.
Existence of small amplitude Stokes waves is determined by the dispersion equation (see, for example, <cit.>). It is defined as follows.
The strong monotonicity of U guarantees that the problem
γ''+ω'(U)γ-τ^2γ=0, γ(0,τ)=0, γ(d,τ)=1
has a unique solution γ=γ(y,τ) for each τ∈ R, which is even with respect to τ and depends analytically on τ.
Introduce the function
σ(τ)=κγ'(d,τ)-κ^-1+ω(1), κ=U'(d).
It depends also analytically on τ and it is strongly increasing with respect to τ>0. Moreover it is an even function.
The dispersion equation (see, for example <cit.>) is the following
σ(τ)=0.
It has a positive solution if
σ(0)<0.
By <cit.> this is equivalent to s+d'(s)<0 or what is the same
1<∫_0^ddY/U'^2(Y).
The right-hand side here is equal to 1/F^2 where F is the Froude number (see <cit.> and <cit.>). Therefore (<ref>) means that F<1, which is well-known condition for existence of Stokes waves of small amplitude.
Another equivalent formulation is given by requirement (see, for example <cit.>)
s∈ (s_0,s_c).
The existence of such s is guaranteed by R∈ (R_c,R_0). One more formula for the Froude number is the following
1/F^2(s)=-d'(s)/s,
where the Froude number F(s) corresponds to the uniform stream solution (U(Y;s),d(s)) and R=ℛ(s). One can verify directly from
(<ref>) that
(d'(s)/s)'>0.
Therefore
ℛ'(s)=s(1-F^-2(s))
and
1-F^-2(s)=1+d'(s)/s=(d'(s)/s)'(s-s_c)+O((s-s_c)^2).
The value σ(0) admits the following representation (see <cit.>):
σ(0)=-3/2κℛ'(s)/d'(s)=3(F^2(s)-1)/2κ.
The function σ has the following asymptotic representation
σ(τ)=κτ +O(1)
and equation (<ref>) has a unique positive root, which will be denoted by τ_*. It is connected with Λ_0 by the relation
τ_*=2π/Λ_0.
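The root τ_* is easy to locate by shooting; the sketch below (our illustration, with an assumed linear vorticity and s chosen so that F<1) integrates the equations for U and γ and bisects σ:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

s = 0.8
omega  = lambda u: 0.3 * u      # assumed vorticity
domega = lambda u: 0.3          # its derivative

# Uniform stream: U'' + omega(U) = 0, U(0)=0, U'(0)=s; stop where U reaches 1.
hit = lambda Y, z: z[0] - 1.0
hit.terminal, hit.direction = True, 1
stream = solve_ivp(lambda Y, z: [z[1], -omega(z[0])], [0.0, 10.0], [0.0, s],
                   events=hit, dense_output=True, rtol=1e-10, atol=1e-12)
d = stream.t_events[0][0]
kappa = stream.sol(d)[1]        # kappa = U'(d)

def sigma(tau):
    # gamma'' + omega'(U) gamma - tau^2 gamma = 0, gamma(0)=0; shoot with
    # gamma'(0)=1 and normalize so that gamma(d)=1
    rhs = lambda Y, g: [g[1], (tau**2 - domega(stream.sol(Y)[0])) * g[0]]
    g = solve_ivp(rhs, [0.0, d], [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return kappa * g.y[1, -1] / g.y[0, -1] - 1.0 / kappa + omega(1.0)

tau_star = brentq(sigma, 1e-3, 50.0)   # valid since sigma(0) < 0 here, i.e. F < 1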
To give another representation of the function σ we introduce
ρ_0=(1+U'(d)U''(d))/U'(d)^2
and note that
(1+U'(d)U''(d))/U'(d)^2=κ^-2-ω(1)/κ.
Hence another form for (<ref>) is
σ(τ)=κγ'(d,τ)-κρ_0.
The following problem will be used in asymptotic analysis of the branch (<ref>) for small t:
v''+ω'(U)v-τ^2v=f ,
v'(d)-ρ_0v(d)=g v(0)=0.
Let τ≥ 0 and τ≠τ_*. Let also f∈ C^1,α([0,d]) and g be a constant. Then the problem
(<ref>) has a unique solution v∈ C^3,α.
If τ=τ_* then the problem (<ref>) has a one-dimensional kernel which consists of the functions
cγ(Y;τ_*).
§ A CONNECTION BETWEEN THE FUNCTIONS Μ(T) AND Λ(T) FOR SMALL T
In this section we prove formula (<ref>). It appears that the partial hodograph transform is very useful for this purpose.
§.§ Partial hodograph transform
In what follows we will study branches of Stokes waves (Ψ(X,Y;t),ξ(X;t)) of period Λ(t), t≥ 0, started from the uniform stream at t=0. The existence of such branches is established in <cit.> with fixed period but variable R and in <cit.> for variable Λ and fixed R. In our case of variable Λ
it is convenient to make the following change of variables
x=λ X, y=Y, λ=Λ_0/Λ(t)
in order to deal with the problem with a fixed period. Here as before
Λ_0=Λ(0)=2π/τ_*,
where τ_* is the root of the equation (<ref>).
As the result we get
(λ^2∂_x^2+∂_y^2)ψ+ω(ψ)=0 ,
1/2(λ^2ψ_x^2+ψ_y^2)+η=R ,
ψ=1 ,
ψ=0 ,
where
ψ(x,y;t)=Ψ(λ^-1x,y;t) η(x;t)=ξ(λ^-1 x;t).
Here all functions have the same period Λ_0:=Λ(0), D_η and B_η are the domain and the free surface after the change of variables (<ref>).
From (<ref>) it follows that
ψ_y>0 .
Using the change of variables
q=x, p=ψ,
we get
q_x=1, q_y=0, p_x=ψ_x, p_y=ψ_y,
and
ψ_x=-h_q/h_p, ψ_y=1/h_p, dxdy=h_pdqdp.
System (<ref>) in the new variables takes the form
(1+λ^2h_q^2/2h_p^2+Ω(p))_p-λ^2(h_q/h_p)_q=0 ,
1+λ^2h_q^2/2h_p^2+h=R ,
h=0 .
Here
Q={(q,p) : q∈ R , p∈ (0,1)}.
The uniform stream solution corresponding to the solution U of (<ref>) is
H(p)=∫_0^pdτ/√(s^2-2Ω(τ)), s=U'(0)=H_p^-1(0).
One can check that
H_pp-H_p^3ω(p)=0
or equivalently
(1/2H_p^2)_p+ω(p)=0.
Moreover it satisfies the boundary conditions
1/2H_p^2(1)+H(1)=R, H(0)=0.
The Froude number in new variables can be written as
1/F^2=∫_0^1H_p^3dp.
Then according to Theorem 2.1, <cit.> there exists a branch of solutions to (<ref>)
h=h(q,p;t):[0,∞)→ C^2,γ_pe(Q), λ=λ(t):[0,∞)→ (0,∞),
which has a real analytic reparametrization locally around each t≥ 0.
§.§ Bifurcation equation
In order to find bifurcation points and bifurcating solutions we put h+w instead of h in (<ref>) and introduce the operators
ℱ(w;t)=(1+λ^2(h_q+w_q)^2/2(h_p+w_p)^2)_p
-(1+λ^2h_q^2/2h_p^2)_p
-λ^2(h_q+w_q/h_p+w_p)_q+λ^2(h_q/h_p)_q
and
𝒢(w;t)=1+λ^2(h_q+w_q)^2/2(h_p+w_p)^2-1+λ^2h_q^2/2h_p^2+w
acting on Λ_0-periodic, even functions w defined in Q. After some cancellations we get
ℱ=𝒥_p+ℐ_q, 𝒢=𝒥+w,
where
𝒥=𝒥(w;t)=λ^2h_p^2(2h_q+w_q)w_q-(2h_p+w_p)(1+λ^2h_q^2)w_p/2h_p^2(h_p+w_p)^2
and
ℐ=ℐ(w;t)=-λ^2h_pw_q-h_qw_p/h_p(h_p+w_p).
Both these functions are well defined for small w_p.
Then the problem for finding solutions close to h is the following
ℱ(w;t)=0
𝒢(w;t)=0
w=0 .
Furthermore, the Frechet derivative (the linear approximation of the functions ℱ and 𝒢) is the following
Aw=A(t)w=(λ^2h_qw_q/h_p^2-(1+λ^2h_q^2)w_p/h_p^3)_p-λ^2(w_q/h_p-h_qw_p/h_p^2)_q
and
𝒩w=𝒩(t)w=(N w-w)|_p=1,
where
N w=N(t)w=(-λ^2h_qw_q/h_p^2+(1+λ^2h_q^2)w_p/h_p^3)|_p=1.
The eigenvalue problem for the Frechet derivative, which is important for the analysis of bifurcations of the problem
(<ref>), is the following
A(t)w=μ w ,
𝒩(t)w=0 ,
w=0 .
For t=0 and μ=0 this problem becomes
A_0w:=-(w_p/H_p^3)_p-(w_q/H_p)_q=0 ,
B_0w:=-w_p/H_p^3+w=0 ,
w=0 .
Since the function H depends only on p this problem admits the separation of variables and its solutions are among the functions
v(q,p)=α(p)cos (τ q), τ=kτ_*, k=0,1,….
According to <cit.> the function (<ref>) solves (<ref>) if and only if
α(p)=γ(H(p);τ)H_p,
where the function γ(Y;τ) solves the equation (<ref>) and σ(τ)=0. Therefore if τ≠τ_* then the problem (<ref>) has no non-trivial solutions. If τ=τ_* then
the kernel of the above operator is one dimensional in the class of Λ_*:=2π/τ_* periodic, even function and it is given by
v=α(p)cos(τ_*q), α(p)=γ(H(p);τ_*)H_p.
We will need also the problem
-(u_p/H_p^3)_p+τ^2u/H_p=F
u(0)=0, -u_p/H_p^3+u=c ,
where F∈ C^0,α([0,1]) and c is a constant.
Clearly this problem is elliptic and, for all τ≥ 0, τ≠τ_*, uniquely solvable: the problem (<ref>) has a unique solution in C^2,α([0,1]). This solution is given by
u(p)=v(H(p))H_p(p),
where v(Y) solves the problem (<ref>) with f=F(H(y)) and g=c.
§.§ Stokes waves for small t
Here we consider asymptotics of solutions of (<ref>) for small t. For this purpose we take
h=H(p)
and represent the solution in the form
H(p)+w(q,p,t), w=tv,
where
v(q,p;t)=v_0(q,p)+tv_1(q,p)+t^2v_2(q,p)+⋯
The function λ=λ(t) is sought in the form
λ(t)=1+λ_2t^2+O(t^4).
The coefficients λ_1 and λ_3 in the above formula are zero as one can easily see from the forthcoming calculations.
Our aim is to find Stokes waves close to H. Since the functions w, v and λ analytically depend on t it is sufficient to find coefficients v_j and λ_j.
In this case
𝒥=A_1(1+w_p/H_p)^-2+A_2(1+w_p/H_p)^-2,
where
A_1=-w_p/H_p^3
and
A_2=λ^2w_q^2/2H_p^2-w_p^2/2H_p^4.
Therefore
𝒥=𝒥_1+𝒥_2+𝒥_3+O(t^4),
where
𝒥_1=A_1,
𝒥_2=A_2-2w_p/H_pA_1=λ^2w_q^2/2H_p^2+3/2w_p^2/H_p^4
and
𝒥_3=3w_p^2/H_p^2A_1-2w_p/H_pA_2=-2w_p^3/H_p^5-w_pw_q^2/H_p^3.
Furthermore
ℐ=-λ^2w_q/H_p(1+w_p/H_p)^-1
=ℐ_1+ℐ_2+ℐ_3+O(t^4).
Here
ℐ_1=-λ^2w_q/H_p, ℐ_2=λ^2w_qw_p/H^2_p, ℐ_3(w)=-λ^2w_qw_p^2/H^3_p.
Inserting (<ref>) and (<ref>) into (<ref>) and
equating terms of the same power with respect to t, we get
Av_0:=-(v_0p/H_p^3)_p-(v_0q/H_p)_q=0 ,
Bv_0:=-v_0p/H_p^3+v_0=0 ,
v_0=0 .
As we have shown in previous section the kernel of the above operator is one dimensional and is generated by the function
v_0=α_0(p)cos(τ_*q), α_0=γ(H(p);τ_*)H_p.
The next term in the asymptotics satisfies the boundary value problem
Av_1+(v_0q^2/2H_p^2+3/2v_0p^2/H_p^4)_p+(v_0qv_0p/H^2_p)_q=0 ,
Bv_1+v_0q^2/2H_p^2+3/2v_0p^2/H_p^4=0 ,
v_1=0 .
The solution of this problem, orthogonal to v_0 in L^2, is given by
v_1=α_1(p)+β_1(p)cos(2τ_* q),
where α_1 and β_1 satisfy the problem (<ref>) with τ=0 and τ=2τ_* respectively, with certain right-hand sides.
Further, the term v_2 is found from the following problem
Av_2+(v_0qv_1q/H_p^2+3v_0pv_1p/H_p^4+𝒥_3(v_0))_p
+(v_1qv_0p+v_0qv_1p/H^2_p+ℐ_3(v_0))_q=2λ_2(v_0q/H_p)_q ,
Bv_2+v_0qv_1q/H_p^2+3v_0pv_1p/H_p^4+𝒥_3(v_0)=0
v_2(q,0)=0.
The solvability condition for the last problem has the form
2λ_2∫_Q_pv_0q^2/H_pdqdp-∫_Q_p((v_0qv_1q/H_p^2+3v_0pv_1p/H_p^4)v_0p+v_0qv_1p+v_1qv_0p/H^2_pv_0q)dqdp
+∫_Q_p((2v_0p^3/H_p^5+v_0pv_0q^2/H_p^3)v_0p+v_0p^2v_0q/H_p^3v_0q)dqdp=0.
This relation can be used to find λ_2. It is quite difficult to determine the sign of λ_2 from this relation, but it implies continuity of λ_2 with respect to R and ω. The function v_2 has the form
v_2=α_2(p)cos(τ_* q)+β_2(p)cos(3τ_* q),
where α_2 and β_2 satisfy the problem (<ref>) with τ=τ_* and τ=3τ_* respectively with certain right-hand sides.
Thus we have shown that λ and v have the form (<ref>) and (<ref>) respectively. More exactly v_0 is given by (<ref>), v_1 is represented as (<ref>) and v_2 by (<ref>).
§.§ Formula for λ_2 and the proof of the relation (<ref>)
Using the representation (<ref>), (<ref>) with h=H+w, where w is evaluated in the previous section, we can write the Frechet derivatives of the operators 𝒥 and ℐ in the form
d𝒥(U)=-U_p/H_p^3+(w_qU_q/H_p^2+3w_pU_p/H_p^4)-6w_p^2U_p/H_p^5
-w_q^2U_p+2w_pw_qU_q/H_p^3+O(t^3)
and
dℐ(U)=-λ^2U_q/H_p+w_pU_q+w_qU_p/H_p^2-w_p^2U_q+2w_qw_pU_p/H_p^3+O(t^3).
The eigenvalue problem is described by the boundary value problem
(d𝒥(U))_p+(dℐ(U))_q=(μ_2t^2+O(t^3))U
d𝒥(U)+U=0
U=0
We are looking for the eigenfunction U in the form
U=U(q,p;t)=U_0(q,p)+tU_1(q,p)+t^2U_2(q,p)+O(t^3), U_0=v_0.
Equating terms of the same order with respect to t, we get
AU_1+(v_0qU_0q/H_p^2+3v_0pU_0p/H_p^4)_p+(v_0pU_0q+v_0qU_0p/H_p^2)_q=0 ,
BU_1+(v_0qU_0q/H_p^2+3v_0pU_0p/H_p^4)=0 ,
U_1=0 .
Comparing this problem with (<ref>) and using that U_0=v_0, we conclude that U_1=2v_1.
Next, we write the equation for U_2
-2λ_2(U_0q/H_p)_q+AU_2+(v_1qU_0q+v_0qU_1q/H_p^2+3v_1pU_0p+v_0pU_1p/H_p^4)_p
+(v_1pU_0q+v_1qU_0p+v_0pU_1q+v_0qU_1p/H_p^2)_q
-(6v_0p^2U_0p/H_p^5
+v_0q^2U_0p+2v_0pv_0qU_0q/H_p^3)_p-(v_0p^2U_0q+2v_0qv_0pU_0p/H_p^3)_q=μ_2U_0
and the boundary equations U_2=0 for p=0 and
BU_2+(v_1qU_0q+v_0qU_1q/H_p^2+3v_1pU_0p+v_0pU_1p/H_p^4)
-(6v_0p^2U_0p/H_p^5+v_0q^2U_0p+2v_0pv_0qU_0q/H_p^3)=0
Since U_0=v_0 and U_1=2v_1, the solvability condition for (<ref>) has the form
2λ_2∫_Q_pv_0q^2/H_pdqdp-3∫_Q_p(v_1qv_0q/H_p^2+3v_0pv_1p/H_p^4)v_0pdqdp
-3∫_Q_pv_1pv_0q+v_1qv_0p/H_p^2v_0qdqdp+3(∫_Q_p(2v_0p^3/H_p^5+v_0q^2v_0p/H_p^3)v_0p+v_0p^2v_0q^2/H_p^3)dqdp
=μ_2∫_Q_p v_0^2dqdp.
Taking the sum of (<ref>) and (<ref>) with the factor -3, we get
-4λ_2∫_Q_pv_0q^2/H_pdqdp=μ_2∫_Q_p v_0^2dqdp,
which coincides with (<ref>).
§ THE COEFFICIENT Λ_2 FOR THE IRROTATIONAL FLOW
In this section we evaluate the coefficient λ_2
in the case ω=0. The problem (<ref>) is solvable if R≥ R_c, where R_c=3/2. If R>R_c then the equation
1/d^2+2d=2R
has exactly two solutions 0<d_-<1<d_+ which are called supercritical and subcritical, respectively. The Stokes branches appear only for the stream solutions (Y/d_+,d_+).
We will make the following change of variables
X=x/d_+, Y=y/d_+-1, ξ(X)=η(x)/d_+-1, Ψ(X,Y)=ψ(x,y).
Then the problem (<ref>) takes the form
Δ_x,yψ=0 ,
|∇_x,yψ|^2+2θη=1 ,
ψ=1 ,
ψ=0 ,
where
θ=d_+^3.
So θ is the only parameter in the problem and θ∈ (1,∞).
In the irrotational case one can derive an explicit equation for λ_2(θ). This derivation is based on the application of the integral Byatt-Smith equation, see <cit.>.
§.§ Hodograph transformation
Let x + iy→ϕ + iψ be a conformal mapping of
D={(x,y) : x∈ R, -1<y<η(x)}
onto R × (0, 1).
Now we apply the hodograph transform, that is, use the imaginary part y(ϕ,ψ)
of the inverse conformal mapping as the unknown function instead of the stream
function ψ and the potential ϕ. From problem (<ref>) we get the following one:
y_ϕϕ+y_ψψ=0, (ϕ,ψ)∈ R×(0,1);
y=-1, ψ=0,ϕ∈ R;
y=η, ψ=1,ϕ∈ R;
(y_ϕ^2+y_ψ^2)^-1+2θ y=1, ψ=1, ϕ∈ R.
Let us eliminate y in order to obtain an equation that contains only η. It is clear
that relations (<ref>) and (<ref>) yield
y_ψ(ϕ,1)=(1/(1-2θη(ϕ))-η_ϕ^2(ϕ))^1/2
Here and below we write η(ϕ) instead of η(x(ϕ, 1)) and hope that this will not cause
confusion. The Dirichlet-to-Neumann operator in the left-hand side of formula (<ref>)
can be expressed by virtue of the Fourier transform
y(τ,ψ)=∫_-∞^∞ y(ϕ,ψ)e^iτϕdϕ.
In order to solve
the Dirichlet problem (<ref>)–(<ref>) we define the operator N by
Nf(ξ)=ν(ξ) f(ξ), ν(ξ)=ξcothξ .
The important property of this operator is
N(cos(τϕ))=ν(τ)cos(τϕ) .
Let
ℱ(u,v)=v^2H_1(u,v)-u^2H_0(u),
where
H_0(u)=2θ^2[2+S(u)]/S(u)[1+S(u)]^2, H_1(u,v)=S(u)/1+√(1-v^2S^2(u))
and S(u)=√(1-2θ u). Then equation for η=η(ϕ) has the form
(θ I-N)η=ℱ(η,η_ϕ).
Here I is the identity operator. This equation coincides with that of Byatt-Smith <cit.> up to some algebraic manipulations. It is also used in
<cit.> and <cit.>, where various properties of this equation can be found.
Equation (<ref>) is valid for all solutions with arbitrary period. To fix the period we make the change of variable
φ=λϕ, λ=Λ_0/Λ.
Then equation (<ref>) becomes
(θ I-λ N)η=ℱ(η,λη_ϕ).
We are looking for a solution to (<ref>) in the form
η(φ)=t(η_1+tη_2+t^2η_3+…)
=t(cos(τ_*φ)+t(a_0+a_1cos(2τ_*φ))+t^2(a_2cos(τ_*φ)+...)+⋯)
and
λ=1+λ_2t^2+⋯,
where τ_* is the root of the equation
τcothτ =θ.
Using that
H_0(u)=θ^2/2(3+5θ u)+O(u^2) H_1(u,v)=1/2(1-θ u)+O(u^2+v^2)
we can solve (<ref>) asymptotically
(θ I- N)η_2=1/2η_1φ^2-3θ^2/2η_1^2
and
(θ I-N)η_3-λ_2Nη_1=η_1φη_2φ-3θ^2η_1η_2-(5θ^3/2)η_1^3-(θ/2)η_1η_1φ^2.
From (<ref>) it follows
(θ-1)a_0=(τ_*^2-3θ^2)/4, (ν(2τ_*)-θ)a_1=(3θ^2+τ_*^2)/4.
Using the relations
cos Acos B=1/2(cos(A+B)+cos(A-B)), sin Asin B=1/2(cos(A-B)-cos(A+B)),
and
equating in (<ref>) coefficients in cos(τ_*φ), we obtain
-λ_2ν(τ_*)=τ_*^2a_1-3θ^2a_0-3θ^2/2a_1-15θ^3/8-τ_*^2θ/8=:f(θ).
§.§ Sign of λ_2
Since ν(τ_*)=θ we have τ_*<θ. One can check that the function ν(ξ) is convex and hence
θ=ν(τ_*)<1/2(1+ν(2τ_*)), which implies θ-1<ν(2τ_*)-θ.
In the case θ≫ 1 we have
τ_*≈θ, a_0≈-θ/2, a_1≈θ, ν(τ_*)=θ, ν(2τ_*)≈ 2τ_*,
and hence
λ_2≈θ^2.
If we assume that θ=1+ϵ where ϵ is a small positive number then we get
ν(τ)=1+τ^2/2+⋯, τ_*=√(2ϵ), ν(2τ_*)=1+4ϵ,
a_0=-3/4ϵ, a_1=3/16ϵ
and
λ_2=-9/(4ϵ)·7/8=-63/(32ϵ).
Evaluating the root θ_0 of the equation f(θ)=0 we get
θ_0≈ 2.479.
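As a sanity check (ours), f(θ) can be evaluated directly, reading the multiplier symbol as ν(ξ)=ξcothξ as above; bisection then places the root slightly below 2.5, in reasonable agreement with the quoted value:

import math
from scipy.optimize import brentq

nu = lambda x: x / math.tanh(x)     # our reading of nu(xi) = xi coth xi

def f(theta):
    tau = brentq(lambda t: nu(t) - theta, 1e-8, 50.0)   # nu(tau_*) = theta
    a0 = (tau**2 - 3.0 * theta**2) / (4.0 * (theta - 1.0))
    a1 = (3.0 * theta**2 + tau**2) / (4.0 * (nu(2.0 * tau) - theta))
    return (tau**2 * a1 - 3.0 * theta**2 * a0 - 1.5 * theta**2 * a1
            - 15.0 * theta**3 / 8.0 - tau**2 * theta / 8.0)

print(brentq(f, 1.5, 4.0))   # a root near 2.5, close to theta_0 ~ 2.48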
Therefore if θ∈ (1,θ_0) then
Λ_2>0.
According to (<ref>) and (<ref>), we conclude that
μ_2>0 .
§.§ Upper estimates of the Froude number
The following relation connecting d_- and d_+ can be found in Sect. 2.1, <cit.> (see formula (14) there):
d_+/d_-=1+√(1+8d_-^3)/4d_-^3,
which implies
d_+=1+√(1+8d_-^3)/4d_-^2.
A necessary condition for the existence of a solitary wave is the lower estimate F>1. Therefore the depth d=d_- corresponds to solitary waves and the corresponding Froude number is
F=d_-^-3/2.
The best known upper estimate for the Froude number can be derived from <cit.> as it is explained in Introduction of <cit.> and it is given by (<ref>).
Since the function
x→1+√(1+8x^3)/4x^2
is strongly decreasing, we conclude that the condition F^2=d_-^-3>2, which implies non-existence of solitary waves, is equivalent to θ=d_+^3>2.118.
Another numerical estimate, F<1.29, is obtained in <cit.>. Both these estimates together with (<ref>) lead to relations (<ref>) and (<ref>).
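Both thresholds can be cross-checked (our computation) with the function θ(F) from the Introduction:

import math
theta = lambda F: ((F + math.sqrt(F**2 + 8.0)) / 4.0)**3 * F
print(theta(math.sqrt(2.0)))   # ~2.118, the value of theta at F = sqrt(2)
print(theta(1.29))             # ~1.715, the value of theta at F = 1.29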
§.§ On the validity of Assumption A
As before we assume here that ω=0. Consider the branch (<ref>) of Stokes waves which starts from a uniform stream solution. According to <cit.> the limit behaviour of this branch is reduced to one of the following options: the branch approaches a solitary wave or it approaches an extreme wave. If we assume that F>√(2) then the first option is impossible due to the estimate (<ref>). Therefore in this case the branch approaches an extreme wave, which has the angle 120^∘ at the crest. By <cit.> and Theorem 3.1, <cit.>,
the number of negative eigenvalues of the Frechet derivative increases without bound as t approaches infinity. As a result we arrive at (<ref>). Similarly, if the numerical estimate F<1.29 is accepted, then we arrive at the interval (<ref>) where the Assumption A is valid. Certainly both conditions (<ref>) and (<ref>) are only sufficient for the validity of the Assumption, and this problem requires further research.
§ ACKNOWLEDGMENTS
I want to thank M. Wheeler for fruitful discussions on the estimates of the Froude number.
§ REFERENCES
Am C.J. Amick, Bounds for water waves,
Arch. Ration. Mech. Anal., 99, pp. 91–114, 1987.
T2 CJ Amick, LE Fraenkel, JF Toland, On the Stokes conjecture for the wave of extreme form,
Acta Mathematica 148 (1), 1982.
BS J.G.B. Byatt-Smith, An exact integral equation for steady surface waves, Proc. Roy. Soc. Lond. A 315 (1970) 405–418.
BDT1 B Buffoni, EN Dancer, JF Toland, The Regularity and Local Bifurcation of Steady Periodic Water Waves,
Archive for rational mechanics and analysis 152 (3), 207-240, 2000.
BDT2 B Buffoni, EN Dancer, JF Toland, The sub-harmonic bifurcation of Stokes waves,
Archive for rational mechanics and analysis 152 (3), 241-271, 2000.
Che Chen, B. and Saffman, P.G. Numerical evidence for the existence of new types of gravity
waves on deep water. Stud. Appl. Math. 62, 1980.
CSst A Constantin, W Strauss, Exact steady periodic water waves with vorticity,
Communications on Pure and Applied Mathematics 57 (4), 481-527, 2004.
HVB83 J. K. Hunter and Jean-Marc Vanden-Broeck. Accurate computations for steep solitary
waves. Journal of fluid Mechanics, 136:63–71, 1983.
KP74 G. Keady and W. G. Pritchard. Bounds for surface solitary waves. Proc. Cambridge Philos. Soc.,
76:345–358, 1974.
Koz1 V. Kozlov, The subharmonic bifurcation of Stokes waves on vorticity flow, JDE, 2023, arXiv:2204.10699.
Koz1a V.Kozlov, On first subharmonic bifurcations in a branch of Stokes waves, arXiv:2303.11440, 2023.
KN2008 V Kozlov, N Kuznetsov, On behaviour of free-surface profiles for bounded steady water waves,
Journal de mathématiques pures et appliquées 90 (1), 1-14, 2008.
KN14 V Kozlov, N Kuznetsov, Dispersion equation for water waves with vorticity and Stokes waves on flows with counter-currents,
Archive for Rational Mechanics and Analysis 214 (3), 971-1018, 2014.
KN11a V Kozlov, N Kuznetsov, The Benjamin–Lighthill conjecture for near-critical values of Bernoulli’s constant,
Archive for rational mechanics and analysis 197, 433-488, 2010.
KL1 V Kozlov, E Lokharu, Global bifurcation and highest waves on water of finite depth,
arXiv preprint arXiv:2010.14156, 2020.
KL2 V Kozlov, E Lokharu, On negative eigenvalues of the spectral problem for water waves of highest amplitude,
Journal of Differential Equations, 342, 239-281, 2023.
KL3 V Kozlov, E Lokharu, On Rotational Waves of Limit Amplitude,
Functional Analysis and Its Applications 55 (2), 165-169, 2021.
KLW V Kozlov, E Lokharu, MH Wheeler, Nonexistence of subcritical solitary waves,
Archive for Rational Mechanics and Analysis, 241 (1), 535-552, 2021.
LHF74 M. S. Longuet-Higgins and J. D. Fenton. On the mass, momentum, energy and circulation
of a solitary wave. II. Proc. Roy. Soc. (London) Ser. A, 340:471–493, 1974.
McL J. B. McLeod, The Stokes and Krasovskii conjectures for the wave of greatest height, Studies in Applied
Mathematics, 98 (1997), pp. 311-333.
Mil80 John W. Miles. Solitary waves. In Annual review of fluid mechanics, Vol. 12, pages
11–43. Annual Reviews, Palo Alto, Calif., 1980.
P2 PI Plotnikov, A proof of the Stokes conjecture in the theory of surface waves,
Studies in Applied Mathematics, 108 (2), 2002.
Sa Saffman, P.G. Long wavelength bifurcation of gravity waves on deep water J. Fluid Mech.
101, 1980.
Star Victor P. Starr. Momentum and energy integrals for gravity waves of finite height. J. Mar. Res., 6:175–
193, 1947.
VW1 E Varvaruca, GS Weiss, A geometric approach to generalized Stokes conjectures,
Acta mathematica 206 (2), 363-403, 2011.
We M. Wheeler, The Froude number for solitary water waves with vorticity,
Journal of Fluid Mechanics 768, 91-112, 2015.
§.§ Small τ_*
Here we assume that 0<t≪τ_*≪ 1. The relation (<ref>) for finding v_1 has the form
Av_1+3/2(v_0p^2/H_p^4)_p=O(τ_*^2) ,
Bv_1+3/2v_0p^2/H_p^4=O(τ_*^2) ,
v_1=0 .
We are looking for the solution in the form
v_1(q,p)=a_1(p)cos^2(τ_*q).
Then a_1 satisfies the equation
a_1p=3/2α_0p^2/H_p+H_p^3c_1,
where c_1 is a constant. Integrating this relation from 0 to p we get
a_1(p)=∫_0^p(3/2α_0p^2/H_s+H_s^3c_1)ds.
From the boundary condition at p=1 we get
a_1(1)-c_1=0.
Therefore
c_1(1-∫_0^1H_s^3ds)=∫_0^13/2α_0p^2/H_sds.
Since F<1 and the left-hand side is equal to c_1(1-F^-2), the coefficient c_1 is negative and
c_1=(1-F^-2)^-1∫_0^13/2α_0p^2/H_sds.
Now we turn to the next term v_2. The problem for v_2 is the following
Av_2+(3v_0pv_1p/H_p^4+𝒥_3(v_0))_p=2λ_2(v_0q/H_p)_q+O(τ_*^2) ,
Bv_2+3v_0pv_1p/H_p^4+𝒥_3(v_0)=O(τ_*^2)
v_2(q,0)=0.
It is solvable if
2λ_2τ_*^2∫_Q_pv_0^2/H_pdqdp=∫_Q_p(3v_0pv_1p/H_p^4+𝒥_3(v_0))v_0pdqdp.
Since all terms on the right-hand side except those proportional to c_1 remain bounded as τ_*→0, we have
2λ_2τ_*^2∫_0^1α_0^2/H_pdp=-c_1∫_0^13α_0p^2/H_pdp+O(1).
|
http://arxiv.org/abs/2307.05899v1 | 20230712041108 | DiffuseGAE: Controllable and High-fidelity Image Manipulation from Disentangled Representation | [
"Yipeng Leng",
"Qiangjuan Huang",
"Zhiyuan Wang",
"Yangyang Liu",
"Haoyu Zhang"
] | cs.CV | [
"cs.CV"
] |
National Innovation Institute of Defense Technology
Corresponding Author.
Diffusion probabilistic models (DPMs) have shown remarkable results on various image synthesis tasks such as text-to-image generation and image inpainting. However, compared to other generative methods like VAEs and GANs, DPMs lack a low-dimensional, interpretable, and well-decoupled latent code. Recently, diffusion autoencoders (Diff-AE) were proposed to explore the potential of DPMs for representation learning via autoencoding. Diff-AE provides an accessible latent space that exhibits remarkable interpretability, allowing us to manipulate image attributes based on latent codes from the space. However, previous works are not generic as they only operated on a few limited attributes. To further explore the latent space of Diff-AE and achieve a generic editing pipeline, we proposed a module called Group-supervised AutoEncoder(dubbed GAE) for Diff-AE to achieve better disentanglement on the latent code. Our proposed GAE has trained via an attribute-swap strategy to acquire the latent codes for multi-attribute image manipulation based on examples. We empirically demonstrate that our method enables multiple-attributes manipulation and achieves convincing sample quality and attribute alignments, while significantly reducing computational requirements compared to pixel-based approaches for representational decoupling. Code will be released soon.
§ INTRODUCTION
Score-based generative models (SGMs) <cit.> and diffusion probabilistic models (DPMs) <cit.> have gained great attention for their training stability, scalability, and impressive image synthesis quality. These models have been shown to achieve impressive performance on diverse domains spanning computer vision <cit.>, natural language processing <cit.>, and multi-modal modeling <cit.>. Nevertheless, the diffusion model has not been fully investigated as of yet.
Unsupervised representation learning via generative modeling is currently a highly active topic. Some latent variable generative models, like GANs and VAEs, are natural candidates for it, since we can easily get interpretable latent codes in the process of generative modeling. But in diffusion models, we only get a sequence of pixel-level noise and intermediate reconstructed images which lack high-level semantic information. In light of this, Diff-AE <cit.> is an insightful investigation of how to use a conditional diffusion model for representation learning via autoencoding. Specifically, they use a traditional
CNN encoder E_ϕ to extract meaningful representations z_sem from images, then employ a conditional DPM as the decoder D_θ for image reconstruction, taking the latent code z_sem∈𝒵 as its input. Following Ho et al. <cit.>, training is achieved by optimizing the ℒ_simple loss function with respect to ϕ and θ.
ℒ_simple:=𝔼_t, 𝐱_0, ϵ[ϵ-ϵ_θ(√(α̅_t)𝐱_0+√(1-α̅_t)ϵ, t, z_sem)^2_2]
where ϵ∈ℝ ^ 3× h × w∼𝒩 (0, I), 𝐱_0 is input image, z=E_ϕ(𝐱_0), α̅_t is a hyperparameter. Diff-AE is competitive with state-of-the-art models on various datasets' benchmarks and downstream tasks. More importantly, we can utilize its semantic-rich latent space to perform image synthesis.
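For concreteness, a minimal PyTorch-style sketch (ours; encoder, eps_model and the noise schedule alpha_bar are assumed to be given) of one optimization step for ℒ_simple:

import torch

def diffae_loss(x0, encoder, eps_model, alpha_bar, T):
    # sample a timestep and noise, form x_t ~ q(x_t | x_0), predict the noise
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    z_sem = encoder(x0)                      # semantic latent code
    return ((eps - eps_model(x_t, t, z_sem)) ** 2).mean()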
Latent variable generative models, such as VAEs and GANs, impose independence constraints on their prior distributions, making them inherently decoupled.
Yet Diff-AE is not bound by such constraints, so there is still significant promise in the realm of controllable image synthesis, like multi-attribute image manipulation and precise image inpainting.
To address the aforementioned issues, we further explore the disentanglement capability of Diff-AE semantic space 𝒵.
We can obtain a more meaningful and readily controllable latent code z_sem∈𝒲 and utilize it to perform image synthesis.
𝒵 and 𝒲 respectively denote latent space obtained by initial decoupling and further decoupling.
To this end, we propose a novel decoupled representation learning framework called group-supervised diffusion autoencoder (dubbed DiffuseGAE) for zero-shot image synthesis by recombination of multi-attribute latent code.
As shown in Fig. <ref>, DiffuseGAE consists of a diffusion autoencoder and a novel autoencoder called group-supervised autoencoder(GAE).
It is known that diffusion models are often criticized for their long generation time, so we adopt a conditional Denoising Diffusion Implicit Model (DDIM), which can be turned into a deterministic diffusion process. It is the first-order case of DPM-solver, a fast sampling method that makes full use of the semi-linearity of diffusion ODEs. Under these circumstances, we can easily transfer it into a more efficient method.
In addition, the determinism of DDIM allows us to obtain a consistent mapping between the image and the noise, which guarantees the quality of image reconstruction.
Following the paradigm of Diff-AE <cit.>, a diffusion autoencoder composed of a CNN encoder and the aforementioned DDIM is trained in an end-to-end manner.
To more fully exploit the latent space encoded by the above autoencoder, we propose a module called Group-Supervised AutoEncoder (GAE), trained in the latent space. It is used for decoupling the latent space encoded by the trained diffusion autoencoder. We train it with the attribute-swap strategy mentioned in GZS-Net <cit.>. The latent code generated by GAE is passed to the DDIM as the new z_sem for image generation. But unlike GZS-Net, we impose constraints on the latent space, making GAE an efficient fine-tuning of the original diffusion autoencoder.
In order to narrow the gap between the inputs of the prior and subsequent DDIM, we then jointly train the two previously mentioned models in an end-to-end manner.
We conduct several experiments which clearly demonstrate the effectiveness and generality of our proposed framework. We can obtain a more decoupled, controllable, and intuitive representation space through our method. We qualitatively and quantitatively demonstrate that our approach achieves better performance on information compression and representation disentanglement. Thanks to the better-decoupled latent space, our results on multi-attribute image reconstruction and image interpolation are competitive with previous state-of-the-art decoupled representation models.
Our main contributions can be summarized as follows:
* We further explore the disentanglement capability of the latent code extracted from the diffusion autoencoder and enable each part of the latent code to represent an attribute.
* We design a plug-in module called Group-supervised AutoEncoder(GAE) for disentanglement and precise reconstruction in latent codes and images.
* We propose DiffuseGAE, a novel decoupled representation learning framework capable of better image reconstruction and multi-attribute disentanglement, as well as zero-shot image synthesis than other techniques for representation decoupling.
§ BACKGROUND
§.§ Denoising Diffusion Probabilistic Models
DDPM<cit.> is modeled as x=x_0 ⇌x_1 ⇌x_2 ⇌⋯⇌x_T-1⇌x_T=n. The forward process q(x_t|x_t-1) is transforming data x into random noise n in a Markovian manner and the reverse process p(x_t-1|x_t) is restoring n to x, which is the generative model we desired.
Unlike VAEs, the forward process q is modeled as a constant normal distribution.
q(x_t | x_t-1) =𝒩(x_t ; α_t x_t-1, β_t^2 I)
q(x_t | x_0) =𝒩(x_t ; α̅_t x_0, β̅_t^2 I)
where α_t^2+β_t^2=1, α̅_t=α_1α_2 ⋯α_t, β̅_t = √(1-α̅_t^2). We can get q(x_t-1|x_t, x_0) using Bayes' theorem.
q(x_t-1|x_t, x_0)
=q(x_t |x_t-1) q(x_t-1|x_0)/q(x_t |x_0)
=𝒩(x_t-1 ; α_t β̅_t-1^2/β̅_t^2x_t+α̅_t-1β_t^2/β̅_t^2x_0, β̅_t-1^2 β_t^2/β̅_t^2I)
It is straightforward to get the variational lower bound using Jensen’s inequality when trading the cross entropy ℒ_CE as optimization objectives, and ℒ_VLB can be further rewritten to a combination of KL-divergence terms.
ℒ_CE =-𝔼_q(𝐱_0)log p_θ(𝐱_0)
≤𝔼_q(𝐱_0: T)[logq(𝐱_1: T | 𝐱_0)/p_θ(𝐱_0: T)]=L_VLB
=L_T+L_T-1+⋯+L_0
where L_T =D_KL[q(𝐱_T|𝐱_0) p_θ(𝐱_T)],
L_t =D_KL[q(𝐱_t | 𝐱_t+1, 𝐱_0)
p_θ(𝐱_t | 𝐱_t+1)]
for 1≤ t ≤ T-1. L_T will be a constant because there are no training parameters in the forward process q, and the loss term L_t is parameterized to minimize the difference between the real and the predicted noise. Ho et al. <cit.> empirically found that diffusion models similar to equation (<ref>) work better when ignoring the weight term. After training, we can iteratively generate images via p(x_t-1 | x_t) from random noise.
ℒ_t =𝔼_𝐱_0, ϵ[1/2σ_θ(𝐱_t, t)_2^2μ̃_t(𝐱_t, 𝐱_0)-μ_θ(𝐱_t, t)^2]
=𝔼_𝐱_0, ϵ[(1-α_t)^2/2α_t(1-α̅_t)σ_θ_2^2ϵ_t-ϵ_θ(𝐱_t, t)^2]
∝𝔼_t, 𝐱_0, ϵ_t[ϵ_t-ϵ_θ(𝐱_t, t)^2] = ℒ_simple
§.§ Denoising Diffusion Implicit Models
From the derivation of the DDPM mentioned above, it can be seen that the estimation of p(x_t-1 | x_t) proceeds in a progressive manner. We note that the loss function only depends on q(x_t | x_0) and the sampling process only depends on p(x_t-1 | x_t), while q(x_t | x_t-1) does not appear explicitly in the whole process. In light of this, Song et al. <cit.> reparameterize the posterior q(x_1:T|x_0), turning it into a non-Markovian process. Since we have relaxed the condition on q(x_t | x_t-1), p(x_t-1 | x_t, x_0) has a larger solution space, i.e. ∫ p(x_t-1 | x_t, x_0) p(x_t | x_0) d x_t=p(x_t-1 | x_0). Song et al. derive p(x_t-1 | x_t, x_0) using the method of undetermined coefficients. We can replace x_0 with 1/α̅_t(x_t-β̅_t ϵ_θ(x_t, t)) and reuse DDPM's parameters because p(x_t|x_0) stays unchanged, and then get:
p( x_t-1 | x_t) ≈ p(x_t-1 | x_t, x_0=1/α̅_t(x_t-β̅_t ϵ_θ(x_t, t)))
=𝒩( 1/α_t[x_t-(β̅_t-α_t √(β̅_t-1^2-σ_t^2)) ϵ_θ(x_t, t)],σ_t^2 I )
When σ_t = 0, they empirically observed better quality compared with other σ settings. Meanwhile, the generative process p becomes deterministic, giving a stable mapping from noise to reconstructed image. In addition, due to the semi-linear property of diffusion ODEs, DDIM has better scalability in terms of accelerating image generation. That is why we use DDIM for our decoder.
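In this convention (α_t^2+β_t^2=1, barred quantities are running products), the deterministic update with σ_t=0 gives the following generative loop; this is a sketch of ours, not the authors' code:

import torch

@torch.no_grad()
def ddim_decode(x_T, z_sem, eps_model, alpha, beta_bar):
    # alpha[t], beta_bar[t] follow the convention above; iterate t = T-1, ..., 1
    x = x_T
    for t in range(len(alpha) - 1, 0, -1):
        tt = torch.full((x.shape[0],), t, device=x.device)
        eps = eps_model(x, tt, z_sem)
        x = (x - (beta_bar[t] - alpha[t] * beta_bar[t - 1]) * eps) / alpha[t]
    return x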
§ METHODOLOGY
§.§ Defects in the latent space of diffusion autoencoder
We are inspired by observations made regarding Diff-AE's latent space. As mentioned in <cit.>, we can perform single-attribute image synthesis by manipulating the latent code because of its semantic and interpretable latent space. However, such methods are not effective when dealing with more fine-grained categories.
In Figure <ref>, we perform T-SNE visualization with Diff-AE <cit.> and PDAE <cit.> on the ilab_20M dataset, a toy vehicle dataset for using
different image variations (e.g. identity, pose, background) in image synthesis. We observe that their latent space does not match their generative power, which hinders the further development of such models. Essentially, if we can further explore the latent space of the diffusion autoencoder, for instance by making it more decoupled and interpretable, we can effectively perform multi-attribute image synthesis. Moreover, according to Eq. (<ref>), the more interpretable information of the images x_0 that z_sem contains, the smaller the gap between the means of p(x_t-1|x_t) and q(x_t-1|x_t) will be.
Thus we propose a generic learning framework called DiffuseGAE which disentangles the space of diffusion autoencoder to achieve more controllable image manipulation. fig:pipeline shows our pipeline and dataflow for DiffuseGAE. The training process of our proposed method consists of three steps, pre-training, refinement, and fine-tuning.
§.§ Pre-training of DiffuseGAE
In the pre-training process, following the paradigm of the diffusion autoencoder <cit.>, we employ an encoder E_ϕ(x_0) to extract the latent code from input images and adopt a conditional deterministic diffusion model (DDIM) as the decoder D_θ for image reconstruction.
Encoder
we follow the build-up rule in Park et al. <cit.> and redesign a new encoder called Alter-ResNet that replaces the conv block closer to the end of a stage with an attention block. The architecture of our Alter-ResNet is illustrated in Fig. <ref>.
In addition, as shown in Eq. (<ref>), the diffusion model requires two inputs: the 2-D noise code x_t and z_sem. To transform images into related noise codes, we develop a stochastic encoder by re-deriving the reversed process of DDIM and DPM-solver++ <cit.>.
Since both approaches have the same posterior distribution q(x_t|x_0), their image inverse processes are equal,
x_t+1=√(α̅_t+1)x̂_0
+ √(1-α̅_t+1)·ϵ_θ(z,x_t, t)
where x̂_0 = (x_t-√(1-α̅_t)·ϵ_θ(z,x_t, t))/√(α̅_t).
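A sketch (ours) of this deterministic inversion, iterating the update above from t=0 up to T to obtain the noise code x_T:

import torch

@torch.no_grad()
def encode_to_noise(x0, z_sem, eps_model, alpha_bar):
    # alpha_bar is assumed to be a 1-D tensor of cumulative products
    x = x0
    for t in range(len(alpha_bar) - 1):
        tt = torch.full((x.shape[0],), t, device=x.device)
        eps = eps_model(x, tt, z_sem)
        x0_hat = (x - (1.0 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
        x = alpha_bar[t + 1].sqrt() * x0_hat + (1.0 - alpha_bar[t + 1]).sqrt() * eps
    return x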
Decoder
As previously noted, we select DDIM, a deterministic diffusion model as our decoder. To integrate z_sem into DDIM, we adopt adaptive Group Normalization(AdaGN) similar to <cit.> by applying channel-wise scaling & shifting twice to the normalized feature map h∈ℝ^c× h × w.
AdaGN(h, t, z_sem) = z_s (t_sGroupNorm(h) + t_b)+z_b
where z_s, z_b∈ℝ^c are produced by MLP(z_sem) and t_s, t_b∈ℝ^c by MLP(ψ(t)), with ψ used for creating sinusoidal timestep embeddings.
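A possible PyTorch realization of this layer (our sketch; the projection sizes are assumptions) is:

import torch.nn as nn

class AdaGN(nn.Module):
    def __init__(self, channels, z_dim, t_dim, groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels)
        self.z_proj = nn.Linear(z_dim, 2 * channels)   # -> (z_s, z_b)
        self.t_proj = nn.Linear(t_dim, 2 * channels)   # -> (t_s, t_b)

    def forward(self, h, t_emb, z_sem):
        z_s, z_b = self.z_proj(z_sem).chunk(2, dim=1)
        t_s, t_b = self.t_proj(t_emb).chunk(2, dim=1)
        ex = lambda v: v[:, :, None, None]             # broadcast over H, W
        h = ex(t_s) * self.norm(h) + ex(t_b)
        return ex(z_s) * h + ex(z_b)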
We reimplement the conditional DDIM based on the architecture of Pandey et al. <cit.> and still adopt the loss function (<ref>) of DDPM <cit.> to optimize our whole diffusion autoencoder.
§.§ Refining the latent space
After completing the pre-training, we aim to refine the latent space learned by the encoder E_ϕ so that each part of the refined latent code represents an attribute. Typically, the refined latent code z_dis is divided equally according to the number of dataset attributes. Inspired by GZS-Net <cit.>, we propose the Group-supervised AutoEncoder (GAE), trained on the z_sem dataset 𝒟_z in a GROUP manner. Unlike the pixel-wise GZS-Net, our proposed GAE is trained on GROUP latent codes, so our latent-wise approach achieves faster training and inference.
Specifically, we give our definition on GROUP(𝐂, 𝐬).
GROUP(𝐂, 𝐬): Given a dataset 𝒟 containing n samples 𝒟 = {x_i}_i=1^n, each sample x_i is made up of m attributes. We begin by selecting a sample x_C = {x_C^(1), x_C^(2), ⋯, x_C^(m)}.
We then form the set 𝒜 of all combinations of s of the m attributes of x_C, resulting in a set of C_m^s elements.
Finally, we select C^s_m samples from 𝒟 that correspond to the C^s_m elements in 𝒜: for each 𝐞∈𝒜 we pick one sample that agrees with x_C exactly on the attributes in 𝐞. Formally:
GROUP(𝐂, 𝐬)⇔ x_C ∪{x_𝐞 | x_𝐞^(j)=x_C^(j) for j∈𝐞, 𝐞∈𝒜}
We select a sample x_C from the dataset 𝒟 to generate GROUP(x_C,1). By iterating this process and passing the resulting data to the encoder of the aforementioned diffusion autoencoder, we obtain a dataset 𝒟_z^1 composed of latent codes in a group manner as our training dataset for GAE. We leave the case of s>1 for future study. In each group of the dataset 𝒟_z^1, there is exactly one attribute value shared between z_c and each of the remaining codes: |𝐀(z_c) ∩𝐀(z_other)|=1, where z_other denotes any z_sem in the group except z_c and |𝐀(z)| counts the attribute values of z. fig:group_overview illustrates the structure of our group in 𝒟_z.
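An illustrative way (ours) to assemble GROUP(x_C,1) indices from an attribute table attrs, one row of m attribute values per sample:

import random

def make_group(attrs, c):
    # for each attribute j, pick a sample sharing exactly attribute j with x_C
    group = [c]
    for j in range(len(attrs[c])):
        pool = [i for i in range(len(attrs)) if i != c
                and attrs[i][j] == attrs[c][j]
                and sum(a == b for a, b in zip(attrs[i], attrs[c])) == 1]
        group.append(random.choice(pool))   # assumes such a sample exists
    return group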
To further disentangle z_sem produced by diffusion autoencoders, we utilize supervised attribute-swap and unsupervised attribute-swap strategies based on 𝒟_z^1 in the training process of GAE.
Supervised Attribute-Swap
Similar to GZS-Net <cit.>, we can consider the attributes of z_sem to be equivalent to the image attributes because there exists a deterministic mapping between them. It is straightforward to see that two z_sem with identical values for one attribute should have near-exact reconstructions when we swap the part of the decoupled code which represents the aforementioned attribute. In addition, since each z_other has one and only one attribute value identical to z_c, we reassemble the parts of each z_other that represent the same properties as z_c and get z_re. The ẑ reconstructed from all z_re should also be almost identical to z_c. We can encourage disentanglement of attributes in the latent space in this way and obtain the supervised attribute-swap loss ℒ_ss.
Unsupervised Attribute-Reassemble
We must impose stricter constraints on the GAE to perform zero-shot image synthesis: we should be able to reconstruct an image by GAE even if it does not exist in the dataset. So we combine the above idea of regression matching by swapping latent representations and incorporate the idea of Cycle-GAN <cit.>. We select one part from each decoupled z_dis and reassemble them into z_u. The code ẑ generated by the decoder of GAE from z_u re-enters the same GAE for regression matching between z_u and ẑ_u. Formally, we have:
z_sem E⟶ z_dis f⟶ z_u D⟶ ẑ E⟶ ẑ_u
where E denotes the encoding process, D the decoding process, and f the reassemble operation. By calculating the L1 loss between z_u and ẑ_u, we obtain the unsupervised attribute-reassemble loss ℒ_ur.
We combine a total loss ℒ_dis to disentangle the latent space generated by diffusion autoencoder. Formally, we have:
ℒ_dis = ℒ_r + λ_ssℒ_ss + λ_urℒ_ur
where ℒ_r, ℒ_ss, ℒ_ur represent the reconstruction, supervised attribute-swap, and unsupervised attribute-reassemble losses, respectively. λ_ss,λ_ur>0 are scalar hyperparameters.
Our suggested values for λ_ss and λ_ur are 1.0 and 0.5, respectively. We train the whole GAE via the Adam <cit.> optimizer with a learning rate of 4e-5 and no weight decay.
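A rough sketch (ours) of the three loss terms; parts maps each attribute to its slice of z_dis (e.g. identity 0:60, background 60:80, pose 80:100), and gae.encode/gae.decode denote the two halves of GAE:

import torch

def swap_part(a, b, sl):
    out = a.clone()
    out[:, sl] = b[:, sl]
    return out

def gae_losses(gae, z_c, z_other, shared, parts):
    d_c, d_o = gae.encode(z_c), gae.encode(z_other)
    l_r = (gae.decode(d_c) - z_c).abs().mean()               # reconstruction
    sl = parts[shared]
    # supervised swap: exchanging the shared part must leave both codes intact
    l_ss = (gae.decode(swap_part(d_c, d_o, sl)) - z_c).abs().mean() \
         + (gae.decode(swap_part(d_o, d_c, sl)) - z_other).abs().mean()
    # unsupervised reassemble: mix parts of the two codes and cycle-match
    z_u = swap_part(d_c, d_o, parts["pose"])                 # one example mix
    l_ur = (gae.encode(gae.decode(z_u)) - z_u).abs().mean()
    return l_r, l_ss, l_ur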
Fine-tuning via jointly training
The gap between ẑ_sem generated by GAE and z_sem extracted from the encoder of the diffusion autoencoder is unavoidable. To minimize this gap, we need to train them in an end-to-end manner. Since our GAE is designed as a plug-in module, we can directly connect it to the diffusion autoencoder as shown in Fig.<ref>. We integrate the loss of diffusion autoencoder and GAE together as our final loss ℒ. Formally:
ℒ = ℒ_simple + γℒ_dis
where ℒ_simple is the loss for the diffusion autoencoder and ℒ_dis is that of GAE. γ controls the degree of disentanglement of the latent space (default 1), and λ_ss and λ_ur are both set to 0.5. We adopt the same Adam optimizer with no weight decay and a learning rate of 1e-5.
Each attribute is fixed to a part of z_dis when we finish training, we can achieve multi-attribute image manipulation by replacing the corresponding part in z_dis.
Our choice of training GAE post-hoc has a few empirical reasons. Firstly, we found training unstable when our diffusion autoencoder and GAE are trained jointly from scratch, which we attribute to the gap between their different optimization directions; training post-hoc ensures the powerful generation capability of the diffusion autoencoder. Secondly, since our GAE is trained on low-dimensional latent codes, GAE training takes only a fraction of the whole training time, enabling quick experiments on different GAEs with the same diffusion autoencoder.
§.§ Group AutoEncoder(GAE) Design
fig:group_overview shows the overview of GAE and how it works based on GROUP z_sem. Specifically, similar to all kinds of autoencoders,
our GAE is composed of an encoder and a decoder as usual; both are made up of a few MLP blocks and bottleneck blocks as in Fig. <ref>.b, with four MLPs and two BottleNecks in each of the encoder and decoder.
Regarding the selection of norm layer, we adopt LayerNorm and GroupNorm in blocks of encoder and decoder depending on different properties of their input z. We choose Silu as our activation function in all blocks.
We experiment with multiple architectures for the two blocks including MLP, MLP + skip connection, and using 2-D feature map from the last stage of encoder E_ϕ as input to GAE.
We have found that MLP + skip connection performs best while being very efficient. In addition, we empirically find that the last block of GAE has to be a skip connection; otherwise training of the model becomes unstable.
§ EXPERIMENTS
We evaluate our model on ilab_20M <cit.>, a collection of various kinds of vehicle images captured from different angles, with diverse attributes and backgrounds. We adopt the truncated version of ilab_20M, which comprises three attributes: identity (6), background (111), and pose (10). Our aim is to leverage our DiffuseGAE to conduct image manipulation in a higher-level semantic latent space.
Following PDAE <cit.> and GZS-Net <cit.>, we respectively set dimensions of z_sem and z_dis to 512 and 100 for all datasets. For ilab_20M, we partition the latent code among attributes as 60 for identity, 20 for background, and 20 for pose.
§.§ Disentanglement results on latent space
We qualitatively and quantitatively evaluate our method on its ability to decouple the latent space z_sem.
We visualize the latent space of the ilab_20M dataset using T-SNE to gain insights into the dataset and identify any patterns or clusters based on identity and background attributes. The results of our proposed GAE are presented in Fig. <ref>, which clearly demonstrate the effectiveness of our method in exploiting the latent space 𝒵 and effectively decoupling z_sem.
Under perfect disentanglement, the latent code of one attribute should always predict its attribute value: the better the disentanglement, the more accurate the prediction when we use the corresponding part of z_dis. Thus we train a linear classifier for each of the attributes and use them to calculate the perplexity matrix among attribute pairs. We denote by |𝒜_r| the number of attribute values in attribute r. For all datasets, we split them 80:20 into 𝒟_train and 𝒟_test. Table <ref> shows the results on the three attributes of ilab_20M. In the best case the entries should be 1 on the main diagonal and random values, i.e. 1/|𝒜_r|, off the diagonal. DiffuseGAE outperforms the baseline and other disentanglement methods except for AE + DS, which is trained by imposing such constraints on the latent space directly; it is therefore reasonable that AE + DS performs better in the perplexity matrix evaluation. Still, our DiffuseGAE achieves comparable results in this metric, while AE + DS shows inferior image synthesis performance compared to other methods including ours, as shown in fig:img_replacement and fig:img_recombination.
§.§ Performance on zero-shot image synthesis
We conduct two experiments to evaluate our DiffuseGAE in zero-shot image synthesis. i.e. image replacement and image recombination.
fig:img_replacement presents the qualitative results of our approach on ilab_20M. We compare ours with two methods based on autoencoding, GZS-Net and AE + DS, and against two disentangled and controllable GAN baselines: StarGAN <cit.> and ELEGANT <cit.>. Our method performs image replacement as follows:
We choose two images: the former provides the parts of z_dis corresponding to the attributes identity and pose, while the latter provides the part corresponding to the attribute background. We follow Eq. (<ref>) to generate the noise map x_T from the identity-column images. Then we pass the ẑ_sem decoded from the recombined ẑ_dis as the condition to generate images with T = 100 steps. We can clearly see that our method effectively improves the quality of generation while keeping the semantics unchanged.
For image recombination, we compare against GZS-Net, AE + DS, and StarGAN <cit.>. Similar to image replacement, we select three images to provide the attributes identity, background, and pose, each image being responsible for one attribute. Then we reassemble them into ẑ_dis and pass the decoded ẑ_sem to the conditional DDIM as the condition. Since there is no target image we wish to synthesize, we adopt x_T ∼𝒩(0, I) as the noise map to generate images iteratively. Our DiffuseGAE is significantly better than the other models in terms of generation quality. Moreover, other models which can be used for manipulating attributes are often black-box models, i.e. strong in knowing what to change but not in how to change it. In contrast, our method has a more intuitive pipeline for attribute manipulation.
fig:img_recombination shows our generative results on image recombination.
§.§ Autoencoding reconstruction quality
Although we aim to obtain a better decoupled latent space, and good reconstruction quality of an autoencoder might not be a good indicator of disentangled representation learning, it is worth noting that good reconstruction quality in autoencoders is often indicative of better information compression and can be useful for image manipulation.
To quantitatively analyze the disentanglement performance of the latent code, we randomly select 50k GROUP(𝐂, 1) groups of images as 𝒟_g for evaluation.
We get z_dis of all x_other via DiffuseGAE in each group and reassemble them according to their corresponding attributes. We take the reassembled latent codes ẑ_dis to generate 50k images for FID evaluation. We compute FID(↓) between the 50k generated images and the x_C of each group (50k in total).
In order to fully assess the effectiveness of the reconstruction, we also evaluated methods on a wider range of criteria, i.e. SSIM(↑) <cit.>, LPIPS(↓) <cit.>, MSE(↓) in Table <ref>. As we can see, DiffuseGAE outperforms other methods under similar latent dimensions.
§.§ Interpolation of meaningful latent code and trajectories
Given two images x_0^(1) and x_0^(2) from the dataset 𝒟, we encode them into (z_dis^(1), x_T^(1)) and (z_dis^(2), x_T^(2)) using our trained model. We evaluate our method qualitatively and quantitatively by running the generative process of the conditional DDIM from Slerp(x_T^(1), x_T^(2); α) under the condition ẑ_sem with 100 steps, expecting smooth trajectories along α. Here ẑ_sem is decoded from Lerp(z_dis^(1), z_dis^(2); α) by GAE.
To evaluate the smoothness of our method for single-attribute interpolation, we present examples of each attribute in fig:img_interpolation_single. We only operate on the part of z_dis that corresponds to the attribute we desire to interpolate. For the attribute identity, we show our results on inter-class and intra-class interpolation. fig:img_interpolation_single shows the trajectories for various values of α∈ [0,1] among the three attributes. Our method produces smooth results with well-preserved details from both images and also holds robust semantic consistency, i.e. only changing what we want, which verifies the disentanglement performance of our approach.
We have also experimented with multi-attribute interpolation on two images where all attributes are different. fig:img_interpolation_multi shows that our method performs better in multi-attribute image interpolation owing to the excellent decoupling capability.
To quantify the smoothness of the interpolation, we adopt Perceptual Path Length(PPL) introduced in StyleGAN <cit.>. Perfect smoothness means that the distance of the images generated by two z_sem close to each other should be small. In particular, we compute the following expectation over 1k pairs of sampled latent codes (z_1, z_2) and α∼ U(0,1), Formally:
PPL=𝔼[1/ϵ^2 d(G(Slerp(z_1, z_2 ; α)), G(Slerp(z_1, z_2 ; α+ϵ)))]
where G indicates image generator, d computes the distance between two generated images based on VGG16 and Slerp(·) denotes spherical interpolation.
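A sketch (ours) of this estimator, with slerp assumed given and d a perceptual distance such as LPIPS:

import torch

def ppl(G, d, slerp, z1, z2, eps=1e-4):
    alpha = torch.rand(z1.shape[0], 1, device=z1.device)
    a = G(slerp(z1, z2, alpha))
    b = G(slerp(z1, z2, alpha + eps))
    return (d(a, b) / eps**2).mean()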
We compute the metric on randomly selected 1k pairs of images from ilab_20M with x_T ∼𝒩(0, I). As shown in Fig. <ref>, our method outperforms Diff-AE under all timesteps in terms of interpolation smoothness.
§ RELATED WORK
Disentangled Representation Learning techniques infer independent latent factors from visual objects on the basis of the hypothesis that each factor contributes to the generation of a single semantic attribute. Following Variational Autoencoders <cit.>, thanks to their prior assumption, a class of models <cit.> achieves disentanglement by reducing the KL-divergence between the latent code and the normal distribution 𝒩(0,I), which is statistically independent. Still, these methods cannot achieve ideal disentanglement due to the presence of posterior collapse <cit.> and prior holes <cit.>.
On the other hand, they are unable to synthesize novel images never seen before. Stronger constraints need to be imposed on the latent code to obtain combinatorial generalizability. So we make the disentanglement explicit by imposing inverse constraints on the original conversion, i.e. supervised attribute-swap and unsupervised attribute-reassemble.
Image Manipulation has always been a challenge in the computer vision community, with various practical applications <cit.>, and Generative Adversarial Networks (GANs) have attracted widespread attention due to their outstanding performance. StarGAN <cit.> jointly trains domain information together with images and adds mask vectors to facilitate training across different datasets. ELEGANT <cit.> achieves representation disentanglement by constraining semantic encoding positions, thus enabling multi-attribute image manipulation. A wide range of methods <cit.> have been developed to obtain the mapping from images to latent space, so that semantic and natural modifications can be achieved by editing in the latent space. This is also what we achieve in our work.
In recent years, as diffusion models have become increasingly prevalent alternatives, image manipulation based on diffusion models has been intensively investigated. prompt2prompt <cit.> achieves image manipulation through the cross-attention mechanism of the diffusion model without any specification over the pixel space.
Diffusion Probabilistic Models <cit.> are a family of deep generative models that transform Gaussian noise into original images by a multi-step denoising process. They are closely related to score-based generative models <cit.>. Models under this family are now popular for their outstanding sample quality rivaling GANs with more training stability.
Moreover, DPMs have gained attention from the research community and the public due to their impressive performances in conditional generation tasks, such as text-guided image generation <cit.>, image super-resolution <cit.>, image segmentation <cit.> and conditional speech synthesis <cit.>.
However, compared with latent variable generative models, such as VAEs and GANs, DPMs only obtain a sequence of pixel-level corrupted images that lack high-level semantic information. In light of this, Diffusion autoencoders <cit.> are proposed for representation learning via autoencoding. They are competitive with state-of-the-art models on image reconstruction and numerous downstream tasks. We further exploit the semantic information in diffusion autoencoders to achieve more controllable multi-attribute editing.
Moreover, recent works attempt to improve sampling efficiency; these can be categorized into two types: learning-free strategies such as PNDM and DPM-solver <cit.>, and learning-based strategies such as DiffuseVAE <cit.> and progressive distillation <cit.>. DDIM <cit.>, which serves as our conditional diffusion model, can be viewed as the first-order case of DPM-solver and improves sampling speed.
§ CONCLUSION & DISCUSSION
In conclusion, We further explore the disentanglement capability of Diffusion Autoencoder and its potential to extract higher-level semantics.
Conventional diffusion autoencoders tend to fall short in terms of multi-attribute manipulation. To that end, we propose DiffuseGAE, a novel decoupled representation learning framework capable of better image reconstruction and multi-attribute disentanglement and manipulation.
A group-supervised learning autoencoder (GAE) trained in the latent space is the key to our method, allowing us to perform zero-shot image synthesis using the diffusion autoencoder and disentangled representations.
We believe that future work can explore multi-level semantic disentanglement, similar to StyleGAN <cit.>, to achieve more generic image manipulation.
|
http://arxiv.org/abs/2307.04264v1 | 20230709210516 | Impact of interaction forces in first order many-agent systems for swarm manufacturing | [
"Ferdinando Auricchio",
"Massimo Carraturo",
"Giuseppe Toscani",
"Mattia Zanella"
] | math.AP | [
"math.AP",
"nlin.AO"
] |
|
http://arxiv.org/abs/2307.04556v1 | 20230710134342 | Hairy Kiselev Black Hole Solutions | [
"Yaghoub Heydarzade",
"Maxim Misyura",
"Vitalii Vertogradov"
] | gr-qc | [
"gr-qc"
] |
Hairy Kiselev Black Hole Solutions
Yaghoub Heydarzade^(a)[[email protected]], Maxim Misyura ^(b,c)[[email protected]]
and Vitalii Vertogradov^(c,d)[[email protected]]
(a) Department of Mathematics, Faculty of Sciences, Bilkent University, 06800 Ankara, Turkey
(b) Department of High Energy and Elementary Particles Physics,
Saint Petersburg State University,
University Emb. 7/9, Saint Petersburg, 199034, Russia
(c) Physics department, Herzen state Pedagogical University of Russia,
48 Moika Emb.,
Saint Petersburg 191186, Russia
(d) SPB branch of SAO RAS, 65 Pulkovskoe Rd, Saint Petersburg
196140, Russia
August 12, 2023
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In the realm of astrophysics, black holes exist within non-vacuum cosmological backgrounds, making it crucial to investigate how these backgrounds influence the properties of black holes. In this work,
we first introduce a novel static spherically-symmetric exact solution of Einstein field equations representing a surrounded hairy black hole. This solution represents a generalization of the hairy Schwarzschild solution recently derived using the extended gravitational decoupling method. Then,
we discuss how the new induced modification terms attributed to the primary hairs and various background fields affect the geodesic motion in comparison
to the conventional Schwarzschild case. Although these
modifications may appear insignificant in most cases, we identify specific conditions where they can be
comparable to the Schwarzschild case for some particular background fields.
Keywords: gravitational decoupling, Kiselev black hole, hairy black hole, cosmological fields, geodesics
§ INTRODUCTION
In 2019 the Event Horizon Telescope Collaboration unveiled the very first image of a black hole located at the center of the massive elliptical galaxy M87 <cit.>. More recently, scientists have successfully observed the shadow of the supermassive black hole located in the center of our own galaxy <cit.>. These direct observations provide compelling evidence that black holes are not merely abstract mathematical solutions of the Einstein field
equations but real astrophysical objects. Black holes possess a range of remarkable properties. For instance, they allow for the extraction of energy from their rotation and electric fields <cit.>. In the vicinity of the black hole's event horizon, particles can possess negative energy <cit.>, and black holes can even function as particle accelerators <cit.>.
In the realm of astrophysics, black holes are not isolated objects and they inhabit non-vacuum backgrounds. Some research has focused on investigating the direct local effects of cosmic backgrounds on the known black hole solutions. For instance, Babichev et al. <cit.> have shown that in a universe whose expansion is driven by a phantom scalar field, the mass of a black hole decreases as a result of the accretion of particles of the phantom field onto the central black hole. However, one notes that this is a global impact. To explore the local changes in the spacetime geometry near the central black hole, one should consider a modified metric that incorporates the surrounding spacetime. In this context, an analytical static spherically symmetric solution to the Einstein field equations has been presented by Kiselev <cit.>. This solution generalizes the usual Schwarzschild black hole to a non-vacuum background, and is characterized by an effective equation of state parameter of the surrounding field of the black hole. Hence it can encompass a wide range of possibilities including quintessence, cosmological constant, radiation and dust-like fields. Several properties of the Kiselev black hole have been extensively investigated in the literature [85-90].
Later, this solution has been generalized to the dynamical Vaidya type
solutions <cit.>.
Such generalizations are well justified due to the non-isolated nature of real-world black holes and their existence in non-vacuum backgrounds. Black hole solutions coupled to matter fields, such as the Kiselev solution, are particularly relevant for the study of astrophysical black holes with distortions <cit.>. They also play a significant role in investigating the no-hair theorem <cit.>. This theorem states that a black hole can be described with only three charges (i.e. mass M, electric charge Q and angular momentum a), and it relies on a crucial assumption that the black hole is isolated, meaning that the spacetime is asymptotically flat and free from other sources. However, real-world astrophysical situations do not meet this assumption. For instance, one may refer to black holes in binary systems, black holes surrounded by plasma, or those accompanied by accretion disks or jets in their vicinity. Such situations imply that a black hole may put on different types of wigs, and hence the applicability of the standard no-hair theorem for isolated black holes to these cases becomes questionable <cit.>.
Recently, the minimal geometrical deformations <cit.> and the extended gravitational decoupling
methods <cit.> have been utilized to derive new
solutions from the known seed solutions of Einstein field equations. These techniques have been particularly effective in investigating the violation of the no-hair theorem, the emergence of novel types of hairy black holes, and the exploration of alternative theories of gravity.
Using the extended gravitational decoupling method, Ovalle et al. <cit.> have introduced a generalization of the Schwarzschild black hole that is surrounded by an anisotropic fluid and possesses primary hairs. This new solution has motivated substantial further research generalizing it to hairy Kerr <cit.>, Vaidya and generalized Vaidya <cit.>, regular hairy black holes <cit.> and many others. Indeed, the gravitational decoupling method represents a novel and powerful tool for obtaining new solutions of the Einstein equations.
In the present work, we introduce a novel class of exact solutions to the Einstein field equations, which describe a surrounded
hairy Schwarzschild black hole. This solution serves as a generalization of the previously obtained hairy Schwarzschild solution using the extended gravitational decoupling method. Then, in order to analyze the properties
of the solution, we investigate the effect of the new modification terms, attributed to the primary hairs and various surrounding fields, on the timelike
geodesic motion. Specifically, we compare the effects of modification
terms to the conventional Schwarzschild case. While these modifications
may seem negligible in most scenarios, we identify specific situations
where they can be comparable to the Schwarzschild case, particularly
when specific surrounding fields are present. This analysis sheds light
on the significance of these modifications in certain situations,
providing insights into the behavior of geodesic motion around real
astrophysical black holes.
The structure of the present paper is as follows. In Section 2, we briefly discuss the
hairy Schwarzschild solution by the minimal geometrical deformations and
the extended gravitational decoupling method. In Section 3, we solve the Einstein
field equations in order to obtain the surrounded hairy Schwarzschild black hole. In Section 4, we do analysis of the timelike geodesic
motion. In Section 5, we summarize the new findings and implications of the study.
The system of units c=G=1 will be used throughout the paper.
§ GRAVITATIONAL DECOUPLING AND HAIRY SCHWARZSCHILD BLACK HOLE
The gravitational decoupling method states that one can solve the Einstein field equations with the matter source
T̃_ik=T_ik+Θ_ik ,
where T_ik represents the energy-momentum tensor of a system for
which the Einstein field equations are
G_ik=8π T_ik .
The solution of the equations (<ref>) is supposed to be
known and represents the seed solution.
Then Θ_ik represents an extra matter source which causes additional geometrical deformations.
The Einstein equations for this
new matter source are
G̅_ik=αΘ_ik ,
where α is a coupling constant and G̅_ik is the Einstein tensor of the deformed metric only.
The gravitational decoupling method states that, despite the non-linear nature of the Einstein equations, a straightforward superposition of these two solutions (<ref>) and (<ref>),
G̃_ik≡ G_ik+G̅_ik=8π T_ik+αΘ_ik≡T̃_ik ,
is also a solution of the Einstein field equations.
Now, we briefly describe this method. Let us consider the Einstein field
equations
G_ik=R_ik-1/2g_ikR=8π T_ik .
Let the solution of (<ref>) be a static spherically-symmetric spacetime of the form
ds^2=-e^ν(r)dt^2+e^λ(r)dr^2+r^2 dΩ^2 .
Here dΩ^2=dθ^2+sin^2θ dφ^2 is the metric on the unit two-sphere, and ν(r) and λ(r) are functions of the r coordinate only, which are supposed to be known. The metric (<ref>) is termed the seed metric.
Now, we seek the geometrical deformation of (<ref>) by
introducing two new functions ξ=ξ(r) and η=η(r) by:
e^ν(r)→ e^ν(r)+αξ(r),
e^λ(r)→ e^λ(r)+αη(r).
Here α is a coupling constant. Functions ξ and η are
associated with the geometrical deformations of g_00 and g_11 of the
metric (<ref>) respectively. These deformations are caused by
the new matter source Θ_ik. If one puts ξ(r)≡ 0 then only the g_11 component is deformed, leaving g_00 unperturbed; this is known as the minimal geometrical deformation. It has some drawbacks, for example, when one considers the existence of a stable black hole possessing a well-defined event horizon <cit.>. Deforming both the g_00 and g_11 components is the arena of the extended gravitational decoupling. One should note that this extended decoupling works only for vacuum seed solutions of the Einstein equations and, apart from several special cases, fails in regions where a matter source is present, due to the violation of the Bianchi identities. For example, if one opts for deformations of the Vaidya solution then the extended gravitational decoupling still works, but it fails for the generalized Vaidya due to the presence of an energy exchange between the two matter sources <cit.>.
Substituting (<ref>) into (<ref>) one obtains
ds^2=-e^ν+αξdt^2+(e^λ+αη) dr^2+r^2 dΩ^2 .
The Einstein equations for (<ref>), namely
G̃_ik= 8 πT̃_ik=8π (T_ik+Θ_ik ) ,
give
8π (T^0_0+Θ^0_0)=-1/r^2+e^-β( 1/r^2-β'/r) ,
8π (T^1_1+Θ^1_1)=-1/r^2+e^-β(1/r^2+(ν'+αξ')/r),
8π (T^2_2+Θ^2_2)=1/4e^-β(2(ν”+αξ”)+(ν'+αξ')^2-β'(ν'+αξ')+2(ν'+αξ'-β')/r),
e^β≡ e^λ+αη.
Here the prime sign denotes the partial derivative with respect to the radial coordinate r, and we have 8π( T^2_2+Θ^2_2)=8π(T^3_3+Θ^3_3) due to the spherical symmetry.
From (<ref>) one can define the effective energy density ρ̃ and the effective radial and tangential pressures P̃_r, P̃_t as
ρ̃=-(T^0_0+Θ^0_0),
P̃_r=T^1_1+Θ^1_1,
P̃_t=T^2_2+Θ^2_2.
From (<ref>) one can introduce the anisotropy parameter Π as
Π=P̃_t-P̃_r ,
where a non-vanishing Π indicates the anisotropic behaviour of the fluid T̃_ik.
The equations (<ref>) can be decoupled into two parts[One should remember that this always works for T_ik≡ 0, i.e. the vacuum solution, and for special cases of T_ik if one opts for the Bianchi identities ∇_iT^ik=∇_iΘ^ik=0 with respect to the metric (<ref>); otherwise there is an energy exchange, i.e. ∇_iT̃^ik=0⇒∇_iT^ik=-∇_iΘ^ik≠ 0.]:
the Einstein equations corresponding to the seed solution (<ref>) and the
one corresponding to the geometrical deformations.
If we consider the vacuum seed solution, i.e. T_ik≡ 0 (the Schwarzschild solution), then, by solving the Einstein field equations which correspond to the geometrical deformations, one obtains the hairy Schwarzschild solution <cit.>
ds^2=-(1-2M/r+α e^-r/(M-α l/2)) dt^2+(1-2M/r+α e^-r/(M-α l/2))^-1dr^2+r^2dΩ^2 ,
where α is the coupling constant and l is a new parameter with the dimension of length, associated with a primary hair of the black hole. Here M is the mass of the black hole, related to the Schwarzschild mass ℳ by
M=ℳ+α l/2 .
The impact of α and l on the geodesic motion, gravitational lensing, energy extraction and the thermodynamics has been studied in Refs. <cit.>.
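To make the role of the hairy terms concrete, the following short sketch (Python; the numerical values of M, α and l are illustrative assumptions, not fitted parameters) evaluates this metric function and locates the event horizon numerically; in the limit α→0 the root reduces to the Schwarzschild value r=2M.

import numpy as np
from scipy.optimize import brentq

def f_hairy(r, M, alpha, l):
    # Metric function of the hairy Schwarzschild solution:
    # f(r) = 1 - 2M/r + alpha*exp(-r/(M - alpha*l/2))
    return 1.0 - 2.0 * M / r + alpha * np.exp(-r / (M - alpha * l / 2.0))

M, alpha, l = 1.0, 0.01, 0.5              # illustrative values (assumed)
# The horizon is the root of f(r); bracket it around r = 2M.
r_h = brentq(f_hairy, 1.0, 3.0, args=(M, alpha, l))
print(f"event horizon at r_h = {r_h:.6f} (Schwarzschild value 2M = {2.0 * M})")

Since the exponential correction is positive, f(2M)>0 and the hairy horizon lies slightly inside r=2M for α>0.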
§ SURROUNDED HAIRY SCHWARZSCHILD BLACK HOLE
Recently, the hairy Schwarzschild black hole has been
introduced in <cit.> by using the gravitational decoupling
method.
This solution in the Eddington-Finkelstein coordinates takes the form
ds^2=-(1-2M/r+α e^-r/(M-α l/2)) dv^2+2ε dvdr+r^2dΩ^2 .
Here v is the advanced (ε=+1) or retarded (ε=-1)
Eddington time. In this section, using the approach in <cit.>, we obtain the generalization of this solution
representing a hairy Schwarzschild solution
surrounded by some particular fields motivated by cosmology as in the following
theorem.
Theorem: Considering the extended gravitational
decoupling <cit.> and the principle of additivity and linearity
in the energy-momentum
tensor <cit.> which allows one to get correct limits to the known solutions, the
Einstein field equations admit the following solution in the Eddington-Finkelstein coordinates
ds^2=-(1-2M/r-N/r^3ω+1+α
e^-2r/2M-α l) dv^2+2ε dvdr +r^2dΩ^2
,
where M=ℳ+α l/2 in which ℳ and ℳ are integration constants. The
metric represents a surrounded hairy Schwarzschild solution or equivalently hairy Kiselev solution. We summarize our proof as follows.
Let us consider the general spherically-symmetric spacetime in the form
ds^2=-f(r)dv^2+2ε dvdr+r^2dΩ^2 .
The Einstein tensor components for the metric (<ref>) are
given by
G^0_0=G^1_1=1/r^2(f'r-1+f) ,
G^2_2=G^3_3=1/r^2(rf'+1/2r^2f”) ,
where the prime sign represents the derivative with respect to the radial
coordinate r.
The total energy-momentum tensor should be a combination of
Θ_ik associated to the minimal
geometrical deformations and T_ik associated to the surrounding
fluid as
T̃_ik=αΘ_ik+T_ik .
One should note that here we don't demand the fulfilment of the
condition Θ^ik_;k=T^ik_;k=0.
Instead, we demand that T̃^ik_;k=0, which follows from the Bianchi identity.
The total energy-momentum tensor T̃_ik follows the same symmetries of the Einstein tensor (<ref>) for (<ref>), i.e
T̃^0_0=T̃^1_1 and T̃^2_2=T̃^3_3.
An appropriate general expression for the energy-momentum tensor T_ik of the surrounding
fluid can be <cit.>
T^0_0=-ρ(r) ,
T^i_k= -ρ(r)[ -ξ(1+3ζ)
r^ir_k/r^nr_n+ζδ^i_k] .
From the form of the energy-momentum tensor (<ref>), one can
see that the spatial profile is proportional to the time component,
describing the energy density ρ with arbitrary constants ξ and
ζ depending on the internal structure of the surrounding fields.
The isotropic averaging over the angles results in
<T^i_k>=ξ/3ρδ^i_k=Pδ^i_k ,
since we considered <r^ir_k>=1/3δ^i_kr_n r^n.
Then, we have a barotropic equation of state for the surrounding fluid,
P(r)=ωρ(r) , ω=ξ/3 ,
where P(r) and ω are the pressure and the constant equation of state parameter of the surrounding field, respectively.
Here, one notes that the source T_ik associated to the surrounding fluid should possess the same symmetries as T̃_ik, because Θ_ik, associated to the geometrical deformations, has the same symmetries, namely[One should note that the hairy Schwarzschild solution is supported by an anisotropic fluid Θ^i_k with
Θ^0_0=-ρ̅ , Θ^1_1=P̅_r , Θ^2_2=Θ^3_3=P̅_t ,
where the non-vanishing parameter Π=P̅_t-P̅_r indicates the anisotropic nature of the energy-momentum tensor. So, in order to satisfy the condition Θ^0_0=Θ^1_1, the anisotropic fluid should obey the equation of state P̅_r=-ρ̅.]
Θ^0_0=Θ^1_1=-ρ̅,
Θ^2_2=Θ^3_3=P̅_t.
It means that T^0_0=T^1_1 and T^2_2=T^3_3.
These exactly provide the so-called principle of additivity and linearity considered in <cit.> in order to determine the free parameter ζ of the energy-momentum tensor T_ik of the surrounding fluid as
ζ=-(1+3ω)/(6ω) .
Now, substituting (<ref>) and (<ref>) into
(<ref>), the non-vanishing components of the surrounding
energy-momentum tensor T_ik become
T^0_0=T^1_1=-ρ,
T^2_2=T^3_3=1/2(1+3ω) ρ .
Now, we know the Einstein tensor components (<ref>) and the total energy-momentum tensor (<ref>). Putting all these equations together, the G^0_0=T̃^0_0 and G^1_1=T̃^1_1 components give the following equation
1/r^2( f'r-1+f)=-ρ-αρ̅ .
Similarly, the G^2_2=T̃^2_2 and G^3_3=T̃^3_3 components yield
1/r^2(rf'+1/2f”r^2)=1/2 (1+3ω) ρ+P̅ .
Thus, there are four unknown functions f(r), ρ(r), ρ̅(r) and P̅ that can be determined analytically
by the differential equations (<ref>) and
(<ref>) with the following ansatz
f(r)=g(r)-α l/r+α e^-2r/(2M-α l) .
Then, by substituting (<ref>) into (<ref>)
and (<ref>) and using (<ref>) one obtains the following system of
linear differential equations [Here we apply the Einstein equation Ĝ^i_k=αΘ^i_k to eliminate ρ̅ and P̅. Ĝ^i_k is the Einstein tensor for the spacetime
ds^2=-(1-α l/r+α e^-2r/(2M-α l))dv^2+2ε dvdr+r^2dΩ^2 .
] for the unknowns ρ(r) and g(r)
1/r^2( g'r-1+g)=-ρ,
1/r^2(rg'+1/2g”r^2)=1/2
(1+3ω) ρ .
This second order linear system can be integrated to give the metric function g(r) as
g(r)=1-2ℳ/r-N/r^(3ω+1) ,
and the energy density ρ(r) of the surrounding field as
ρ(r)=-3ω N/r^(3(ω+1)) .
Here ℳ and N are constants of integration representing the
Schwarzschild mass and the surrounding field structure parameter,
respectively.
By putting all these solutions together, we arrive at the surrounded hairy Schwarzschild solution or, equivalently, the hairy Kiselev solution
ds^2=-(1-2M/r-N/r^(3ω+1)+α e^-2r/(2M-α l)) dv^2+2ε dvdr +r^2dΩ^2 ,
where M=ℳ+α l/2.
From (<ref>), one can see
that the weak energy condition
demands
that parameters ω and N have different signs.
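As a quick consistency check of the above, one can verify symbolically that g(r) and ρ(r) satisfy the system (<ref>)-(<ref>). A minimal sketch using sympy (assumed available) is:

import sympy as sp

r, M, N, w = sp.symbols('r M N omega', positive=True)
g = 1 - 2*M/r - N/r**(3*w + 1)       # M stands here for the constant script-M
rho = -3*w*N / r**(3*(w + 1))

# First equation:  (g'*r - 1 + g)/r^2 = -rho
eq1 = sp.simplify((sp.diff(g, r)*r - 1 + g)/r**2 + rho)
# Second equation: (r*g' + g''*r^2/2)/r^2 = (1 + 3*omega)*rho/2
eq2 = sp.simplify((r*sp.diff(g, r) + sp.diff(g, r, 2)*r**2/2)/r**2
                  - (1 + 3*w)*rho/2)
print(eq1, eq2)                      # both expressions should reduce to 0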
§ TIMELIKE GEODESICS
Considering the geodesic motion in spherically-symmetric spacetime, without loss of generality, one
can consider the equatorial plane
θ=π/2.
The geodesic equations for the metric
(<ref>) can be obtained by varying the following action
S= ∫ℒ dτ =1/2∫(-fv̇^2
+2εv̇ṙ+r^2φ̇^2 ) dτ ,
where the dot sign means the derivative with respect to the proper time
τ.
The spacetime (<ref>) is spherically-symmetric and hence, in addition to the time-translation Killing vector ∂/∂ t, there exists another Killing vector φ^i=∂/∂φ, and the corresponding conserved quantity, the angular momentum per unit mass, is given by
Taking into account (<ref>) and (<ref>), one
obtains the following three geodesic equations
φ̇=L/r^2 ,
-1/2f' v̇^2+rφ̇^2-εv̈=0 ,
εr̈=fv̈+f'v̇ṙ ,
where the prime sign denotes the derivative with respect to the radial
coordinate r.
Substituting (<ref>) into
(<ref>), one obtains
fv̈=ε f L^2/r^3-1/2ε f f' v̇^2 .
Now, by applying the timelike geodesic condition g_iku^iu^k=-1 to the equation above, we find
f'v̇ṙ=-1/2ε f'+1/2ε ff'v̇^2-1/2ε f'L^2/r^2 .
Substituting the equation (<ref>) into (<ref>), we arrive at the following general equation of motion for the radial coordinate in terms of the metric function f
r̈= -1/2(1+L^2/r^2) f'+fL^2/r^3 .
Hence, using the obtained metric function (<ref>), one obtains the geodesic equation in the form
r̈ = (-M/r^2+L^2/r^3-3ML^2/r^4)_sch + (-γN/(2r^(γ+1))-(γ+2)NL^2/(2r^(γ+3)))_s + (α/(2M-α l) e^-2r/(2M-α l)+α L^2/((2M-α l) r^2) e^-2r/(2M-α l)-α L^2/r^3 e^-2r/(2M-α l))_h ,
where γ=3ω+1.
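For numerical work the compact form r̈=-1/2(1+L^2/r^2)f'+fL^2/r^3 is the most convenient. The sketch below (Python; all parameter values are assumptions chosen for illustration) integrates a timelike orbit in the surrounded hairy metric; setting N=0 and α=0 recovers the Schwarzschild orbit.

import numpy as np
from scipy.integrate import solve_ivp

def rddot(r, L, M=1.0, N=-0.1, w=1.0, alpha=0.01, l=0.5):
    # r'' = -(1/2)(1 + L^2/r^2) f'(r) + f(r) L^2/r^3 for the hairy Kiselev metric
    s = 2.0 * M - alpha * l
    gam = 3.0 * w + 1.0
    f = 1.0 - 2.0*M/r - N/r**gam + alpha*np.exp(-2.0*r/s)
    fp = 2.0*M/r**2 + gam*N/r**(gam + 1.0) - (2.0*alpha/s)*np.exp(-2.0*r/s)
    return -0.5*(1.0 + L**2/r**2)*fp + f*L**2/r**3

L = 4.0                                   # angular momentum per unit mass (assumed)
sol = solve_ivp(lambda tau, y: [y[1], rddot(y[0], L)],
                (0.0, 500.0), [10.0, 0.0], rtol=1e-9, atol=1e-12)
print(f"r(tau) ranges over [{sol.y[0].min():.3f}, {sol.y[0].max():.3f}]")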
From (<ref>), one can observe the following interesting points.
* The three terms in the first line are the same as those of the standard Schwarzschild black hole, in which the first term represents the Newtonian gravitational force, the second term represents the repulsive centrifugal force, and the third term is the relativistic correction of Einstein's general relativity which accounts for the perihelion precession.
* The terms in the second line are new correction terms due to the presence of the background field which surrounds the hairy Schwarzschild black hole; their first term is similar to the gravitational potential term in the first brackets, while their second term is similar to the relativistic correction of general relativity. Then, regarding (<ref>), one realizes that for the more realistic non-empty backgrounds, the geodesic equation of any object depends not only on the mass of the central object of the system and the conserved angular momentum of the orbiting body, but also on the nature of the background field.
The new correction terms may in general be small in comparison to their Schwarzschild counterparts (the first and the third term in the first brackets). However, one can show that there are situations in which these terms become comparable to them. One can also observe, by using the equation (<ref>), that for ω∈ (-1/3, 0) the Newtonian gravitational force is strengthened by the corrections caused by the surrounding field, while for other values of ω the force is weakened.
If we consider the same question regarding the second term, which corresponds to the relativistic correction of Einstein's general relativity, then for values ω∈(-1, 0) the force is strengthened, while it is weakened for other values of ω. The surrounding fluid does not contribute to the repulsive centrifugal force.
* The terms in the third line represent modifications by
the primary hairs α and l.
The second term here corresponds to the
relativistic correction of Einstein's general relativity.
The third term here
represents a new correction by the primary hairs to the
repulsive centrifugal force.
One can define the effective distance D at which this force would disappear through the relation A_1/A_r≈ 1, where A_r is the repulsive centrifugal force of the Schwarzschild black hole and A_1 is the correction to this force caused by the primary hairs. This distance is given by
D= (M-α l/2 )lnα .
Considering minimal geometrical deformations, α must be small, i.e. α≪ 1. So, according to (<ref>), the correction caused by the primary hairs can weaken the repulsive centrifugal force but cannot cancel it, and hence this correction is negligible in general.
The first term in (<ref>) contributes a correction to the Newtonian potential. This can be seen by means of the effective potential V_eff(r), which takes the form
V_eff(r)=Φ(r)+L^2/(2r^2)+Φ(r)L^2/r^2 ,
where Φ(r) is related to the g_00 metric component via the relation
g_00=-(1+2Φ) .
By comparing this with (<ref>), we conclude that
Φ(r)=-M/r+N/(2r^(3ω+1))-α e^-r/(2M-α l) .
Now, taking the derivative of V_eff in (<ref>)
with respect to r
d^2r/dτ^2=-dV_eff/dr ,
we arrive at the equation of motion (<ref>).
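One can check in one line that the two forms are mutually consistent. Writing f=1+2Φ, so that f'=2Φ', the intermediate step reads
-dV_eff/dr = -Φ'+L^2/r^3-Φ'L^2/r^2+2Φ L^2/r^3 = -1/2(1+L^2/r^2)f'+fL^2/r^3 ,
which is exactly the right-hand side of the general radial equation of motion obtained above.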
In order to better understand the nature of the solution obtained in (<ref>), one can consider the following two groups of forces and investigate their behaviour for various sets of surrounding fields and primary hair parameters:
G≡ M/r^2+γN/(2r^(γ+1))-α/(2M-α l) e^-2r/(2M-α l) ,
H≡ 3ML^2/r^4+(γ+2)NL^2/(2r^(γ+3))-α L^2/((2M-α l) r^2) e^-2r/(2M-α l) ,
where the G group represents the Newtonian gravitational force with its modifications and the H group corresponds to the relativistic corrections of general relativity.
One can ask whether the new modifications caused by the surrounding fields and primary hairs can cancel the original forces or change their effect, i.e. change their sign. Hence, we are interested in possible cases in which, for a set of parameters ω, α and l, the G and H functions take negligible values or change their signs.
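As a practical illustration, the sketch below (Python; it adopts the stiff-fluid parameters N=-5.186 and l=1.567 used in the next subsection, while α=0.01 and L=4 are assumed values) evaluates G and H on a radial grid and tests for sign changes outside r=2M:

import numpy as np

def G_H(r, L, M=1.0, N=-5.186, w=1.0, alpha=0.01, l=1.567):
    # Force groups defined above: G (Newtonian-like), H (relativistic corrections)
    s = 2.0 * M - alpha * l
    gam = 3.0 * w + 1.0
    E = np.exp(-2.0 * r / s)
    G = M/r**2 + gam*N/(2.0*r**(gam + 1.0)) - (alpha/s)*E
    H = (3.0*M*L**2/r**4 + (gam + 2.0)*N*L**2/(2.0*r**(gam + 3.0))
         - (alpha*L**2/(s*r**2))*E)
    return G, H

r = np.linspace(2.0, 10.0, 2000)
G, H = G_H(r, L=4.0)
print("G changes sign:", bool(np.any(np.sign(G[1:]) != np.sign(G[:-1]))))
print("H changes sign:", bool(np.any(np.sign(H[1:]) != np.sign(H[:-1]))))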
In the following subsections, we consider some specific fields possessing particular equations
of state motivated by cosmology.
However, we can already note the following facts, which can be derived from (<ref>). Considering the first two terms, for -1<ω<0 they are always positive, whereas the second term is negative for positive ω, so a sign change of H can be expected. Let us consider two particular cases:
* The radiation case ω=1/3. In this case, |N|≤ M^2 and the first two terms become negative in the region 0≤ r≤ 2M/3, which lies inside the event horizon. Because the third term in (<ref>) is negligible, we can conclude that H is always positive outside the event horizon.
* The stiff fluid ω=1. In this case we can put N=-M^4, so that f(r=M)>0. Thus, the event horizon is located at a radius smaller than M. However, the first two terms in (<ref>) become negative at r=M, and H<0 outside the event horizon.
§.§ Stiff Fluid
We begin our analysis of timelike geodesics with the surrounding fluid having the average equation of state of a stiff fluid,
P=ρ ⇔ ω =1.
As mentioned previously, the presence of the surrounding field has a weakening effect on the forces given by (<ref>) and (<ref>).
From (<ref>), one observes that N must be negative to maintain a positive energy density for the surrounding fluid.
Our objective is to determine whether the corrections by the surrounding field and primary hairs can cancel out the initial Schwarzschild forces or can potentially change their sign, thereby altering the direction of the forces.
In Figure <ref>(a), we plot three curves corresponding to the usual Schwarzschild, Kiselev and hairy Kiselev black holes. We observe that the function G for the hairy Kiselev black hole is negligible but positive near the event horizon r=2ℳ for the given specific set of parameters. However, in the case of the purely Kiselev black hole (i.e. α=0), the function G is negative in the interval 2≤ r ≤ 2.15. One notes that in the purely Kiselev case we have a naked singularity (NS) (i.e. g_00≠ 0).[For this set of parameters g_00 is always negative, i.e. there are no positive roots of the equation g_00=0 for r∈(0,+∞). For this reason, we have concluded that r=0 represents a NS, because the Kretschmann scalar diverges at r=0. By a NS we mean that the r=0 singularity is not covered by an event horizon. The question of future-directed non-spacelike geodesics which terminate at this singularity in the past is not considered within this paper.]
Figure <ref>(b) shows that the function G becomes negative in the vicinity of the event horizon (i.e. in the region 2≤ r ≤ 2.02) for the hairy Kiselev black hole for the set of parameters N = -5.186, l = 1.567. To obtain a larger distance from the event horizon where the function G can become negative, one should increase |N| and l; however, in this case ℳ∼α l/2 and the deformation in (<ref>) is no longer minimal. So we can conclude that G might be negative outside the event horizon, but only in its vicinity.
Figure <ref>(c) compares the function H for the Schwarzschild, Kiselev and hairy Kiselev cases for the values considered in Figure 1(b). In order to better understand the influence of a primary hair on the geodesic motion, we set α=0.1 so as to consider larger values of l.
Figures <ref>(a) and <ref>(b) show how G changes with different values of l and N. One can see that there are regions where it becomes negative. However, from these pictures one cannot tell whether they correspond to a black hole or a naked singularity. For this purpose, one should impose the condition of the existence of an event horizon. Figure <ref>(c) shows how G changes in this case.
§.§ Radiation
Here we consider the surrounding field having the average equation of state of a radiation field,
P=ρ/3 ⇔ ω =1/3 .
In this case, the N parameter must be negative, and akin to the previous
case, the surrounding radiation field and primary hairs weaken the forces in
(<ref>)
and (<ref>).
Figure <ref>(a) shows three curves for the pure Schwarzschild, Kiselev and hairy Kiselev black holes for the parameter values N=-3.729 and l=4. For the case of a surrounding radiation-like field, one observes that the spacetime is akin to the hairy Reissner-Nordstrom black hole, with the parameter N playing the role of the black hole's electric charge, i.e. N=-Q^2. So, in the purely Reissner-Nordstrom case, the curve corresponds to a naked singularity because ℳ^2<Q^2. In comparison to the stiff fluid case, one notes that the parameters l and N must take greater values to ensure that the function G is negligible.
In Figure <ref>(b), we plot curves showing that hairs can affect the geodesic motion and hence G can become negative in the vicinity of the event horizon (in the region 2≤ r ≤ 2.042). In this case, we set N=-3.889 and l=4.16. One can see that the smaller the value of ω, the larger the value of l required to ensure negative values of G. For example, if we take this value of l (i.e. l=4.16), then, in the case of the stiff fluid, we have N=-15.557 (we obtain this value by demanding that the event horizon is located at r=ℳ), and the function G is negative in the region 2≤ r ≤ 2.534. Thus, one can see that the region where negative values of G are possible shrinks when ω tends to zero.
Figure <ref>(c) shows the function H for the values of N and l as in the previous figure. Similar to the stiff fluid case, we present several plots for α=0.1.
Figures <ref>(a) and <ref>(b) show that G becomes negative at larger distances in comparison to the stiff fluid case. This apparently contradicts our previous statement that the smaller the ω we consider, the smaller the region where G becomes negative. However, one notes that this is a case of a naked singularity, because if one imposes the extra condition of the existence of an event horizon, then for this case (α=0.1) the function G is always positive outside the horizon, as can be seen from Figure <ref>(c).
§.§ Dust
For a dust-like field we have
P=0 ⇔ ω=0 ,
and we can show analytically that the function G is positive near the event horizon as follows. At the horizon we have
(2M+N)/r=1+α e^-r/ℳ .
Substituting this into (<ref>) and considering the event horizon at r=2ℳ, one obtains
1/(4ℳ)-α/(4ℳ e^2)>0 .
So, for physically relevant values of α, l and N, the
function G is positive outside the event horizon.
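This positivity can also be checked numerically. In the sketch below (illustrative values of ℳ, α and l are assumed) N is fixed by the horizon condition f(2ℳ)=0 and G is then evaluated at the horizon; the result reproduces the combination 1/(4ℳ)-α/(4ℳe^2) derived above.

import numpy as np

Ms, alpha, l = 1.0, 0.01, 0.5               # illustrative values (assumed)
M = Ms + alpha * l / 2.0                    # M = script-M + alpha*l/2
rh = 2.0 * Ms                               # horizon radius in the dust case
s = 2.0 * M - alpha * l                     # equals 2*script-M
N = rh * (1.0 + alpha * np.exp(-rh / Ms)) - 2.0 * M    # from f(rh) = 0, gamma = 1
G_h = M/rh**2 + N/(2.0*rh**2) - (alpha/s)*np.exp(-2.0*rh/s)
print(G_h, 1.0/(4.0*Ms) - alpha/(4.0*Ms*np.e**2))      # the two numbers agree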
Figure <ref>(a) compares three curves: the hairy Kiselev black hole, the purely Kiselev case with α=0, and the Schwarzschild case with α=0 and N=0. These curves are plotted for l=0.5 , N=-0.115.
Figure <ref>(b) is plotted for the same values of the black hole parameters and shows the behaviour of the function H. For ω≥ 0 the function H is positive, and its behaviour is shown in Figure <ref>(c). For other values of ω we could not find conditions (at small values of α) under which H becomes negative.
§.§ Quintessence
For a quintessence-like field, the equation of state is
P=-2/3ρ ⇔ ω=-2/3 .
In this case, the parameter N must be positive, as one can see from (<ref>). The function G can be negligible in the vicinity of the horizon only if either N or L is negative. However, G can take negative values, although at large distances from the event horizon. As can be seen from Figure <ref>(a), for the values l=0.05 , N=0.028, the function G for the Kiselev black hole becomes negative at r>8.553. The effect of N and α on the function H for these values is negligible, becoming considerable only at large distances, as one can see from Figure <ref>(b).
§.§ De Sitter background
In this case, the surrounding fluid has the effective equation of state
P=-ρ ⇔ ω=-1 .
Like in the previous case, the parameter N must be positive, and the function G is positive near the event horizon. Figure <ref>(a) shows that the function G for N=0.016 , l=0.01 becomes negative for r>3.841. The function H behaves very similarly in all three cases, as can be seen in Figure <ref>(b).
Figure <ref> shows the behaviour of G at α=0.1, with the extra condition of the existence of an event horizon. Here, panel <ref>(a) is plotted for a positive cosmological constant and panel <ref>(b) for a negative cosmological constant, i.e. the anti-de Sitter case.
§.§ Phantom field
The equation of state of a phantom-like field is given by
P=-4/3ρ ⇔ ω = -4/3 .
The parameter N must be positive and, as can be seen in Figure <ref>(a), the function G takes negative values in the region r>3.056 for l=0.05 , N=0.007. Figure <ref>(b) shows that, for the same values of l and N, the function H can be negative in the region r>5.433.
§ CONCLUSION
Inspired by the fact that black holes inhabit non-vacuum cosmological backgrounds, we have presented a new solution to the Einstein field equations representing a surrounded hairy Schwarzschild black hole. This solution takes into account both the primary hair and the surrounding fields (represented by an energy-momentum tensor following the linearity and additivity condition <cit.>) which affect the properties of the black hole. The effect of the corresponding contributions on timelike geodesics has been discussed. We find that the new induced modifications can be considerable in certain cases.
In particular, we investigate how
the specified surrounding fields and primary hairs affect the Newtonian and
perihelion precession terms. Our observations are as follows.
* The surrounding fields with -1/3<ω<0 contribute positively to the Newtonian term, i.e. they strengthen the gravitational attraction.
* The new corrections to the Newtonian term might be of the same order or even greater for all other cases if one considers a naked singularity.[Considering positive ω, the weak energy condition demands negative values of N. This restriction, for example in the dust case, requires |N|<2M; otherwise the metric function f(r) is always positive for all ranges of r, since all four terms are positive, and hence there is no event horizon. In the case of radiation, i.e. ω=1/3, the NS occurs if M^2+N<0, which requires large values of |N|. Hence one observes that for larger values of |N| the function |G| becomes larger, but this implies the violation of the condition required for the existence of an event horizon.]
* In the case that the solution represents a black hole, the new corrections can be of the same order or even greater than the Newtonian term in the vicinity of the event horizon for ω>0.
* For ω<-1/3, i.e. for effectively repulsive fluids akin to dark energy models, the correction terms dominate far from the event horizon, mainly near the cosmological horizon.
The Schwarzschild black hole is an idealized vacuum solution, and it is important to consider how it gets deformed in the presence of matter fields. Another crucial factor to consider is the impact of the surrounding environment, particularly on the shadow of a black hole in a cosmological background, which serves as a potential cosmological ruler <cit.>.
The solution presented in this work can be further investigated to study the shadow of a hairy Schwarzschild black hole in various cosmological backgrounds, in order to find out how the anisotropic fluid can affect the observational properties; this is a plan for our upcoming investigations.
It is worthwhile to mention that, by applying the Newman-Janis <cit.> and Azreg-Ainou <cit.> algorithms, one can obtain the rotating version of the solution presented here. Also, the investigation of quasi-normal modes, thermodynamic properties, the accretion process and gravitational lensing of these solutions can help us to better understand the nature of these objects.
Acknowledgments: V. Vertogradov and M. Misyura thank the RSF grant No. 22-22-00112 for financial support. The work was performed within the SAO RAS state assignment in the part "Conducting Fundamental Science Research".
bib:9 The Event Horizon Telescope Collaboration, First M87
Event Horizon Telescope results.
I. The shadow of the supermassive black hole, Astrophys.
J. Lett. 875 (2019) L1
bib:10 The Event Horizon Telescope Collaboration, First M87
Event Horizon Telescope results.
II. Array and instrumentation, Astrophys.
J. Lett. 875 (2019)
L2
bib:11 The Event Horizon Telescope Collaboration, First M87 Event Horizon Telescope results. III. Data processing and calibration, Astrophys. J. Lett.
875 (2019) L3
bib:ehtc2022 Akiyama, K. et al. [Event Horizon Telescope Collaboration].
First Sagittarius A* Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole in the Center of the Milky Way. Astrophys. J. Lett. 2022, 930, L12.
bib:pen R. Penrose and R. M. Floyd, Extraction of Rotational Energy from a Black Hole Nature Physical Science
229, 177 (1971).
bib:zaslav O. B. Zaslavskii, Energy extraction from extremal charged black holes due to the Banados-Silk-West effect Phys. Rev. D 86, 124039 (2012)
[arXiv:1207.5209]
bib:mp Lucas Timotheo Sanches, Mauricio Richartz, Energy extraction from non-coalescing black hole binaries, Phys. Rev. D 104, 124025 (2021), arXiv:2108.12408 [gr-qc]
bib:charged_vaidya Vitalii Vertogradov, Extraction energy from
charged Vaidya black hole via the Penrose process 2023 Commun. Theor.
Phys. 75 045404
bib:zaslav_rn O. B. Zaslavskii, Negative energy states in the
Reissner-Nordstrom metric Mod. Phys. Lett. A 36,
2150120 (2021), arXiv:2006.02189 [gr-qc].
bib:grib Grib A.A., Pavlov Yu.V., Vertogradov V.D., Geodesics with negative energy in the ergosphere of rotating black holes, Modern Physics Letters A, Vol. 29, Iss. 20 (2014), pp. 14501-14510. [arXiv:1304.7360]
bib:ver_negative V. Vertogradov, The Negative Energy in Generalized Vaidya Spacetime Universe 6(9), 155 (2020),
arXiv:2209.10976 [gr-qc]
bib:ver_ker_negative Vertogradov V.D., Geodesics with negative energy in the ergosphere of rotating black holes, Gravitation and Cosmology, Vol. 21, Iss. 2 (2015), pp. 171-174. [arXiv:2210.04674]
babi E. Babichev, V. Dokuchaev, Yu. Eroshenko, Black Hole Mass Decreasing due to Phantom Energy Accretion Phys. Rev.
Lett. 93, 021102 (2004).
bib:bsw M. Banados, J. Silk, S.M. West, Kerr Black Holes as Particle Accelerators to Arbitrarily High Energy Phys. Rev. Lett. 103,
111102 (2009) [arXiv:0909.0169].
bib:zaslav_anti O. B. Zaslavskii, Acceleration of particles by
black holes as a result of deceleration: Ultimate manifestation of kinematic nature of BSW effect Phys. Lett. B 712 (2012)
161 [arXiv:1202.0565]
bib:zaslav_dirty O. B. Zaslavskii, Circular orbits and
acceleration of particles by near-extremal dirty rotating black holes: general approach.
Class.
Quantum Grav. 29 (2012) 205004 [arXiv:1201.5351 [gr-qc]]
bib:grib_complex Grib, A. A., Pavlov Y. V. (2020) Rotating
black holes as sources of high energy particles. Physics of Complex Systems, 1 (1), 40-49.DOI: 10.33910/2687-153X-2020-1-1-40-49
bib:ver_complex Vertogradov, V.D. (2023) On particle
collisions during gravitational collapse of Vaidya spacetimes. Physics of Complex Systems, 4 (1), 17-23.
bib:joshi_col Mandar Patil, Tomohiro Harada, Ken-ichi Nakao,
Pankaj S. Joshi, Masashi Kimura, Infinite efficiency of collisional Penrose process: Can over-spinning Kerr geometry be the source of ultra-high-energy cosmic rays and neutrinos? Phys. Rev. D 93, 104015 (2016) [arXiv:1510.08205 [gr-qc]]
bib:japan_col T. Harada, H. Nemoto, U. Miyamoto, Upper limits
of particle emission from high-energy collision and reaction near a maximally rotating Kerr black hole Phys.Rev
.D86:024027,2012 [arXiv:1205.7088]
bib:50 V. V. Kiselev, Quintessence and black holes, Class. Quant. Grav. 20, 1187 (2003).
bib:kvaidya Heydarzade, Y., Darabi, F. Surrounded Vaidya
solution by cosmological fields. Eur. Phys. J. C 2018,
78, 582.
bib:kbvaidya Heydarzade, Y.; Darabi, F. Surrounded
Bonnor-Vaidya solution by cosmological fields. Eur. Phys. J. C
2018, 78, 1004.
bib:kevaidya Y. Heydarzade, F. Darabi Surrounded
Vaidya black holes: apparent horizon properties, Eur. Phys. J. C (2018) 78:342.
bib:51 R. Geroch and J.B. Hartle, Distorted black holes J. Math. Phys. 23, 680
(1982).
bib:52 S. Fairhurst, B. Krishnan, Distorted Black Holes with Charge Int. J. Mod. Phys. D10, 691
(2001).
bib:53 S.R. Brandt, E. Seidel, Evolution of distorted rotating black holes. III. Initial data Phys. Rev. D54, 1403 (1996).
bib:54 M. Ansorg, D. Petroff, Black holes surrounded by uniformly rotating rings Phys. Rev. D 72.2, 024019 (2005).
bib:55 S. W. Hawking, Black holes in general relativity Commun. Math. Phys. 25, 152 (1972).
bib:56 J.D. Brown and V. Husian, Black holes with short hair Int. J. Mod. Phys. D6, 563
(1997).
bib:57 S.Droz, M. Heusler, N. Straumann, New black hole solutions with hair Phys. Lett. B, 268
(3-4), 371 (1991).
bib:58 J. Barranco, A. Bernal, J.C. Degollado, A. Diez-Tejedor, M. Megevand, M. Alcubierre, D. Núñez, O. Sarbach, Schwarzschild Black Holes can Wear Scalar Wigs, Phys. Rev. Lett. 109, 081102 (2012).
bib:noh4 J.D. Bekenstein, Novel "no-scalar-hair" theorem for
black holes Phys. Rev. D 51(12), R6608 (1995).
bib:hok_hair Hawking, S.W.; Perry, M.J.; Strominger, A. Soft Hair on Black Holes. Phys. Rev. Lett. 2016, 116, 231301.
bib:mgd1 J. Ovalle, Extending the geometric deformation: New
black hole solutions. Int. J. Mod. Phys. Conf. Ser., 41, 1660132 (2016)
[arXiv:1510.00855 [gr-qc]]
bib:mgd2 Roberto Casadio, Jorge Ovalle, Roldao da Rocha, The
Minimal Geometric Deformation Approach Extended. Class. Quantum Grav. 32 (2015) 215020 [arXiv:1503.02873 [gr-qc]]
bib:mgd3 Ovalle, J.; Casadio, R.; Rocha, R.D.; Sotomayor, A.; Stuchlik, Z. Black holes by gravitational decoupling. Eur. Phys. J. C 2018,
bib:gd1 Ovalle, J. Decoupling gravitational sources in general relativity: From perfect to anisotropic fluids. Phys. Rev. 2017, D95, 104019.
bib:gd2 Ovalle, J. Decoupling gravitational sources in general relativity: The extended case. Phys. Lett. B 2019, 788, 213.
bib:gd3 Contreras, E.; Ovalle, J.; Casadio, R. Gravitational decoupling for axially symmetric systems and rotating black holes. Phys. Rev. D 2021, 103, 044020.
bib:bh1 Ovalle, J.; Casadio, R.; Contreras, E.;
Sotomayor, A. Hairy black holes by gravitational decoupling. Phys. Dark Universe 2021,
bib:hairy_kerr S. Mahapatra and I. Banerjee, Rotating hairy black holes and thermodynamics from gravitational decoupling, Phys. Dark Univ. 39 (2023) 101172 [arXiv:2208.05796].
bib:vermax Vitalii Vertogradov, Maxim Misyura "Vaidya and Generalized Vaidya
Solutions by Gravitational Decoupling"Universe 2022, 8(11), 567;
doi:10.3390/universe8110567 [arXiv:2209.07441 [gr-qc]]
bib:vermax2 Vitalii Vertogradov, Maxim Misyura, The Regular
Black Hole by Gravitational Decoupling.
Phys. Sci.
Forum 2023, 7(1), 27
bib:ovalle_regular Jorge Ovalle, Roberto Casadio, Andrea
Giusti, Regular hairy black holes through Minkowski deformation.
[arXiv:2304.03263 [gr-qc]]
bib:85 M. Jamil, S. Hussain, B. Majeed, Dynamics of particles around a Schwarzschild-like black hole in the presence of quintessence and magnetic field, Eur. Phys. J. C 75, 1, 24 (2015).
bib:86 I. Hussain, S. Ali, Marginally stable circular orbits in the Schwarzschild black hole surrounded by quintessence matter, Eur. Phys. J. Plus 131, 8, 275 (2016).
bib:87 B. Malakolkalami, K. Ghaderi, The null geodesics of the Reissner-Nordstrom black hole surrounded by quintessence, Mod. Phys. Lett. A 30, 10, 1550049 (2015).
bib:88 S. Fernando, Schwarzschild black hole surrounded by quintessence: Null geodesics, Gen. Rel. Grav. 44, 7, 1857 (2012).
bib:89 R. Uniyal, N.C. Devi, H. Nandan, K.D. Purohit, Geodesic Motion in Schwarzschild Spacetime Surrounded by Quintessence, Gen. Rel. Grav. 47, 2, 16 (2015).
bib:90 S. Fernando, S. Meadows, K. Reis, Null Trajectories and Bending of Light in Charged Black Holes with Quintessence, Int. J. Theor. Phys. 54, 10, 3634 (2015).
bib:geod Ramos, A.; Arias, C.; Avalos, R.; Contreras,E. Geodesic motion
around hairy black holes. Annals Phys. 2021, 431,
168557.
bib:lens Sohan Kumar Jha, Anisur Rahaman, Gravitational
lensing by the hairy Schwarzschild black hole. arXiv:2205.06052[gr-qc]
bib:energy Zhen Li, Faqiang Yuan, Energy extraction via Comisso-Asenjo mechanism from rotating hairy black hole. [arXiv:2304.12553 [gr-qc]]
bib:thermo Cavalcanti, R.T.; Alves, K.d.S.; da Silva, J.M.H.
Near horizon thermodynamics of hairy black holes from gravitational
decoupling.
Universe 2022, 8, 363.
bib:ver_thermo Vertogradov V. D., Kudryavcev D. A. On the temperature of hairy black holes. Physics of Complex Systems. Vol. 4, no. 2, 2023
bib:106 Y. Heydarzade, F. Darabi, Black Hole Solutions Surrounded by Perfect Fluid in Rastall Theory, Phys. Lett. B 771, 365 (2017).
bib:ruller Oleg Yu. Tsupko, Zuhui Fan, Gennady S.
Bisnovatyi-Kogan, Black hole shadow as a standard ruler in cosmology. Classical and Quantum Gravity, 37, 065016 (2020) [arXiv:1905.10509 [gr-qc]]
bib:71r E. T. Newman and A. I. Janis, "Note on the Kerr spinning particle metric," J. Math. Phys. 6 (1965) 915-917.
bib:73r M. Azreg-Ainou, "Regular and conformal regular cores for static and rotating solutions," Phys. Lett. B 730 (2014) 95-98, [arXiv:1401.0787 [gr-qc]].
bib:74r M. Azreg-Ainou, "From static to rotating to conformal static solutions: Rotating imperfect fluid wormholes with(out) electric or magnetic field," Eur. Phys. J. C 74 no. 5, (2014) 2865, [arXiv:1401.4292 [gr-qc]].
|
http://arxiv.org/abs/2307.04487v1 | 20230710112041 | The abundance and excitation of molecular anions in interstellar clouds | [
"M. Agundez",
"N. Marcelino",
"B. Tercero",
"I. Jimenez-Serra",
"J. Cernicharo"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Molecular anions in the ISM
Agúndez et al.
Instituto de Física Fundamental, CSIC, Calle Serrano 123, E-28006 Madrid, Spain
[email protected]
Observatorio Astronómico Nacional, IGN, Calle Alfonso XII 3, E-28014 Madrid, Spain
Observatorio de Yebes, IGN, Cerro de la Palera s/n, E-19141 Yebes, Guadalajara, Spain
Centro de Astrobiología (CSIC/INTA), Ctra. de Torrejón a Ajalvir km 4, 28806, Torrejón de Ardoz, Spain
We report new observations of molecular anions with the Yebes 40m and IRAM 30m telescopes toward the cold dense clouds TMC-1 CP, Lupus-1A, L1527, L483, L1495B, and L1544. We detected for the first time C_3N^- and C_5N^- in Lupus-1A and C_4H^- and C_6H^- in L483. In addition, we report new lines of C_6H^- toward the six targeted sources, of C_4H^- toward TMC-1 CP, Lupus-1A, and L1527, and of C_8H^- and C_3N^- in TMC-1 CP. Excitation calculations using recently computed collision rate coefficients indicate that the lines of anions accessible to radiotelescopes run from subthermally excited to thermalized as the size of the anion increases, with the degree of departure from thermalization depending on the H_2 volume density and the line frequency. We noticed that the collision rate coefficients available for the radical C_6H cannot explain various observational facts, which calls for a revision of the collision data for this species. The observations presented here, together with observational data from the literature, are used to model the excitation of interstellar anions and to constrain their abundances. In general, the anion-to-neutral ratios derived here agree within 50 % (a factor of two at most) with literature values, when available, except for the C_4H^-/C_4H ratio, which shows higher differences due to a revision of the dipole moment of C_4H. From the set of anion-to-neutral abundance ratios derived, two conclusions can be drawn. First, the C_6H^-/C_6H ratio shows a tentative trend in which it increases with increasing H_2 density, as expected on theoretical grounds. And second, it is incontestable that the larger the molecular size, the higher the anion-to-neutral ratio, which supports a formation mechanism based on radiative electron attachment. Nonetheless, calculated rate coefficients for electron attachment to the medium size species C_4H and C_3N are probably too high and too low, respectively, by more than one order of magnitude.
The abundance and excitation of molecular anions
in interstellar cloudsBased on observations carried out with the Yebes 40m telescope (projects 19A003, 20A014, 20A016, 20B010, 20D023, 21A006, 21A011, 21D005, 22B023, and 23A024) and the IRAM 30m telescope. The 40m radio telescope at Yebes Observatory is operated by the Spanish Geographic Institute (IGN; Ministerio de Transportes, Movilidad y Agenda Urbana). IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain).
M. Agúndez1, N. Marcelino2,3, B. Tercero2,3, I. Jiménez-Serra4, J. Cernicharo1
Received; accepted
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The discovery of negatively charged molecular ions in space has been a relatively recent finding <cit.>. To date the inventory of molecular anions detected in interstellar and circumstellar clouds consists of four hydrocarbon anions, C_4H^- <cit.>, C_6H^- <cit.>, C_8H^- <cit.>, and C_10H^- <cit.>, and four nitrile anions, CN^- <cit.>, C_3N^- <cit.>, C_5N^- <cit.>, and C_7N^- <cit.>. The astronomical detection of most of these species has been possible thanks to the laboratory characterization of their rotational spectrum <cit.>. However, the astronomical detection of C_5N^-, C_7N^-, and C_10H^- is based on high level ab initio calculations and astrochemical arguments <cit.>. In fact, in the case of C_10H^- it is not yet clear whether the identified species is C_10H^- or C_9N^- <cit.>.
The current situation is such that there is only one astronomical source where the eight molecular anions have been observed, the carbon-rich circumstellar envelope IRC +10216 <cit.>, while the first negative ion discovered, C_6H^- <cit.>, continues to be the most widely observed in astronomical sources <cit.>.
Observations indicate that along each of the series C_2n+2H^- and C_2n-1N^- (with n = 1, 2, 3, 4) the anion-to-neutral abundance ratio increases with increasing molecular size <cit.>. This is expected according to the formation mechanism originally proposed by <cit.>, which involves the radiative electron attachment to the neutral counterpart of the anion <cit.>. However, the efficiency of this mechanism in interstellar space has been disputed <cit.>, and alternative formation mechanisms have been proposed <cit.>. Currently there is not yet consensus on the formation mechanism of molecular anions in space (see discussion in ). Moreover, detections of negative ions other than C_6H^- in interstellar clouds are scarce, and thus our view of the abundance of the different anions in interstellar space is statistically very limited.
Apart from the anion-to-anion behavior it is also interesting to know which is the source-to-source behavior. That is, how does the abundance of anions behave from one source to another. Based on C_6H^- detections, the C_6H^-/C_6H abundance ratio seems to increase with increasing H_2 volume density <cit.>, which is expected from chemical considerations (e.g., ; see also Sect. <ref>). However, most anion detections in interstellar clouds have been based on one or two lines and their abundances have been estimated assuming that their rotational levels are populated according to local thermodynamic equilibrium (LTE), which may not be a good assumption given the large dipole moments, and thus high critical densities, of anions. Recently, rate coefficients for inelastic collisions with H_2 or He have been calculated for C_2H^- <cit.>, C_4H^- <cit.>, C_6H^- <cit.>, CN^- <cit.>, C_3N^- <cit.>, and C_5N^- <cit.>, which makes it possible to study the excitation of anions in the interstellar medium.
Here we report new detections of anions in interstellar sources. Concretely, we detected C_3N^- and C_5N^- in Lupus-1A and C_6H^- and C_4H^- in L483. We also present the detection of new lines of C_4H^-, C_6H^-, C_8H^-, C_3N^-, and C_5N^- in interstellar clouds where these anions have been already observed. We use the large observational dataset from this study, together with that available from the literature, to review the observational status of anions in interstellar clouds and to carry out a comprehensive analysis of the abundance and excitation of anions in the interstellar medium.
§ OBSERVATIONS
§.§ Yebes 40m and IRAM 30m observations from this study
The observations of cold dark clouds presented in this study were carried out with the Yebes 40m and IRAM 30m telescopes. We targeted the starless core TMC-1 at the cyanopolyyne peak position (hereafter TMC-1 CP)[TMC-1 CP: α_J2000=4^ h 41^ m 41.9^ s and δ_J2000=+25^∘ 41' 27.0”], the starless core Lupus-1A[Lupus-1A: α_J2000=15^ h 42^ m 52.4^ s and δ_J2000=-34^∘ 07' 53.5”], the prestellar cores L1495B[L1495B: α_J2000=4^ h 15^ m 41.8^ s and δ_J2000=+28^∘ 47' 46.0”] and L1544[L1544: α_J2000=5^ h 4^ m 18.0^ s and δ_J2000=+25^∘ 11' 10.0”], and the dense cores L1527[L1527: α_J2000=4^ h 39^ m 53.9^ s and δ_J2000=+26^∘ 03' 11.0”] and L483[L483: α_J2000=18^ h 17^ m 29.8^ s and δ_J2000=-4^∘ 39' 38.3”], which host a Class 0 protostar. All observations were done using the frequency switching technique to maximize the on-source telescope time and to improve the sensitivity of the spectra.
The Yebes 40m observations consisted in a full scan of the Q band (31-50 GHz) acquired in a single spectral setup with a 7 mm receiver, which was connected to a fast Fourier transform spectrometer that provides a spectral resolution of 38 kHz <cit.>. The data of TMC-1 CP are part of the on-going QUIJOTE line survey <cit.>. The spectra used here were obtained between November 2019 and November 2022 and contain a total of 758 h of on-source telescope time in each polarization (twice this value after averaging both polarizations). Two frequency throws of 8 and 10 MHz were used. The sensitivity ranges from 0.13 to 0.4 mK in antenna temperature. The data of L1544 were taken between October and December 2020 toward the position of the methanol peak of this core, where complex organic molecules have been detected <cit.>, and are part of a high-sensitivity Q-band survey (31 h on-source; Jiménez-Serra et al. in prep.). The data for the other sources were obtained from July 2020 to February 2023 for L483 (the total on-source telescope time is 103 h), from May to November 2021 for L1527 (40 h on-source), from July 2021 to January 2023 for Lupus-1A (120 h on-source), and from September to November 2021 for L1495B (45 h on-source). Different frequency throws were adopted depending on the observing period, which resulted from tests done at the Yebes 40m telescope to find the optimal frequency throw. We used frequency throws of 10 MHz and 10.52 MHz for L483, 10 MHz for L1544, 8 MHz for L1527, and 10.52 MHz for Lupus-1A and L1495B. The antenna temperature noise levels, after averaging horizontal and vertical polarizations, are in the range 0.4-1.0 mK for L483, 1.3-1.8 mK for L1544, 0.7-2.7 mK for L1527, 0.7-2.8 mK for Lupus-1A, and 0.8-2.6 mK for L1495B.
The observations carried out with the IRAM 30m telescope used the 3 mm EMIR receiver connected to a fast Fourier transform spectrometer that provides a spectral resolution of 49 kHz. Different spectral regions within the 3 mm band (72-116 GHz) were covered depending on the source. The data of TMC-1 CP consist of a 3 mm line survey <cit.> and spectra observed in 2021 <cit.>. The data of L483 consists of a line survey in the 80-116 GHz region (see ), together with data in the 72-80 GHz region, which are described in <cit.>. Data of Lupus-1A, L1495B, L1521F, L1251A, L1512, L1172, and L1389 were observed from September to November 2014 during a previous search for molecular anions at mm wavelengths (see ). Additional data of Lupus-1A were gathered during 2021 and 2022 during a project aimed to observe H_2NC <cit.>. In the case of L1527, the IRAM 30m data used were observed in July and August 2007 with the old ABCD receivers connected to an autocorrelator that provided spectral resolutions of 40 or 80 kHz <cit.>.
The half power beam width (HPBW) of the Yebes 40m telescope is in the range 35-57 ” in the Q band, while that of the IRAM 30m telescope ranges between 21 ” and 34 ” in the 3 mm band. The beam size can be fitted as a function of frequency as HPBW(”) = 1763/ν(GHz) for the Yebes 40m telescope and as HPBW(”) = 2460/ν(GHz) for the IRAM 30m telescope. Therefore, the beam size of the IRAM 30m telescope at 72 GHz is similar to that of the Yebes 40m at 50 GHz. The intensity scale in both the Yebes 40m and IRAM 30m telescopes is antenna temperature, T_A^*, for which we estimate a calibration error of 10 %. To convert antenna temperature into main beam brightness temperature see foot of Table <ref>. All data were analyzed using the program CLASS of the GILDAS software[https://www.iram.fr/IRAMFR/GILDAS/].
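For quick reference, the beam-size fits quoted above can be wrapped in a small helper (a Python sketch; the coefficients are exactly those given in the text):

def hpbw_arcsec(freq_ghz, telescope="Yebes40m"):
    # HPBW(") = 1763/nu(GHz) for the Yebes 40m and 2460/nu(GHz) for the IRAM 30m
    coeff = {"Yebes40m": 1763.0, "IRAM30m": 2460.0}
    return coeff[telescope] / freq_ghz

print(hpbw_arcsec(50.0))               # ~35" at the upper end of the Q band
print(hpbw_arcsec(72.0, "IRAM30m"))    # ~34" at 72 GHz, similar to the above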
§.§ Observational dataset of anions in dark clouds
In Table <ref> we compile the line parameters of all the lines of negative molecular ions detected toward cold dark clouds, including lines from this study and from the literature. The line parameters of C_7N^- observed toward TMC-1 CP are given in <cit.> and are not repeated here. In the case of C_10H^- in TMC-1 CP we do not include line parameters here because the detection by <cit.> is not based on individual lines but on spectral stack of many lines. The lines of molecular anions presented in this study are shown in Fig. <ref> for C_6H^-, Fig. <ref> for C_4H^-, and Fig. <ref> for the remaining anions, i.e., C_8H^-, C_3N^-, and C_5N^-. Since we are interested in the determination of anion-to-neutral abundance ratios, we also need the lines of the corresponding neutral counterpart of each molecular anion, which are the radicals C_4H, C_6H, C_8H, C_3N, and C_5N. The velocity-integrated intensities of the lines of these species are given in Table <ref>.
According to the literature, the most prevalent molecular anion, C_6H^-, has been detected in 11 cold dark clouds: TMC-1 CP <cit.>, L1527 and Lupus-1A <cit.>, L1544 and L1521F <cit.>, and L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C <cit.>. All these detections were based on two individual or stacked lines lying in the frequency range 11-31 GHz (see Table <ref>). Here we present additional lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544, together with the detection of C_6H^- in a new source, L483, through six lines lying in the Q band (see Fig. <ref>).
Molecular anions different to C_6H^- have turned out to be more difficult to detect as they have been only seen in a few sources. For example, C_4H^- has been only detected in three dark clouds, L1527 <cit.>, Lupus-1A <cit.>, and TMC-1 CP <cit.>. These detections rely on one or two lines (see Table <ref>). Here we report the detection of two additional lines of C_4H^- in the Q band toward these three sources, together with the detection of C_4H^- in one new source, L483 (see Fig. <ref>).
The hydrocarbon anion C_8H^- has been observed in two interstellar sources. <cit.> reported the detection of four lines in the 12-19 GHz frequency range toward TMC-1 CP, while <cit.> reported the detection of this anion in Lupus-1A through two stacked lines at 18.7 and 21.0 GHz (see Table <ref>). Thanks to our Yebes 40m data, we present new lines of C_8H^- in the Q band toward TMC-1 CP (see Fig. <ref>).
Finally, the nitrile anions C_3N^- and C_5N^- have resulted to be quite elusive as they have been only seen in one cold dark cloud, TMC-1 CP <cit.>. Here we present the same lines of C_3N^- and C_5N^- reported in <cit.> in the Q band, but with improved signal-to-noise ratios, plus two additional lines of C_3N^- in the 3 mm band. We also present the detection of C_3N^- and C_5N^- in one additional source, Lupus-1A (see Fig. <ref>).
§ PHYSICAL PARAMETERS OF THE SOURCES
The interstellar clouds where molecular anions have been detected are 12 in total and comprise cold dense cores in different evolutionary stages: starless, prestellar, and protostellar (see Table <ref>). The classification as protostellar cores is evident in the cases of L1527 and L483, as the targeted positions are those of the infrared sources IRAS 04368+2557 and IRAS 18148-0440, respectively <cit.>. We also classified L1251A, L1172, and L1389 as protostellar sources based on the proximity of an infrared source (L1251A IRS3, CB17 MMS, and IRAS 21017+6742, respectively) to the positions targeted by <cit.>. The differentiation between starless and prestellar cores is in some cases more ambiguous. In those cases we followed the criterion based on the N_2D^+/N_2H^+ column density ratio by <cit.>. In any case, for our purposes it is not very important whether a given core is starless or prestellar.
To study the abundance and excitation of molecular anions in these 12 interstellar sources through non-LTE calculations we need to know the physical parameters of the clouds, mainly the gas kinetic temperature and the H_2 volume density, but also the emission size of the anions and the linewidth. The adopted parameters are summarized in Table <ref>.
Given that C_6H^- has not been mapped in any interstellar cloud to date, it is not known whether the emission of molecular anions in each of the 12 sources is extended compared to the telescope beam sizes, which are in the range 21-67 ” for the Yebes 40m, IRAM 30m, and GBT telescopes at the frequencies targeted for the observations of anions. Therefore one has to rely on maps of related species. In the case of TMC-1 CP we assume that anions are distributed in the sky as a circle with a diameter of 80 ” based on the emission distribution of C_6H mapped by <cit.>. Recent maps carried out with the Yebes 40m telescope <cit.> support the previous results of <cit.>. For the remaining 11 sources, the emission distribution of C_6H is not known and thus we assume that the emission of anions is extended with respect to the telescope beam. This assumption is supported by the extended nature of HC_3N emission in the cases of L1495B, L1251A, L1512, L1172, L1389, and TMC-1 C, according to the maps presented by <cit.>, and of multiple molecular species, including C_4H, in L1544, according to the maps reported by <cit.>.
The linewidth adopted for each source (see Table <ref>) was calculated as the arithmetic mean of the values derived for the lines of C_6H^- in the Q band for TMC-1 CP, Lupus-1A, L1527, L1495B, and L1544. In the case of L483 we adopted the value derived by <cit.> from the analysis of all the lines in the 3 mm band. For L1521F, L1251A, L1512, L1172, and L1389 the adopted linewidths come from IRAM 30m observations of CH_3CCH in the 3 mm band (see Sect. <ref>). Finally, for TMC-1 C we adopted as linewidth that derived for HC_3N by <cit.>.
The gas kinetic temperature was determined for some of the sources from the J = 5-4 and J = 6-5 rotational transitions of CH_3CCH, which lie around 85.4 and 102.5 GHz, respectively. We have IRAM 30m data of these lines for TMC-1 CP, Lupus-1A, L483, L1495B, and L1521F, while for L1527 we used the data obtained with the Nobeyama 45m telescope by <cit.>. Typically, the K = 0, 1, and 2 components are detected, which allows us to use the line intensity ratio between the K = 1 and K = 2 components, belonging to the E symmetry species, to derive the gas kinetic temperature. Since transitions with Δ K ≠ 0 are radiatively forbidden, the relative populations of the K = 1 and K = 2 levels are controlled by collisions with H_2 and are thus thermalized at the kinetic temperature of H_2. We do not use the K = 0 component because it belongs to a different symmetry species, A, and interconversion between the A and E species is expected to be slow in cold dense clouds, so their relative populations may not necessarily reflect the gas kinetic temperature.
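As an illustration of the method, the kinetic temperature follows from the K = 1 to K = 2 integrated-intensity ratio under the assumption that both E-species ladders are thermalized. The sketch below uses indicative spectroscopic constants for CH_3CCH taken from the literature (they are assumptions here, not values derived in this work) and the standard symmetric-top line strength S(J,K) = (J^2 - K^2)/J.

    import numpy as np

    H_OVER_K = 0.0479924   # K per GHz
    A, B = 158.6, 8.546    # GHz; indicative CH3CCH constants (assumed values)
    DE21 = 3.0 * (A - B) * H_OVER_K   # K=2 minus K=1 ladder energy gap, ~21.6 K

    def tkin_from_k_components(w1, w2, j_up):
        # Tkin from the K=1/K=2 integrated-intensity ratio of a J -> J-1
        # line, assuming both E-species K ladders follow a Boltzmann
        # distribution at the kinetic temperature.
        s1 = (j_up**2 - 1) / j_up   # symmetric-top line strength, K = 1
        s2 = (j_up**2 - 4) / j_up   # K = 2
        return DE21 / np.log((w1 / w2) * (s2 / s1))

    # e.g., a J = 5-4 spectrum with W(K=1)/W(K=2) = 12 gives Tkin ~ 9 K
    print(tkin_from_k_components(12.0, 1.0, 5))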
For TMC-1 CP we derive kinetic temperatures of 8.8 ± 0.6 K and 9.0 ± 0.6 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. Similarly, using the J = 8-7 through J = 12-11 lines of CH_3C_4H, which lie in the Q band, we derive temperatures of 9.1 ± 0.7 K, 8.7 ± 0.6 K, 9.0 ± 0.6 K, 8.1 ± 0.7 K, and 9.1 ± 0.8 K, respectively. We thus adopt a gas kinetic temperature of 9 K, which is slightly lower than values derived in previous studies, 11.0 ± 1.0 K and 10.1 ± 0.9 K at two positions close to the cyanopolyyne peak using NH_3 <cit.> and 9.9 ± 1.5 K from CH_2CCH <cit.>. In Lupus-1A we derive temperatures of 11.4 ± 1.7 K and 10.2 ± 1.1 K from the J = 5-4 and J = 6-5 lines of CH_3CCH, respectively. We thus adopt a gas kinetic temperature of 11 K, which is somewhat below the value of 14 ± 2 K derived in <cit.> using the K = 0, 1, and 2 components of the J = 5-4 transition of CH_3CCH. In L1527 we derive 13.6 ± 2.5 K and 15.1 ± 2.4 K from the line parameters of CH_3CCH J = 5-4 and J = 6-5 reported by <cit.>. We thus adopt a kinetic temperature of 14 K, which agrees perfectly with the value of 13.9 K derived by <cit.> using CH_3CCH as well. The gas kinetic temperature in L483 has been estimated to be 10 K by <cit.> using NH_3, while <cit.> derive values of 10 K and 15 ± 2 K using either ^13CO or CH_3CCH. A new analysis of the CH_3CCH data of <cit.> in which the weak K = 3 components are neglected and only the K = 1 and K = 2 components are used results in kinetic temperatures of 11.5 ± 1.1 K and 12.6 ± 1.5 K, depending on whether the J = 5-4 or J = 6-5 transition is used. We thus adopt a kinetic temperature of 12 K for L483. For L1495B we derive 9.1 ± 0.9 K and 9.2 ± 0.7 K from CH_3CCH J = 5-4 and J = 6-5, and we thus adopt a kinetic temperature of 9 K. In L1521F we also adopt a gas kinetic temperature of 9 K since the derived temperatures from CH_3CCH J = 5-4 and J = 6-5 are 9.0 ± 0.7 K and 8.9 ± 0.9 K. The value agrees well with the temperature of 9.1 ± 1.0 K derived by <cit.> using NH_3. For the remaining cores, the gas kinetic temperatures were taken from the literature, as summarized in Table <ref>.
To estimate the volume density of H_2 we used the ^13C isotopologues of HC_3N when these data were available. We have Yebes 40m data of the J = 4-3 and J = 5-4 lines of H^13CCCN, HC^13CCN, and HCC^13CN for TMC-1 CP, Lupus-1A, L1527, and L483. Data for one or various lines of these three isotopologues in the 3 mm band are also available from the IRAM 30m telescope (see Sect. <ref>) or from the Nobeyama 45m telescope (for L1527; see <cit.>). Using the ^13C isotopologues of HC_3N turned out to constrain the H_2 density much better than using the main isotopologue because one gets rid of optical depth effects. We carried out non-LTE calculations under the Large Velocity Gradient (LVG) formalism, adopting the gas kinetic temperature and linewidth given in Table <ref> and varying the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. As collision rate coefficients we used those calculated by <cit.> for HC_3N with ortho and para H_2, where we adopted a low ortho-to-para ratio of H_2 of 10^-3, which is theoretically expected for cold dark clouds (e.g., <cit.>). The exact value of the ortho-to-para ratio of H_2 is not very important as long as the para form is well in excess of the ortho form, so that collisions with para H_2 dominate. The best estimates of the column density of the ^13C isotopologue of HC_3N and the volume density of H_2 are found by minimizing χ^2, which is defined as
χ^2 = ∑_i=1^N_l[ (I_calc - I_obs)/σ]^2,
where the sum extends over the N_l lines available, I_calc and I_obs are the calculated and observed velocity-integrated brightness temperatures, and σ are the uncertainties in I_obs, which include the error given by the Gaussian fit and the calibration error of 10 %. To evaluate the goodness of the fit, we use the reduced χ^2, defined as χ^2_red = χ^2_min/(N_l-p), where χ^2_min is the minimum value of χ^2 and p is the number of free parameters. Typically, a value of χ^2_red ≲ 1 indicates a good quality of fit. In this case we have p = 2 because there are two free parameters, the column density of the ^13C isotopologue of HC_3N and the H_2 volume density. Errors in these two parameters are given as 1 σ, where for p = 2 the 1 σ level (68 % confidence) corresponds to χ^2_min + 2.3. The same statistical analysis is adopted in Sect. <ref> when studying molecular anions and their neutral counterparts through the LVG method. In some cases, in which the number of lines is small or the H_2 density is poorly constrained, the H_2 volume density is kept fixed. In those cases p = 1 and the 1 σ error (68 % confidence) in the column density is given by χ^2_min + 1.0.
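The grid search described above can be sketched as follows. Here `toy_model` is a purely illustrative stand-in for the LVG radiative transfer call (not a physical model), and the confidence thresholds are those quoted in the text.

    import numpy as np

    def toy_model(log_ncol, log_nh2):
        # Illustrative stand-in for the LVG call: intensities grow with
        # column density and saturate with H2 density. Not physical.
        return (np.array([0.50, 0.30, 0.20]) * 10 ** (log_ncol - 12.0)
                * (1.0 - np.exp(-10 ** (log_nh2 - 4.0))))

    w_obs = np.array([0.30, 0.19, 0.11])            # intensities, K km/s
    sigma = np.sqrt((0.1 * w_obs) ** 2 + 0.01**2)   # fit + 10% calibration

    log_ncol = np.linspace(11.0, 13.0, 81)
    log_nh2 = np.linspace(3.0, 6.0, 61)
    chi2 = np.array([[np.sum(((toy_model(nc, nh) - w_obs) / sigma) ** 2)
                      for nh in log_nh2] for nc in log_ncol])

    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    conf68 = chi2 <= chi2.min() + 2.3   # p = 2 free parameters; +1.0 when p = 1
    print(log_ncol[i], log_nh2[j], chi2.min() / (len(w_obs) - 2))  # chi2_red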
In Fig. <ref> we show the results for TMC-1 CP. In this starless core the H_2 volume density is well constrained by the four available lines of the three ^13C isotopologues of HC_3N to a narrow range of (0.9-1.1) × 10^4 cm^-3 with very low values of χ^2_red. We adopt as the H_2 density in TMC-1 CP the arithmetic mean of the values derived for the three isotopologues, i.e., 1.0 × 10^4 cm^-3 (see Table <ref>). Similar calculations allow us to derive H_2 volume densities of 1.8 × 10^4 cm^-3 for Lupus-1A, 5.6 × 10^4 cm^-3 for L483, and a lower limit of 10^5 cm^-3 for L1527 (see Table <ref>). The value for L483 is of the same order as those derived in the literature, 3.4 × 10^4 cm^-3 from the model of <cit.> and 3 × 10^4 cm^-3 from either NH_3 <cit.> or CH_3OH <cit.>. For L1495B we could only retrieve data for one of the ^13C isotopologues of HC_3N, HCC^13CN, from which we derive a H_2 density of 1.6 × 10^4 cm^-3 (see Table <ref>). In the case of L1521F, ^13C isotopologues of HC_3N were not available and thus we used lines of HCCNC, adopting the collision rate coefficients calculated by <cit.>, to derive a rough estimate of the H_2 volume density of 1 × 10^4 cm^-3 (see Table <ref>). Higher H_2 densities, in the range (1-5) × 10^5 cm^-3, are derived for L1521F from N_2H^+ and N_2D^+ <cit.>, probably because these molecules trace the innermost dense regions depleted in CO.
For the remaining sources we adopted H_2 volume densities from the literature (see Table <ref>). For L1544 we adopted a value of 2 × 10^4 cm^-3 from the analysis of SO and SO_2 lines by <cit.>. This H_2 density is in agreement with the range of values, (1.5-4.0) × 10^4 cm^-3, found by <cit.> in their excitation analysis of HCCNC and HNC_3. Note that H_2 volume densities toward the dust peak are larger than 10^6 cm^-3. However, as shown by <cit.>, the emission of C_4H probes the outer shells and thus a density of a few 10^4 cm^-3 is appropriate for our calculations toward the CH_3OH peak. In the cases of L1251A, L1512, L1172, L1389, and TMC-1 C, we adopted the H_2 densities from the analysis of HC_3N lines by <cit.>. The reliability of the H_2 volume densities derived by these authors is supported by the fact that the densities they derive for TMC-1 CP and L1495B, 1.0 × 10^4 cm^-3 and 1.1 × 10^4 cm^-3, respectively, are close to the values determined in this study from ^13C isotopologues of HC_3N (see Table <ref>).
In spite of the different evolutionary status of the 12 anion-containing clouds, the gas kinetic temperatures and H_2 volume densities at the scales probed by the Yebes 40m, IRAM 30m, and GBT telescopes are not that different. Gas temperatures are restricted to the very narrow range 9-14 K, while H_2 densities are in the range (1.0-7.5) × 10^4 cm^-3, with the exception of L1527, which has an estimated density in excess of 10^5 cm^-3 (see Table <ref>).
§ EXCITATION OF ANIONS: GENERAL CONSIDERATIONS
One may expect that, given the large dipole moments of molecular anions, as high as 10.4 D in the case of C_8H^- <cit.>, the rotational levels should be populated out of thermodynamic equilibrium in cold dark clouds. This is not always the case, as will be shown here. To get insight into the excitation of negative molecular ions in interstellar clouds we ran non-LTE calculations under the LVG formalism adopting typical parameters of cold dark clouds, i.e., a gas kinetic temperature of 10 K, a column density of 10^11 cm^-2 (of the order of the values typically derived for anions in cold dark clouds; see references in Sect. <ref>), and a linewidth of 0.5 km s^-1 (see Table <ref>), and we varied the volume density of H_2 between 10^3 and 10^6 cm^-3. The sets of rate coefficients for inelastic collisions with H_2 adopted are summarized in Table <ref>. In those cases in which only collisions with He are available, we scaled the rate coefficients by multiplying them by the square root of the ratio of the reduced masses of the H_2 and He colliding systems. When inelastic collisions for ortho and para H_2 are available, we adopted an ortho-to-para ratio of H_2 of 10^-3.
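The reduced-mass scaling used when only He rates are available amounts to a single multiplicative factor; a minimal sketch (the atomic masses in amu are ours):

    import numpy as np

    def he_to_h2_scaling(m_target):
        # sqrt of the reduced-mass ratio of the He and H2 colliding
        # systems, applied to He rate coefficients to approximate H2 rates.
        m_he, m_h2 = 4.0026, 2.0159  # amu
        mu = lambda m1, m2: m1 * m2 / (m1 + m2)
        return np.sqrt(mu(m_target, m_he) / mu(m_target, m_h2))

    print(he_to_h2_scaling(50.0))  # C3N, 50 amu: ~1.39 (factor quoted below)
    print(he_to_h2_scaling(73.0))  # C6H, 73 amu: ~1.39-1.40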
In Fig. <ref> we show the calculated excitation temperatures (T_ex) of lines of molecular anions as a function of the quantum number J of the upper level and the H_2 volume density. The different panels correspond to different anions and show the regimes in which lines are either thermalized (T_ex ∼ 10 K) or subthermally excited (T_ex < 10 K). To interpret these results it is useful to think in terms of the critical density, which for a given rotational level can be evaluated as the ratio of the de-excitation rates due to spontaneous emission and due to inelastic collisions (e.g., <cit.>). Collision rate coefficients for transitions with Δ J = -1 or -2, which are usually the most efficient, are of the order of 10^-10 cm^3 s^-1 at a temperature of 10 K for the anions for which calculations have been carried out (see Table <ref>). The Einstein coefficient for spontaneous emission is proportional to the square of the dipole moment and the cube of the frequency. Therefore, the critical density (and thus the degree of departure from LTE) is very different depending on the dipole moment of the anion and on the frequency of the transition. Regarding the dependence of the critical density on the dipole moment, C_2H^- and CN^- have a similar weight, and thus their low-J lines, which are the ones observable in cold clouds, have similar frequencies. However, these two anions have quite different dipole moments, 3.1 and 0.65 Debye, respectively <cit.>, which makes them show a different excitation pattern. As seen in Fig. <ref>, the low-J lines of CN^- are in LTE at densities above 10^5 cm^-3, while those of C_2H^- require much higher H_2 densities to be in LTE. With respect to the dependence of the critical density on frequency, as one moves along the series of increasing weight C_2H^- → C_4H^- → C_6H^- or CN^- → C_3N^- → C_5N^- (see Fig. <ref>), the most favorable lines for detection in cold clouds (those with upper level energies around 10 K) shift to lower frequencies, which makes the Einstein coefficients, and thus the critical densities, decrease. That is, the lines of anions targeted by radiotelescopes are more likely to be thermalized for heavy anions than for light ones (see the higher degree of thermalization when moving from lighter to heavier anions in Fig. <ref>).
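To make the critical-density argument concrete, a small sketch for a linear rotor is given below. The numerical coefficient in the Einstein A formula is the standard one (it reproduces the well-known CO J = 1-0 value); the C_6H^- dipole moment of ~8.2 D is a literature value adopted here as an assumption, and k_coll ~ 10^-10 cm^3 s^-1 is the representative collision rate quoted above.

    import numpy as np

    def einstein_a(nu_ghz, mu_debye, j_low):
        # A_ul (s^-1) for the J_low+1 -> J_low transition of a linear rotor;
        # the coefficient reproduces, e.g., A(CO 1-0) = 7.2e-8 s^-1.
        return 1.163e-11 * nu_ghz**3 * mu_debye**2 * (j_low + 1) / (2 * j_low + 3)

    def n_crit(nu_ghz, mu_debye, j_low, k_coll=1e-10):
        # Critical density (cm^-3): spontaneous over collisional
        # de-excitation, with k_coll ~ 1e-10 cm^3 s^-1 at 10 K.
        return einstein_a(nu_ghz, mu_debye, j_low) / k_coll

    # C6H^- J = 12-11 at 33044.488 MHz, assuming mu ~ 8.2 D (literature value):
    print(n_crit(33.044, 8.2, 11))  # ~1e5 cm^-3: Q-band lines thermalize only
                                    # at the high end of dark cloud densities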
The volume densities of H_2 in cold dark clouds are typically in the range 10^4-10^5 cm^-3 (see Table <ref>). Therefore, if C_2H^- is detected in a cold dark cloud at some point in the future, the most favorable line for detection, the J = 1-0, would most likely be subthermally excited, making it necessary to use the collision rate coefficients to derive a precise abundance. In the case of a potential future detection of CN^- in a cold interstellar cloud, the J = 1-0 line would be in LTE only if the H_2 density of the cloud is ≥ 10^5 cm^-3 and out of LTE for lower densities (see Fig. <ref>). The medium-sized anions C_4H^- and C_3N^- are predicted to have their Q band lines more or less close to LTE depending on whether the H_2 density is closer to 10^5 or to 10^4 cm^-3, while the lines in the 3 mm band are likely to be subthermally excited unless the H_2 density is above 10^5 cm^-3 (see Fig. <ref>). For the heavier anions C_6H^- and C_5N^-, the lines in the K band are predicted to be thermalized at the gas kinetic temperature, while those in the Q band may or may not be thermalized depending on the H_2 density (see Fig. <ref>). Comparatively, the Q band lines of C_5N^- are more easily thermalized than those of C_6H^- because C_5N^- has a smaller dipole moment than C_6H^-. We note that the results concerning C_5N^- have to be taken with caution because we used the collision rate coefficients calculated for C_6H^- in the absence of specific collision data for C_5N^- (see Table <ref>). We did similar calculations for C_8H^-, C_10H^-, and C_7N^- (not shown) using the collision rate coefficients of C_6H^-. We find that the lines in a given spectral range deviate more from thermalization as the size of the anion increases. In the K band, the lines of C_6H^- and C_5N^- are thermalized, while those of C_10H^- become subthermally excited at low densities, around 10^4 cm^-3. In the Q band the deviation from thermalization is even more marked for these large anions.
In summary, non-LTE calculations are particularly important to derive accurate abundances for anions when just one or two lines are detected and these lie in a regime of subthermal excitation, as indicated in Fig. <ref>. This becomes critical, in order of decreasing importance, for C_2H^-, CN^-, C_4H^-, C_3N^-, C_6H^-, C_8H^-, and C_5N^- (for the three latter only if observed at frequencies above 30 GHz). The drawback is that the H_2 volume density must be known with good precision if one aims to determine the anion column density accurately with only one or two lines.
In the case of the neutral counterparts of molecular anions, collision rate coefficients have been calculated for C_6H and C_3N with He as collider <cit.>. We thus carried out LVG calculations similar to those presented before for the anions. In this case we adopt a higher column density of 10^12 cm^-2, in line with typical values in cold dark clouds (see references in Sect. <ref>). The results are shown in Fig. <ref>. It is seen that in the case of C_3N, the excitation pattern is similar to that of the corresponding anion, C_3N^-, shown in Fig. <ref>. The thermalization of C_3N occurs at densities somewhat higher than for C_3N^-, mainly because the collision rate coefficients calculated for C_3N with He <cit.> are smaller than those computed for C_3N^- with para H_2 <cit.>. We note that this conclusion may change if the collision rate coefficients of C_3N with H_2 are significantly larger than implied by the factor of 1.39 due to the change in the reduced mass when replacing He by H_2. In the case of C_6H, however, the excitation behavior is very different from that of C_6H^- (compare C_6H^- in Fig. <ref> with C_6H in Fig. <ref>). The rotational levels of the radical are much more subthermally excited than those of the corresponding anion, with a difference in the critical density of about a factor of 30. This is a consequence of the much smaller collision rate coefficients calculated for C_6H with He <cit.> compared to those calculated for C_6H^- with para H_2 <cit.>, a difference that is well beyond the factor of 1.40 due to the change in the reduced mass when replacing He by H_2.
§ ANION ABUNDANCES
We evaluated the column densities of molecular anions and their corresponding neutral counterparts in the 12 studied sources by carrying out LVG calculations similar to those described in Sect. <ref> for the ^13C isotopologues of HC_3N. We used the collision rate coefficients given in Table <ref>. Gas kinetic temperatures and linewidths were fixed to the values given in Table <ref>, the ortho-to-para ratio of H_2, when needed, was fixed to 10^-3, and both the column density of the species under study and the H_2 volume density were varied. The best estimates for these two parameters were found by minimization of χ^2 (see Sect. <ref>). In addition, to evaluate the rotational temperature, and thus the level of departure from LTE, and to have an independent estimate of the column density, we constructed rotation diagrams.
The LVG method should provide a more accurate determination of the column density than the rotation diagram, as long as the collision rate coefficients with para H_2 and the gas kinetic temperature are accurately known. If an independent determination of the H_2 volume density is available from some density tracer (in our case the ^13C isotopologues of HC_3N are used in several sources), good agreement between the values of n(H_2) obtained from the species under study and from the density tracer supports the reliability of the LVG analysis. We note that the densities do not need to be similar if the species studied and the density tracer are distributed over different regions, although in our case we expect similar distributions for HC_3N, molecular anions, and their neutral counterparts, as all of them are carbon chains. A low value of χ^2_red, typically ≲ 1, is also indicative of the goodness of the LVG analysis. If the quality of the LVG analysis is not satisfactory or the collision rate coefficients are not accurate, a rotation diagram may still provide a good estimate of the column density if the number of detected lines is high enough and they span a wide range of upper level energies. Therefore, a high number of detected lines makes it likely to end up with a correct determination of the column density. On the other hand, if only one or two lines are detected, the accuracy with which the column density can be determined relies heavily on whether the H_2 volume density, in the case of an LVG calculation, or the rotational temperature, in the case of the rotation diagram, is known with some confidence.
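For reference, a minimal rotation-diagram fit consistent with the assumptions discussed above (optically thin lines, a single rotational temperature, Rayleigh-Jeans limit) can be sketched as follows; the function name and calling convention are ours.

    import numpy as np

    def rotation_diagram(w, nu_ghz, a_ul, g_u, e_u):
        # Fit ln(Nu/gu) = ln(N/Q) - Eu/Trot. Inputs are arrays over the
        # detected lines: w in K km/s, a_ul in 1/s, e_u in K.
        k, h, c = 1.380649e-16, 6.62607015e-27, 2.99792458e10  # cgs
        n_u = 8 * np.pi * k * (nu_ghz * 1e9) ** 2 * (w * 1e5) / (h * c**3 * a_ul)
        slope, intercept = np.polyfit(np.asarray(e_u), np.log(n_u / g_u), 1)
        return -1.0 / slope, np.exp(intercept)  # Trot [K], N/Q [cm^-2]

    # The total column follows as N = (N/Q) * Q(Trot) once the partition
    # function of the species is evaluated at Trot.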
In Table <ref> we present the results from the LVG analysis and the rotation diagram for all molecular anions detected in cold dark clouds and for the corresponding neutral counterparts, and we compare the column densities derived with values from the literature, when available. In general, the column densities derived through the rotation diagram agree within 50 % with those derived by the LVG analysis. The sole exceptions are C_8H in TMC-1 CP and C_6H in TMC-1 C. In the former case, the lack of specific collision rate coefficients for C_8H probably introduces an uncertainty in the determination of the column density. In the case of C_6H in TMC-1 C, the suspected problem in the collision rate coefficients used for C_6H (see below) is probably behind the excessively large column density derived by the LVG method.
We first discuss the excitation and abundance analyses carried out for the negative ions. For the anions detected in TMC-1 CP through more than two lines, i.e., C_6H^-, C_8H^-, C_3N^-, and C_5N^-, the quality of the LVG analysis is good (in Fig. <ref> we show the case of C_3N^-). First, the number of lines available is sufficiently high and they cover a wide range of upper level energies. Second, the values of χ^2_red are ≲ 1. And third, the H_2 densities derived are of the same order (within a factor of two) as that obtained through the ^13C isotopologues of HC_3N. The rotational temperatures derived by the rotation diagram indicate subthermal excitation, which is consistent with the H_2 densities derived and the excitation analysis presented in Sect. <ref>. We note that the column densities derived by the rotation diagram are systematically higher, by ∼ 50 %, compared to those derived through the LVG analysis. These differences are due to the breakdown of various assumptions made in the frame of the rotation diagram method, mainly the assumption of a uniform excitation temperature across all transitions and the validity of the Rayleigh-Jeans limit. The approximation exp(hν/kT_ex) - 1 ≈ hν/kT_ex alone, implicitly made by the rotation diagram method in the Rayleigh-Jeans limit, already implies errors of 10-20 % in the determination of the column density for these anions. We therefore adopt as preferred values for the column densities those derived through the LVG method and assign an uncertainty of 15 %, which is the typical statistical error in the determination of the column density by the LVG analysis. The recommended values are given in Table <ref>. Based on the same arguments, we conclude that the LVG analysis is satisfactory for C_6H^- and C_5N^- in Lupus-1A, C_6H^- and C_4H^- in L1527, and C_6H^- in L483, and thus adopt the column densities derived by the LVG method with the same estimated uncertainty of 15 % (see Table <ref>). In other cases the LVG analysis is less reliable for a variety of reasons: only one or two lines are available (C_4H^- in TMC-1 CP, C_8H^- and C_3N^- in Lupus-1A, C_4H^- in L483, and C_6H^- in the clouds L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C), the parameter χ^2_red is well above unity (C_4H^- in Lupus-1A), or the column density has a sizable error (C_6H^- in L1495B and L1544). In those cases we adopt the column densities derived by the LVG method but assign a higher uncertainty of 30 % (values are given in Table <ref>).
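The size of the Rayleigh-Jeans error mentioned above is easy to verify; for representative Q-band frequencies and subthermal excitation temperatures (the T_ex values below are illustrative assumptions):

    import numpy as np

    h_over_k = 0.0479924  # K per GHz
    for nu_ghz, t_ex in [(33.0, 8.0), (38.9, 6.0), (47.2, 6.5)]:
        x = h_over_k * nu_ghz / t_ex
        print(nu_ghz, t_ex, np.expm1(x) / x - 1.0)  # ~0.11, 0.17, 0.20

The fractional error (exp(x) - 1)/x - 1 indeed falls in the 10-20 % range quoted in the text.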
In order to derive anion-to-neutral abundance ratios, we applied the same analysis carried out for the anions to the corresponding neutral counterparts. We first focus on the radical C_6H. There is one striking issue in the LVG analysis carried out for this species: the H_2 volume densities derived through C_6H are systematically higher, by 1-2 orders of magnitude, than those derived through the ^13C isotopologues of HC_3N (see Fig. <ref>). This fact, together with the marked difference in the excitation pattern compared to that of C_6H^- discussed in Sect. <ref>, suggests that the collision coefficients adopted for C_6H, which are based on the C_6H – He system studied by <cit.>, are too small. A further problem when using the collision coefficients of <cit.> is that the line intensities from the ^2Π_1/2 state, which in TMC-1 CP are around 100 times smaller than those of the ^2Π_3/2 state, are overestimated by a factor of ∼ 10. All these issues indicate that it is worthwhile to undertake calculations of the collision rate coefficients of C_6H with H_2. The suspected problem in the collision rate coefficients of C_6H leads us to adopt a conservative uncertainty of 30 % in the column densities derived. Moreover, in those sources in which C_6H is observed through just a few lines (L1521F, L1251A, L1512, L1172, L1389, and TMC-1 C) we need to fix the H_2 density to the values derived through other density tracers (see Table <ref>), and given the marked difference between the H_2 densities derived through C_6H and through other density tracers, it is likely that the C_6H column densities derived by the LVG method are unreliable. In these cases we therefore adopted as preferred C_6H column densities those obtained from the rotation diagram (see Table <ref>). For the other neutral radicals, we adopted the column densities derived by the LVG method with an estimated uncertainty of 15 % when the LVG analysis was satisfactory (C_3N and C_5N in TMC-1 CP, C_4H, C_3N, and C_5N in Lupus-1A, and C_4H in L1527) and a higher uncertainty of 30 % otherwise (C_4H and C_8H in TMC-1 CP, C_8H in Lupus-1A, and C_4H in L483).
The recommended column densities for molecular anions and their neutral counterparts, and the corresponding anion-to-neutral ratios, are given in Table <ref>. Since the lines of a given anion and its corresponding neutral counterpart were in most cases observed simultaneously, we expect the error due to calibration to cancel when computing anion-to-neutral ratios. We therefore subtracted the 10 % calibration error from the column density uncertainties when computing errors in the anion-to-neutral ratios. In general, the recommended anion-to-neutral abundance ratios agree within 50 % with the values reported in the literature, when available. Larger differences, of up to a factor of two, are found for C_6H^- in L1527 and L1495B and for C_5N^- in TMC-1 CP. The most drastic differences are found for the C_4H^-/C_4H abundance ratio, for which we derive values much higher than those reported in the literature. The differences are largely due to the fact that here we adopt a revised value of the dipole moment of C_4H (2.10 D; <cit.>), which is significantly higher than the value of 0.87 D calculated by <cit.> and adopted in previous studies. This means the column densities of C_4H are revised downward by a factor of ∼ 6, and consequently the C_4H^-/C_4H ratios are revised upward by the same factor.
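Two of the numerical points above are easy to check: the 1/μ^2 scaling of optically thin column densities behind the factor-of-∼6 revision of N(C_4H), and the removal of the common 10 % calibration error when propagating uncertainties into the ratios (the helper below is our own illustration, not the exact pipeline used):

    import numpy as np

    # Optically thin column densities scale as 1/mu^2, hence the ~6x
    # downward revision of N(C4H) when moving from 0.87 D to 2.10 D:
    print((2.10 / 0.87) ** 2)  # ~5.8

    def ratio_error(ratio, frac_anion, frac_neutral, cal=0.10):
        # Remove the common 10% calibration error in quadrature before
        # propagating the column density errors into the ratio.
        fa = np.sqrt(max(frac_anion**2 - cal**2, 0.0))
        fn = np.sqrt(max(frac_neutral**2 - cal**2, 0.0))
        return ratio * np.sqrt(fa**2 + fn**2)

    print(ratio_error(0.030, 0.15, 0.15))  # a 3.0% ratio from 15% columns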
§ DISCUSSION
Having at hand a fairly complete observational picture of negative ions in the interstellar medium, as summarized in Table <ref>, it is interesting to examine what lessons can be learned from it. There are at least two interesting aspects to discuss. First, how do the anion-to-neutral abundance ratios behave from one source to another, and can the observed variations be related to some property of the cloud? And second, within a given source, how do the anion-to-neutral abundance ratios vary for the different anions, and can this be related to the formation mechanism of anions?
Regarding the first point, since C_6H^- is the most widely observed anion, it is convenient to focus on it to investigate the source-to-source behavior of negative ions. The detection of C_6H^- in L1527 and the higher C_6H^-/C_6H ratio derived in that source compared to that in TMC-1 CP led <cit.> to suggest that this was a consequence of the higher H_2 density in L1527 compared to TMC-1 CP. This point was later revisited by <cit.> with a larger number of sources detected in C_6H^-. These authors found a trend in which the C_6H^-/C_6H ratio increases with increasing H_2 density and further argued that this ratio increases as the cloud evolves from quiescent to star-forming, with ratios below 3 % in quiescent sources and above that value in star-forming ones.
There are theoretical grounds that support a relationship between the C_6H^-/C_6H ratio and the H_2 density. Assuming that the formation of anions is dominated by radiative electron attachment to the neutral counterpart and that they are mostly destroyed through reaction with H atoms, as expected for the conditions of cold dense clouds <cit.>, it can be easily shown that at steady state the anion-to-neutral abundance ratio is proportional to the abundance ratio between electrons and H atoms, which in turn is proportional to the square root of the H_2 volume density (e.g., <cit.>). That is,
C_6H^-/C_6H ∝ e^-/H ∝ n(H_2)^1/2.
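The scaling in Eq. (<ref>) follows from a sketch of the standard steady-state argument: if formation by radiative electron attachment (rate coefficient k_REA) is balanced by destruction in reactions with H atoms (rate coefficient k_H), and the electron density is set by cosmic-ray ionization (rate ζ) balanced by recombination (coefficient α), then

k_REA n(C_6H) n(e^-) = k_H n(C_6H^-) n(H) ⟹ C_6H^-/C_6H = (k_REA/k_H) n(e^-)/n(H), with n(e^-) ≃ [ζ n(H_2)/α]^1/2,

while n(H) in cold dark clouds is roughly independent of density, which yields the n(H_2)^1/2 dependence.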
In Fig. <ref> we plot the observed C_6H^-/C_6H ratio as a function of the H_2 density for the 12 clouds where this anion has been detected. This is an extended and updated version of Figure 5 of <cit.>, on which we superimpose the theoretical trend expected according to Eq. (<ref>). In general terms, the situation depicted by Fig. <ref> is not that different from that found by <cit.>. The main difference concerns L1495B, for which we derive a higher C_6H^-/C_6H ratio, 3.0 % instead of 1.4 %. Our value should be more accurate, given the larger number of lines used here. Apart from that, the C_6H^-/C_6H ratio tends to be higher in those sources with higher H_2 densities, which tend to be more evolved. This behavior is similar to that found by <cit.>. The data points in Fig. <ref> seem to be consistent with the theoretical expectation. We caution, however, that there is substantial dispersion in the data points. Moreover, the uncertainties in the anion-to-neutral ratios, together with those affecting the H_2 densities (not shown), make it difficult to reach a solid conclusion on whether or not the observations follow the theoretical expectations. If we restrict ourselves to the five best characterized sources (TMC-1 CP, Lupus-1A, L1527, L483, and L1495B), all of them observed in C_6H^- through four or more lines and studied in H_2 density in a coherent way, then the picture is that all sources, regardless of their H_2 density, have similar C_6H^-/C_6H ratios, with the exception of L1527, which remains the only data point supporting the theoretical relation between anion-to-neutral ratio and H_2 density. It is also worth noting that when looking at C_4H^-, L1527 also shows an enhanced anion-to-neutral ratio compared to TMC-1 CP, Lupus-1A, and L483. Further detections of C_6H^- in sources with high H_2 densities, preferably above 10^5 cm^-3, should help to shed light on the suspected relation between anion-to-neutral ratio and H_2 density. This may not be easy, however, because chemical models predict that, although the C_6H^-/C_6H ratio increases with increasing H_2 density, an increase in the density also brings a decrease in the column density of both C_6H and C_6H^- <cit.>.
The second aspect worth discussing is the variation of the anion-to-neutral ratio for different anions within a given source. Unlike the former source-to-source case, where variations were small (a factor of two at most), here anion-to-neutral ratios vary by orders of magnitude, i.e., well above the uncertainties. Figure <ref> summarizes the observational situation of interstellar anions in terms of abundances relative to their neutral counterparts. The variation of the anion-to-neutral ratios across different anions is best appreciated in TMC-1 CP and Lupus-1A, which stand out as the two most prolific sources of interstellar anions. The lowest anion-to-neutral ratio is reached by far for C_4H^-, while the highest values are found for C_5N^- and C_8H^-. We caution that the C_5N^-/C_5N ratio could have been overestimated if the true dipole moment of C_5N is a mixture between those of the ^2Σ and ^2Π states, as discussed by <cit.>, in a case similar to that studied for C_4H by <cit.>. For the large anion C_7N^-, the anion-to-neutral ratio is not known in TMC-1 CP but it is probably large, as suggested by the detection of the lines of the anion and the non-detection of the lines of the neutral <cit.>. In the case of the even larger anion C_10H^-, the anion is found to be more abundant than the neutral in TMC-1 CP by a factor of two, although this result probably carries an important uncertainty since the detection is done by line stacking <cit.>. Moreover, it is yet to be confirmed that the species identified is C_10H^- and not C_9N^- <cit.>. In any case, a solid conclusion from the TMC-1 CP and Lupus-1A data shown in Fig. <ref> is that, when looking at either the hydrocarbon series of anions or the nitrile series, the anion-to-neutral ratio clearly increases with increasing size. The most straightforward interpretation of this behavior is related to the formation mechanism originally proposed by <cit.>, which relies on radiative electron attachment (REA) to the neutral counterpart and for which the rate coefficient is predicted to increase markedly with increasing molecular size.
If electron attachment is the dominant formation mechanism of anions and destruction rates are similar for all anions, we expect the anion-to-neutral abundance ratio to be proportional to the rate coefficient of radiative electron attachment. That is,
A^-/A∝k_REA,
where A^- and A are the anion and its corresponding neutral counterpart, respectively, and k_ REA is the rate coefficient for radiative electron attachment to A.
To get insight into this relation we plot in Fig. <ref> the rate coefficients calculated for the reactions of electron attachment forming the different anions, on a scale designed to visualize whether observed anion-to-neutral ratios scale with calculated electron attachment rates. We arbitrarily choose C_6H^- as the reference for the discussion. If we first focus on the largest anion, C_8H^-, we see that the C_8H^-/C_8H ratios are systematically higher, by a factor of 2-3, than the C_6H^-/C_6H ones, while <cit.> calculate identical electron attachment rates for C_6H and C_8H. Similarly, the C_5N^-/C_5N ratios are higher, by a factor of 6-8, than the C_6H^-/C_6H ratios, while the electron attachment rate calculated for C_5N is twice that computed for C_6H in the theoretical scenario of <cit.>. That is, for the large anions C_8H^- and C_5N^- there is a deviation of a factor of 2-4 from the theoretical expectation given by Eq. (<ref>). This deviation is small given the various sources of uncertainty in both the observed anion-to-neutral ratios (mainly due to uncertainties in the dipole moments) and the calculated electron attachment rate coefficients. The situation is different for the medium-sized anions C_4H^- and C_3N^-. In the case of C_4H^-, anion-to-neutral ratios are ∼ 100 times lower than for C_6H^-, while the electron attachment rate calculated for C_4H is just ∼ 6 times lower than that computed for C_6H. The deviation from Eq. (<ref>) of a factor of ∼ 20, which is significant, is most likely due to the electron attachment rate calculated for C_4H by <cit.> being too large. In the case of C_3N^-, the observed anion-to-neutral ratios are 4-6 times lower than those derived for C_6H^-, while the electron attachment rate calculated by <cit.> for C_3N is 300 times lower than that computed for C_6H by <cit.>. Here the deviation is as large as two orders of magnitude, and it is probably caused by a too low calculated electron attachment rate for C_3N. In summary, calculated electron attachment rates are consistent with observed anion-to-neutral ratios for the large species but not for the medium-sized species C_4H and C_3N, for which the calculated rates are too large by a factor of ∼ 20 and too small by a factor of ∼ 100, respectively.
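The deviation factors discussed above reduce to simple arithmetic on the relative values quoted in the text; a sketch:

    # Relative anion-to-neutral ratios (observed) versus relative electron
    # attachment rates (calculated), both normalized to C6H^-/C6H, using
    # the representative factors quoted in the text:
    obs_rel  = {"C8H-": 2.5, "C5N-": 7.0, "C4H-": 1.0 / 100, "C3N-": 1.0 / 5}
    krea_rel = {"C8H-": 1.0, "C5N-": 2.0, "C4H-": 1.0 / 6,   "C3N-": 1.0 / 300}
    for anion in obs_rel:
        print(anion, obs_rel[anion] / krea_rel[anion])
    # C8H-: 2.5 and C5N-: 3.5 (mild deviations); C4H-: 0.06 (k_REA too
    # large by ~20x); C3N-: 60 (k_REA too small by 1-2 orders of magnitude)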
Of course, the above conclusion holds in the scenario of anion formation dominated by electron attachment with similar destruction rates for all anions, which may not be strictly valid. For example, it has been argued <cit.> that the process of radiative electron attachment is much less efficient than calculated by <cit.>, with rate coefficients that are too small to sustain the formation of anions in interstellar space. <cit.> discuss this point, distinguishing between direct and indirect radiative electron attachment: for long carbon chains the direct process would be slow, corresponding to the rates calculated by <cit.>, while the indirect process could be fast if a long-lived superexcited anion is formed, something that has some experimental support. <cit.> conclude that there are enough grounds to support rapid electron attachment to large carbon chains, as calculated by <cit.>. The formation mechanism of anions through electron attachment is very selective toward large species and thus has the advantage of naturally explaining the marked dependence of anion-to-neutral ratios on molecular size illustrated in Fig. <ref>, something that would be difficult to explain through other formation mechanisms. Indeed, mechanisms such as dissociative electron attachment to metastable isomers such as HNC_3 and H_2C_6 <cit.> or reactions of H^- with polyynes and cyanopolyynes <cit.> could contribute to some extent, but they are unlikely to control the formation of anions since they can hardly explain why large anions are far more abundant than small ones.
§ CONCLUSIONS
We reported new detections of molecular anions in cold dense clouds and considerably expanded the number of lines through which negative ions are detected in interstellar clouds. The most prevalent anion remains C_6H^-, which to date has been seen in 12 interstellar clouds, while the rest of the interstellar anions have been observed in just 1-4 sources.
We carried out excitation calculations, which indicate that subthermal excitation is common for the lines of interstellar anions observed with radiotelescopes, with the low frequency lines of heavy anions being the easiest to thermalize. Important discrepancies between calculations and observations are found for the radical C_6H, which suggests that the collision rate coefficients currently available for this species need to be revisited.
We analyzed all the observational data acquired here and in previous studies through non-LTE LVG calculations and rotation diagrams to constrain the column density of each anion in each source. Differences in the anion-to-neutral abundance ratios with respect to literature values are small, less than 50 % in general and up to a factor of two in a few cases. The highest difference is found for the C_4H^-/C_4H ratio, which is shifted upward with respect to previous values due to the adoption of a higher dipole moment for the radical C_4H.
The observational picture of interstellar anions brought by this study shows two interesting results. On the one hand, the C_6H^-/C_6H ratio seems to be higher in clouds with a higher H_2 density, which is usually associated with a later evolutionary status of the cloud, although the error bars make it difficult to clearly establish this trend. On the other hand, there is a very marked dependence of the anion-to-neutral ratio on the size of the anion, which is in line with the formation scenario involving radiative electron attachment, the theory of which must still be revised for medium-sized species such as C_4H and C_3N.
We acknowledge funding support from Spanish Ministerio de Ciencia e Innovación through grants PID2019-106110GB-I00, PID2019-107115GB-C21, and PID2019-106235GB-I00.
[Agúndez et al.(2008)]Agundez2008 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2008, , 478, L19
[Agúndez et al.(2010)]Agundez2010 Agúndez, M., Cernicharo, J., Guélin, M., et al. 2010, , 517, L2
[Agúndez et al.(2015)]Agundez2015 Agúndez, M., Cernicharo, J., & Guélin, M. 2015, , 577, L5
[Agúndez et al.(2019)]Agundez2019 Agúndez, M., Marcelino, N., Cernicharo, J., et al. 2019, , 625, A147
[Agúndez et al.(2022)]Agundez2022 Agúndez, M., Marcelino, N., Cabezas, C., et al. 2022, , 657, A96
[Agúndez et al.(2023)]Agundez2023 Agúndez, M., Roncero, O., Marcelino, N., et al. 2023, , in press
[Alexander(1982)]Alexander1982 Alexander, M. H. 1982, , 76, 5974
[Alexander et al.(1986)]Alexander1986 Alexander, M. H., Smedley, J. E., & Corey, G. C. 1986, , 84, 3049
[Anglada et al.(1997)]Anglada1997 Anglada, G., Sepúlveda, I., & Gómez, J. F. 1997, , 121, 255
[Bacmann et al.(2002)]Bacmann2002 Bacmann, A., Lefloch, B., Ceccarelli, C., et al. 2002, , 389, L6
[Balança et al.(2021)]Balanca2021 Balança, C., Quintas-Sánchez, E., Dawes, R., et al. 2021, , 508, 1148
[Biswas et al.(2023)]Biswas2023 Biswas, R., Giri, K., González-Sánchez, L. et al. 2023, , 522, 5775
[Blanksby et al.(2001)]Blanksby2001 Blanksby, S. J., McAnoy, A. M., Dua, S., & Bowie, J. H. 2001, , 328, 89
[Bop et al.(2021)]Bop2021 Bop, C. T., Lique, F., Faure, A., et al. 2021, , 501, 1911
[Bop et al.(2022)]Bop2022 Bop, C. T., Desrousseaux, B., & Lique, F. 2022, , 662, A102
[Botschwina et al.(1995)]Botschwina1995 Botschwina, P., Seeger, S., Mladenovic, M., et al. 1995, , 14, 169
[Botschwina(2000)]Botschwina2000 Botschwina, P. 2000, 55th Ohio Symposium on Molecular Spectroscopy, TC06
[Botschwina & Oswald(2008)]Botschwina2008 Botschwina, P. & Oswald, R. 2008, , 129, 044305
[Brünken et al.(2007a)]Brunken2007a Brünken, S., Gupta, H., Gottlieb, C. A., et al. 2007a, , 664, L43
[Brünken et al.(2007b)]Brunken2007b Brünken, S., Gottlieb, C. A., Gupta, H., et al. 2007b, , 464, L33
[Cabezas et al.(2021)]Cabezas2021 Cabezas, C., Agúndez, M., Marcelino, N., et al. 2021, , 654, A45
[Cabezas et al.(2022)]Cabezas2022 Cabezas, C., Agúndez, M., Marcelino, N., et al. 2022, , 657, L4
[Carelli et al.(2013)]Carelli2013 Carelli, F., Satta, M., Grassi, T., & Gianturco, F. A. 2013, , 774, 97
[Cernicharo et al.(2007)]Cernicharo2007 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2007, , 467, L37
[Cernicharo et al.(2008)]Cernicharo2008 Cernicharo, J., Guélin, M., Agúndez, M., et al. 2008, , 688, L83
[Cernicharo et al.(2012)]Cernicharo2012 Cernicharo, J., Marcelino, N., Roueff, E., et al. 2012, , 759, L43
[Cernicharo et al.(2020)]Cernicharo2020 Cernicharo, J., Marcelino, N., Pardo, J. R., et al. 2020, , 641, L9
[Cernicharo et al.(2021)]Cernicharo2021 Cernicharo, J., Agúndez, M., Kaiser, R. I., et al. 2021, , 652, L9
[Cernicharo et al.(2023a)]Cernicharo2023a Cernicharo, J., Pardo, J. R., Cabezas, C., et al. 2023a, , 670, L19
[Cernicharo et al.(2023b)]Cernicharo2023b Cernicharo, J., Tercero, B., Marcelino, N., et al. 2023b, , submitted
[Codella et al.(1997)]Codella1997 Codella, C., Welser, R., Henkel, C., et al. 1997, , 324, 203
[Cordiner et al.(2011)]Cordiner2011 Cordiner, M. A., Charnley, S. B., Buckle, J. V., et al. 2011, , 730, L18
[Cordiner & Charnley(2012)]Cordiner2012 Cordiner, M. A. & Charnley, S. B. 2012, , 749, 120
[Cordiner et al.(2013)]Cordiner2013 Cordiner, M. A., Buckle, J. V., Wirström, E. S., et al. 2013, , 770, 48
[Crapsi et al.(2005)]Crapsi2005 Crapsi, A., Caselli, P., Walmsley, C. M., et al. 2005, , 619, 379
[Douguet et al.(2015)]Douguet2015 Douguet, N., Fonseca dos Santos, S., Raoult, M., et al. 2015, , 142, 234309
[Dumouchel et al.(2012)]Dumouchel2012 Dumouchel, F., Spielfiedel, A., Senent, M. L., & Feautrier, N. 2012, , 533, 6
[Dumouchel et al.(2023)]Dumouchel2023 Dumouchel, F., Quintas-Sánchez, E., Balança, C., et al. 2023, , 158, 164307
[Faure et al.(2016)]Faure2016 Faure, A., Lique, A., & Wiesenfeld, L. 2016, , 460, 2103
[Fehér et al.(2016)]Feher2016 Fehér, O., Tóth, L. V., Ward-Thompson, D., et al. 2016, , 590, A75
[Flower et al.(2006)]Flower2006 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2006, , 449, 621
[Flower et al.(2007)]Flower2007 Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2007, , 474, 923
[Forer et al.(2023)]Forer2023 Forer, J., Kokoouline, V., & Stoecklin, T. 2023, , 107, 043117
[Fossé et al.(2001)]Fosse2001 Fossé, D., Cernicharo, J., Gerin, M., & Cox, P. 2001, , 552, 168
[Franz et al.(2020)]Franz2020 Franz, J., Mant, B. P., González-Sánchez, L., et al. 2020, , 152, 234303
[Frayer et al.(2018)]Frayer2018 Frayer, D. T., Ghigo, F., & Maddalena, R. J. 2018, GBT Memo #301
[Gianturco et al.(2016)]Gianturco2016 Gianturco, F. A., Satta, M., Mendolicchio, M., et al. 2016, , 830, 2
[Gianturco et al.(2019)]Gianturco2019 Gianturco, F. A., González-Sánchez, L., Mant, B. P., & Wester, R. 2019, , 151, 144304
[González-Sánchez et al.(2020)]Gonzalez-Sanchez2020 González-Sánchez, L., Mant, B. P., Wester, R., & Gianturco, F. A. 2020, , 897, 75
[Gottlieb et al.(2007)]Gottlieb2007 Gottlieb, C. A., Brünken, S., McCarthy, M. C., & Thaddeus, P. 2007, , 126, 191101
[Gupta et al.(2007)]Gupta2007 Gupta, H., Brünken, S., Tamassia, F., et al. 2007, , 655, L57
[Gupta et al.(2009)]Gupta2009 Gupta, H., Gottlieb, C. A., McCarthy, M. C., & Thaddeus, P. 2009, , 691, 1494
[Harada & Herbst(2008)]Harada2008 Harada, N. & Herbst, E. 2008, , 685, 272
[Herbst(1981)]Herbst1981 Herbst, E. 1981, , 289, 656
[Herbst & Osamura(2008)]Herbst2008 Herbst, E. & Osamura, Y. 2008, , 679, 1670
[Jiménez-Serra et al.(2016)]Jimenez-Serra2016 Jiménez-Serra, I., Vasyunin, A. I., Caselli, P., et al. 2016, , 830, L6
[Jørgensen et al.(2002)]Jorgensen2002 Jørgensen, J. K., Schöier, F. L., & van Dishoeck, E. F. 2002, , 389, 908
[Khamesian et al.(2016)]Khamesian2016 Khamesian, M., Douguet, N., Fonseca dos Santos, S., et al. 2016, , 117, 123001
[Kłos & Lique(2011)]Klos2011 Kłos, J. & Lique, F. 2011, , 418, 271
[Kołos et al.(2008)]Kolos2008 Kołos, R., Gronowski, M., & Botschwina, P. 2008, , 128, 154305
[Lara-Moreno et al.(2017)]Lara-Moreno2017 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2017, , 467, 4174
[Lara-Moreno et al.(2019)]Lara-Moreno2019 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2019, , 486, 414
[Lara-Moreno et al.(2021)]Lara-Moreno2021 Lara-Moreno, M., Stoecklin, T., & Halvick, P. 2021, , 507, 4086
[McCarthy et al.(1995)]McCarthy1995 McCarthy, M. C., Gottlieb, C. A., Thaddeus, P., et al. 1995, , 103, 7820
[McCarthy et al.(2006)]McCarthy2006 McCarthy, M. C., Gottlieb, C. A., Gupta, H., & Thaddeus, P. 2006, , 652, L141
[Marcelino et al.(2007)]Marcelino2007 Marcelino, N., Cernicharo, J., Agúndez, M., et al. 2007, , 665, L127
[Martínez et al.(2010)]Martinez2010 Martínez Jr., O., Yang, Z., Demarais, N. J., et al. 2010, , 720, 173
[Millar et al.(2017)]Millar2017 Millar, T. J., Walsh, C., & Field, T. A. 2017, , 117, 1765
[Murakami et al.(2022)]Murakami2022 Murakami, T., Iida, R., Hashimoto, Y., et al. 2022, , 126, 9244
[Oyama et al.(2020)]Oyama2020 Oyama, T., Ozaki, H., Sumiyoshi, Y., et al. 2020, , 890, 39
[Pardo et al.(2023)]Pardo2023 Pardo, J. R., Cabezas, C., Agúndez, M., et al. 2023, , submitted
[Petrie & Herbst(1997)]Petrie1997 Petrie, S. & Herbst, E. 1997, , 491, 210
[Punanova et al.(2018)]Punanova2018 Punanova, A., Caselli, P., Feng, S., et al. 2018, , 855, 112
[Remijan et al.(2007)]Remijan2007 Remijan, A. J., Hollis, J. M., Lovas, F. J., et al. 2007, , 664, L47
[Remijan et al.(2023)]Remijan2023 Remijan, A., Scolati, H. N., Burkhardt, A. M., et al. 2023, , 944, L45
[Sakai et al.(2007)]Sakai2007 Sakai, N., Sakai, T., Osamura, Y., & Yamamoto, S. 2007, , 667, L65
[Sakai et al.(2008)]Sakai2008 Sakai, N., Sakai, T., Hirota, T., & Yamamoto, S. 2008, , 672, 371
[Sakai et al.(2010)]Sakai2010 Sakai, N., Shiino, T., Hirota, T., et al. 2010, , 718, L49
[Senent et al.(2019)]Senent2019 Senent, M. L., Dayou, F., Dumouchel, F., et al. 2019, , 486, 422
[Spezzano et al.(2017)]Spezzano2017 Spezzano, S., Caselli, P., Bizzocchi, L., et al. 2017, , 606, A82
[Suzuki et al.(1992)]Suzuki1992 Suzuki, H., Yamamoto, S., Ohishi, M., et al. 1992, , 392, 551
[Tafalla et al.(2002)]Tafalla2002 Tafalla, M., Myers, P. C., Caselli, P., et al. 2002, , 569, 815
[Tchakoua et al.(2018)]Tchakoua2018 Tchakoua, T., Motapon, O., & Nsangou, M. 2018, , 51, 045202
[Tercero et al.(2021)]Tercero2021 Tercero, F., López-Pérez, J. A., Gallego, J. D., et al. 2021, , 645, A37
[Thaddeus et al.(2008)]Thaddeus2008 Thaddeus, P., Gottlieb, C. A., Gupta, H., et al. 2008, , 677, 1132
[Toumi et al.(2021)]Toumi2021 Toumi, I., Yazidi, O., & Najar, F. 2021, , 11, 13579
[Vastel et al.(2018)]Vastel2018 Vastel, C., Quénard, D., Le Gal, R., et al. 2018, , 478, 5514
[Visser et al.(2002)]Visser2002 Visser, A. E., Richer, J. S., & Chandler, C. J. 2002, , 124, 2756
[Vuitton et al.(2009)]Vuitton2009 Vuitton, V., Lavvas, P., Yelle, R. V., et al. 2009, , 57, 1558
[Walker et al.(2016)]Walker2016 Walker, K. M., Dumouchel, F., Lique, F., & Dawes, R. 2016, , 145, 024314
[Walker et al.(2017)]Walker2017 Walker, K. M., Lique, F., Dumouchel, F., & Dawes, R. 2017, , 466, 831
[Walker et al.(2018)]Walker2018 Walker, K. M., Lique, F., & Dawes, R. 2018, , 473, 1407
[Walsh et al.(2009)]Walsh2009 Walsh, C., Harada, N., Herbst, E., Millar, T. J. 2009, , 700, 752
[Woon(1995)]Woon1995 Woon, D. E. 1995, , 244, 45
[Yoshida et al.(2019)]Yoshida2019 Yoshida, K., Sakai, N., Nishimura, Y., et al. 2019, , 71, S18
§ SUPPLEMENTARY TABLE
Observed line parameters of molecular anions in interstellar clouds.
Species   Transition   Frequency (MHz)   V_LSR (km s^-1)   Δ v (km s^-1)   T_A^* peak (mK) ^a   ∫ T_A^* dv (mK km s^-1) ^a   Telescope   Reference
TMC-1 CP
C_6H^- 4-3 11014.896 +5.80(2) 0.38(4) 25(3) 10.1(33) GBT <cit.>
5-4 13768.614 +5.80(11) 0.44(7) 24(3) 11.2(43) GBT <cit.>
10-9 / 11-10 27537.130 / 30290.813 41.6(90) ^b, c GBT <cit.>
12-11 33044.488 +5.78(1) 0.73(1) 22.3(23) 17.4(18) Yebes 40m This work
13-12 35798.153 +5.78(1) 0.70(1) 20.9(22) 15.5(17) Yebes 40m This work
14-13 38551.808 +5.78(1) 0.64(2) 18.9(20) 12.8(14) Yebes 40m This work
15-14 41305.453 +5.79(2) 0.56(3) 17.2(19) 10.3(12) Yebes 40m This work
16-15 44059.085 +5.79(2) 0.57(3) 12.8(15) 7.7(10) Yebes 40m This work
17-16 46812.706 +5.81(2) 0.59(4) 9.6(13) 6.0(8) Yebes 40m This work
18-17 49566.313 +5.84(3) 0.56(5) 5.4(10) 3.2(5) Yebes 40m This work
C_4H^- 2-1 18619.761 +5.70(5) 0.43(13) 1.0(3) ^b, d GBT <cit.>
4-3 37239.410 +5.81(2) 0.71(2) 6.0(7) 4.5(6) Yebes 40m This work
5-4 46549.156 +5.81(2) 0.55(3) 5.8(8) 3.4(4) Yebes 40m This work
C_8H^- 11-10 12833.460 +5.71(5) 0.36(4) 8(1) 3.1(10) GBT <cit.>
12-11 14000.134 +5.86(5) 0.37(4) 7(1) 2.8(10) GBT <cit.>
13-12 15166.806 +5.84(6) 0.45(4) 6(1) 2.9(10) GBT <cit.>
16-15 18666.814 +5.80(7) 0.34(5) 10(2) 3.6(16) GBT <cit.>
27-26 31500.029 +5.82(4) 0.63(10) 1.28(28) 0.86(20) Yebes 40m This work
28-27 32666.670 +5.76(3) 0.76(6) 1.08(26) 0.87(15) Yebes 40m This work
29-28 33833.309 +5.90(12) 0.68(17) 0.78(19) 0.56(18) Yebes 40m This work
30-29 34999.944 +5.86(6) 0.60(10) 0.87(20) 0.56(14) Yebes 40m This work
31-30 36166.576 +5.83(8) 0.32(20) 1.01(24) 0.34(10) Yebes 40m This work
32-31 37333.205 +5.73(5) 0.66(11) 0.87(23) 0.61(16) Yebes 40m This work
33-32 38499.831 +5.81(9) 0.82(17) 0.68(20) 0.60(18) Yebes 40m This work
34-33 39666.453 +5.93(10) 0.40(12) 0.44(21) 0.19(7) ^e Yebes 40m This work
C_3N^- 4-3 38812.797 +5.78(1) 0.88(2) 4.2(2) 3.9(5) Yebes 40m This work
5-4 48515.872 +5.86(2) 0.61(4) 6.3(9) 4.1(6) Yebes 40m This work
8-7 77624.540 +5.88(3) 0.52(8) 7.1(17) 3.9(9) IRAM 30m This work
10-9 97029.687 +5.77(4) 0.38(6) 2.7(8) 1.1(3) IRAM 30m This work
C_5N^- 12-11 33332.570 +5.83(1) 0.71(3) 6.5(7) 4.9(6) Yebes 40m This work
13-12 36110.238 +5.80(1) 0.64(2) 6.1(7) 4.1(5) Yebes 40m This work
14-13 38887.896 +5.81(1) 0.63(2) 6.5(8) 4.4(5) Yebes 40m This work
15-14 41665.541 +5.82(2) 0.58(2) 5.7(7) 3.5(5) Yebes 40m This work
16-15 44443.173 +5.79(2) 0.56(2) 4.7(6) 2.8(4) Yebes 40m This work
17-16 47220.793 +5.81(2) 0.50(4) 3.6(6) 1.9(3) Yebes 40m This work
Lupus-1A
C_6H^- 7-6 19276.037 +5.046(8) 0.16(2) 85(8) ^b 14(2) ^b GBT <cit.>
8-7 22029.741 +5.034(10) 0.17(2) 94(11) ^b 15(3) ^b GBT <cit.>
12-11 33044.488 +5.06(2) 0.59(3) 30.1(37) 18.9(24) Yebes 40m This work
13-12 35798.153 +5.08(2) 0.51(3) 32.9(40) 17.8(25) Yebes 40m This work
14-13 38551.808 +5.05(2) 0.48(4) 30.4(38) 15.7(20) Yebes 40m This work
15-14 41305.453 +5.09(3) 0.40(7) 32.7(42) 13.8(19) Yebes 40m This work
16-15 44059.085 +5.07(3) 0.55(6) 24.2(35) 14.2(22) Yebes 40m This work
17-16 46812.706 +5.10(6) 0.51(8) 17.1(33) 9.3(18) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.078(13) 0.34(3) 59(5) ^b 19(5) ^b GBT <cit.>
4-3 37239.410 +5.04(4) 0.78(7) 7.4(14) 6.1(11) Yebes 40m This work
5-4 46549.156 +5.05(9) 0.45(12) 9.8(27) 4.7(13) Yebes 40m This work
9-8 83787.297 +5.23(6) 0.47(12) 10.4(31) 5.3(13) IRAM 30m This work
C_8H^- 16-15 18666.814 2*{ 2*+5.014(11) 2*0.09(3) 2*35(9) 2*4(1) ^b, c 2*} 2*GBT 2*<cit.>
18-17 21000.145
C_3N^- 4-3 38812.797 +5.16(15) 0.96(15) 2.8(10) 2.8(9) Yebes 40m This work
C_5N^- 12-11 33332.570 +5.11(7) 0.50(9) 8.4(16) 4.4(10) Yebes 40m This work
13-12 36110.238 +5.11(7) 0.44(9) 6.5(13) 3.1(7) Yebes 40m This work
14-13 38887.896 +5.13(7) 0.64(8) 8.0(17) 5.4(11) Yebes 40m This work
15-14 41665.541 +5.14(9) 0.37(10) 9.2(19) 3.7(9) Yebes 40m This work
16-15 44443.173 +5.09(10) 0.58(15) 6.1(18) 3.8(11) Yebes 40m This work
L1527
C_6H^- 7-6 19276.037 +5.93(9) 0.45(11) 14(3) ^b 7(2) ^b GBT <cit.>
8-7 22029.741 +5.89(3) 0.49(10) 26(4) ^b 18(4) ^b GBT <cit.>
12-11 33044.488 +5.90(5) 0.85(10) 9.6(14) 8.6(16) Yebes 40m This work
13-12 35798.153 +5.85(4) 0.60(4) 11.4(20) 7.3(18) Yebes 40m This work
14-13 38551.808 +5.84(3) 0.61(5) 12.0(18) 7.8(12) Yebes 40m This work
15-14 41305.453 +5.90(3) 0.60(4) 16.4(25) 10.4(19) Yebes 40m This work
16-15 44059.085 +5.90(3) 0.52(4) 14.5(23) 8.0(16) Yebes 40m This work
17-16 46812.706 +5.83(5) 0.58(8) 11.1(23) 6.8(14) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.92(12) 0.80(20) 3.2(10) 2.7(7) Yebes 40m This work
5-4 46549.156 +6.05(15) 0.73(15) 4.9(19) 3.8(13) Yebes 40m This work
9-8 83787.297 +5.80(3) 0.62(9) 13(2) 8(1) IRAM 30m <cit.>
10-9 93096.550 +5.90(4) 0.59(9) 11(2) 7(1) IRAM 30m <cit.>
L483
C_6H^- 12-11 33044.488 +5.38(6) 0.66(8) 4.9(11) 3.4(8) Yebes 40m This work
13-12 35798.153 +5.33(5) 0.70(7) 5.8(10) 4.3(8) Yebes 40m This work
14-13 38551.808 +5.33(5) 0.78(7) 5.2(9) 4.3(9) Yebes 40m This work
15-14 41305.453 +5.29(6) 0.46(9) 5.3(12) 2.6(6) Yebes 40m This work
16-15 44059.085 +5.24(10) 0.75(12) 4.8(12) 3.8(10) Yebes 40m This work
17-16 46812.706 +5.34(7) 0.63(9) 5.0(14) 3.4(9) Yebes 40m This work
C_4H^- 4-3 37239.410 +5.39(8) 0.73(12) 2.8(7) 2.2(5) Yebes 40m This work
5-4 46549.156 +5.37(10) 0.44(15) 2.7(12) 1.3(5) ^e Yebes 40m This work
L1495B
C_6H^- 10-9 / 11-10 27537.130 / 30290.813 9.6(20) ^b, c GBT <cit.>
12-11 33044.488 +7.66(5) 0.80(7) 5.9(12) 5.0(9) Yebes 40m This work
13-12 35798.153 +7.65(5) 0.50(8) 5.8(12) 3.1(6) Yebes 40m This work
14-13 38551.808 +7.58(7) 0.39(10) 4.3(11) 1.8(4) Yebes 40m This work
15-14 41305.453 +7.66(10) 0.36(14) 6.6(16) 2.6(6) Yebes 40m This work
16-15 44059.085 +7.61(8) 0.49(12) 4.1(11) 2.1(6) Yebes 40m This work
L1544
C_6H^- 7-6 19276.037 ^e +7.08(3) / +7.30(3) 0.16(3) / 0.13(3) 16(2) / 26(2) 6.0(18) GBT <cit.> (two velocity components)
12-11 33044.488 +7.11(13) 0.67(28) 4.5(16) 3.2(14) Yebes 40m This work
13-12 35798.153 +7.04(10) 0.48(16) 4.1(12) 2.1(9) Yebes 40m This work
14-13 38551.808 +6.98(8) 0.50(13) 6.0(16) 3.2(12) Yebes 40m This work
15-14 41305.453 +7.34(18) 0.76(36) 4.6(15) 3.7(16) Yebes 40m This work
L1521F
C_6H^- 7-6 19276.037 ^e +6.33(5) 0.18(3) 17(2) 7.0(17) GBT <cit.>
+6.64(5) 0.35(9) 9(2)
L1251A
C_6H^- 10-9 27537.130 6.5(17) ^b, c GBT <cit.>
11-10 30290.813
L1512
C_6H^- 10-9 27537.130 4.3(8) ^b, c GBT <cit.>
11-10 30290.813
L1172
C_6H^- 10-9 27537.130 6.7(15) ^b, c GBT <cit.>
11-10 30290.813
L1389
C_6H^- 10-9 27537.130 5.9(14) ^b, c GBT <cit.>
11-10 30290.813
TMC-1 C
C_6H^- 10-9 27537.130 13.6(25) ^b, c GBT <cit.>
11-10 30290.813
^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff, where B_ eff is the main beam efficiency and F_ eff is the telescope forward efficiency. For the Yebes 40m telescope in the Q band B_ eff = 0.797 exp[-(ν(GHz)/71.1)^2] and F_ eff = 0.97 (), for the IRAM 30m telescope B_ eff = 0.871 exp[-(ν(GHz)/359)^2] and F_ eff = 0.95 (), and for the GBT telescope we adopt F_ eff = 1.0 and B_ eff = 1.32 × 0.71 exp[-(ν(GHz)/103.7)^2] <cit.>. The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Average of two lines. ^d Line neglected in the analysis. Intensity should be ∼ 3 times larger to be consistent with the other lines.
^e Line detected marginally.
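For convenience, the T_A^* → T_mb conversion described in note (a) can be scripted directly. The short Python sketch below encodes only the efficiency curves quoted in the note; the input frequency and temperature are arbitrary example values.

```python
import numpy as np

def Ta_to_Tmb(Ta, freq_GHz, telescope):
    """Convert antenna temperature T_A* to main-beam temperature,
    T_mb = T_A* / (B_eff / F_eff), with the efficiency curves of note (a)."""
    if telescope == "Yebes40m":
        B_eff, F_eff = 0.797 * np.exp(-(freq_GHz / 71.1) ** 2), 0.97
    elif telescope == "IRAM30m":
        B_eff, F_eff = 0.871 * np.exp(-(freq_GHz / 359.0) ** 2), 0.95
    elif telescope == "GBT":
        B_eff, F_eff = 1.32 * 0.71 * np.exp(-(freq_GHz / 103.7) ** 2), 1.0
    else:
        raise ValueError(f"unknown telescope: {telescope}")
    return Ta * F_eff / B_eff

# Arbitrary example: a 5.7 mK antenna temperature at 41.7 GHz with the Yebes 40m
print(f"T_mb = {Ta_to_Tmb(5.7, 41.7, 'Yebes40m'):.1f} mK")
```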
Observed velocity-integrated line intensities of neutral counterparts of molecular anions in interstellar clouds.
Species Transition Frequency (MHz) ∫ T_A^* dv (mK km s^-1) ^a Telescope Reference
TMC-1 CP
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 133(24) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 112(22) ^b GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 332.4(420) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 175.6(176) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 173.5(175) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 158.9(160) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 158.5(160) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 141.5(180) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 141.1(175) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 119.3(149) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 118.6(147) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 93.4(106) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 93.3(106) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 73.0(98) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 73.4(99) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 52.6(73) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 52.2(70) Yebes 40m This work
C_4H N=2-1 J=3/2-1/2 19054.476 411.3(418) ^b GBT <cit.>
N=4-3 J=9/2-7/2 38049.654 1369(138) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 1007(102) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 1094(111) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 864(87) Yebes 40m This work
N=9-8 J=19/2-17/2 85634.010 417(53) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 386(49) IRAM 30m <cit.>
N=10-9 J=21/2-19/2 95150.393 251(26) IRAM 30m This work
N=10-9 J=19/2-17/2 95188.947 243(26) IRAM 30m This work
N=11-10 J=23/2-21/2 104666.568 111(12) IRAM 30m This work
N=11-10 J=21/2-19/2 104705.108 105(13) IRAM 30m This work
N=12-11 J=25/2-23/2 114182.523 60(8) IRAM 30m This work
N=12-11 J=23/2-21/2 114221.023 47(6) IRAM 30m This work
C_8H ^2Π_3/2 J=53/2-51/2 a 31093.035 6.0(7) Yebes 40m This work
^2Π_3/2 J=53/2-51/2 b 31093.415 4.4(6) Yebes 40m This work
^2Π_3/2 J=55/2-53/2 a 32266.325 4.3(6) Yebes 40m This work
^2Π_3/2 J=55/2-53/2 b 32266.735 4.2(6) Yebes 40m This work
^2Π_3/2 J=57/2-55/2 a 33439.612 3.5(5) Yebes 40m This work
^2Π_3/2 J=57/2-55/2 b 33440.052 3.4(6) Yebes 40m This work
^2Π_3/2 J=59/2-57/2 b 34613.367 2.7(3) Yebes 40m This work
^2Π_3/2 J=61/2-59/2 a 35786.176 3.0(4) Yebes 40m This work
^2Π_3/2 J=61/2-59/2 b 35786.679 2.4(3) Yebes 40m This work
^2Π_3/2 J=63/2-61/2 a 36959.452 2.3(3) Yebes 40m This work
^2Π_3/2 J=63/2-61/2 b 36959.989 2.2(3) Yebes 40m This work
^2Π_3/2 J=65/2-63/2 a 38132.725 1.7(2) Yebes 40m This work
^2Π_3/2 J=65/2-63/2 b 38133.297 1.5(2) Yebes 40m This work
^2Π_3/2 J=67/2-65/2 a 39305.995 1.4(2) Yebes 40m This work
^2Π_3/2 J=67/2-65/2 b 39306.602 1.4(2) Yebes 40m This work
^2Π_3/2 J=69/2-67/2 a 40479.260 1.2(2) Yebes 40m This work
^2Π_3/2 J=69/2-67/2 b 40479.904 1.2(2) Yebes 40m This work
^2Π_3/2 J=71/2-69/2 a 41652.522 0.8(1) Yebes 40m This work
^2Π_3/2 J=71/2-69/2 b 41653.203 0.9(1) Yebes 40m This work
^2Π_3/2 J=73/2-71/2 a 42825.779 0.7(1) Yebes 40m This work
^2Π_3/2 J=73/2-71/2 b 42826.499 0.7(1) Yebes 40m This work
C_3N N=4-3 J=9/2-7/2 39571.347 332(34) Yebes 40m This work
N=4-3 J=7/2-5/2 39590.181 240(25) Yebes 40m This work
N=5-4 J=11/2-9/2 49466.421 244(25) Yebes 40m This work
N=5-4 J=9/2-7/2 49485.224 198(20) Yebes 40m This work
N=9-8 J=19/2-17/2 89045.583 64.2(73) IRAM 30m This work
N=9-8 J=17/2-15/2 89064.347 58.6(68) IRAM 30m This work
N=10-9 J=21/2-19/2 98940.087 28.1(36) IRAM 30m This work
N=10-9 J=19/2-17/2 98958.770 22.7(30) IRAM 30m This work
N=11-10 J=23/2-21/2 108834.254 11.6(24) IRAM 30m This work
N=11-10 J=21/2-19/2 108853.012 21.2(35) IRAM 30m This work
C_5N N=12-11 J=25/2-23/2 33668.234 5.6(7) Yebes 40m This work
N=12-11 J=23/2-21/2 33678.966 5.9(7) Yebes 40m This work
N=13-12 J=27/2-25/2 36474.308 5.8(7) Yebes 40m This work
N=13-12 J=25/2-23/2 36485.042 5.5(7) Yebes 40m This work
N=14-13 J=29/2-27/2 39280.369 5.1(7) Yebes 40m This work
N=14-13 J=27/2-25/2 39291.105 5.0(7) Yebes 40m This work
N=15-14 J=31/2-29/2 42086.415 4.7(6) Yebes 40m This work
N=15-14 J=29/2-27/2 42097.151 4.4(6) Yebes 40m This work
N=16-15 J=33/2-31/2 44892.444 4.6(6) Yebes 40m This work
N=16-15 J=31/2-29/2 44903.182 4.4(6) Yebes 40m This work
N=17-16 J=35/2-33/2 47698.457 3.7(5) Yebes 40m This work
N=17-16 J=33/2-31/2 47709.196 3.4(5) Yebes 40m This work
Lupus-1A
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 114(14) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 131(16) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 150.3(166) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 153.1(163) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 151.6(161) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 150.0(159) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 140.3(143) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 141.0(148) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 126.2(134) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 124.8(130) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 115.5(123) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 114.9(123) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 90.7(125) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 91.3(128) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 73.6(109) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 66.9(103) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 1219(123) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 921(94) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 1123(114) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 846(86) Yebes 40m This work
N=8-7 J=17/2-15/2 76117.439 1124(114) IRAM 30m This work
N=8-7 J=15/2-13/2 76156.028 1024(104) IRAM 30m This work
N=9-8 J=19/2-17/2 85634.010 779(83) IRAM 30m This work
N=9-8 J=17/2-15/2 85672.580 730(77) IRAM 30m This work
N=11-10 J=23/2-21/2 104666.568 349(39) IRAM 30m This work
N=11-10 J=21/2-19/2 104705.108 334(38) IRAM 30m This work
C_8H ^2Π_3/2 J=33/2-31/2 a 19359.975 10(2) ^b GBT <cit.>
^2Π_3/2 J=33/2-31/2 b 19360.123 9(2) ^b GBT <cit.>
C_3N N=4-3 J=9/2-7/2 39571.347 251(30) Yebes 40m This work
N=4-3 J=7/2-5/2 39590.181 175(19) Yebes 40m This work
N=5-4 J=11/2-9/2 49466.421 177(19) Yebes 40m This work
N=5-4 J=9/2-7/2 49485.224 138(15) Yebes 40m This work
N=9-8 J=19/2-17/2 89045.583 141.5(150) IRAM 30m This work
N=9-8 J=17/2-15/2 89064.347 126.7(136) IRAM 30m This work
N=10-9 J=21/2-19/2 98940.087 74.6(83) IRAM 30m This work
N=10-9 J=19/2-17/2 98958.770 66.0(74) IRAM 30m This work
C_5N N=12-11 J=25/2-23/2 33668.234 4.5(12) Yebes 40m This work
N=12-11 J=23/2-21/2 33678.966 7.0(14) Yebes 40m This work
N=13-12 J=27/2-25/2 36474.308 4.8(11) Yebes 40m This work
N=13-12 J=25/2-23/2 36485.042 5.7(11) Yebes 40m This work
N=14-13 J=29/2-27/2 39280.369 7.8(24) Yebes 40m This work
N=14-13 J=27/2-25/2 39291.105 5.7(15) Yebes 40m This work
N=15-14 J=31/2-29/2 42086.415 4.1(9) Yebes 40m This work
N=15-14 J=29/2-27/2 42097.151 4.8(11) Yebes 40m This work
N=16-15 J=33/2-31/2 44892.444 3.2(9) Yebes 40m This work
N=16-15 J=31/2-29/2 44903.182 1.8(8) ^d Yebes 40m This work
L1527
C_6H ^2Π_3/2 J=15/2-13/2 a 20792.907 24(5) ^b GBT <cit.>
^2Π_3/2 J=15/2-13/2 b 20794.475 21(5) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 34.8(75) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 26.0(59) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 29.3(34) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 31.8(37) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 31.7(46) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 32.2(51) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 32.7(50) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 32.3(48) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 30.2(47) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 31.1(49) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 30.5(48) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 31.3(49) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 27.3(48) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 26.9(47) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 388(39) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 295(30) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 434(44) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 347(35) Yebes 40m This work
N=9-8 J=19/2-17/2 85634.010 747(86) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 712(82) IRAM 30m <cit.>
N=11-10 J=23/2-21/2 104666.568 542(64) IRAM 30m <cit.>
N=11-10 J=21/2-19/2 104705.108 487(59) IRAM 30m <cit.>
N=12-11 J=25/2-23/2 114182.523 462(59) IRAM 30m <cit.>
N=12-11 J=23/2-21/2 114221.023 406(53) IRAM 30m <cit.>
L483
C_6H ^2Π_3/2 J=23/2-21/2 a 31881.860 29.4(34) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 31.0(36) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 28.4(32) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 27.7(31) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 26.2(29) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 26.2(30) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 24.4(28) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 23.2(27) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 19.7(23) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 20.4(24) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 13.6(22) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 14.2(21) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 13.0(23) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 13.7(24) Yebes 40m This work
C_4H N=4-3 J=9/2-7/2 38049.654 470(48) Yebes 40m This work
N=4-3 J=7/2-5/2 38088.461 356(36) Yebes 40m This work
N=5-4 J=11/2-9/2 47566.792 439(50) Yebes 40m This work
N=5-4 J=9/2-7/2 47605.496 352(36) Yebes 40m This work
N=8-7 J=17/2-15/2 76117.439 375(38) IRAM 30m This work
N=8-7 J=15/2-13/2 76156.028 337(35) IRAM 30m This work
N=9-8 J=19/2-17/2 85634.010 272(27) IRAM 30m <cit.>
N=9-8 J=17/2-15/2 85672.580 249(24) IRAM 30m <cit.>
N=10-9 J=21/2-19/2 95150.393 157(15) IRAM 30m <cit.>
N=10-9 J=19/2-17/2 95188.947 147(14) IRAM 30m <cit.>
N=11-10 J=23/2-21/2 104666.568 110(10) IRAM 30m <cit.>
N=11-10 J=21/2-19/2 104705.108 100(9) IRAM 30m <cit.>
N=12-11 J=25/2-23/2 114182.523 64(6) IRAM 30m <cit.>
N=12-11 J=23/2-21/2 114221.023 64(6) IRAM 30m <cit.>
L1495B
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 55(10) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 55(10) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 141.6(164) ^b GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 51.9(59) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 47.9(53) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 46.8(52) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 45.4(51) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 42.8(49) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 a 40198.323 36.2(42) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 37.7(42) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 33.3(40) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 33.6(40) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 a 45742.519 24.7(38) Yebes 40m This work
^2Π_3/2 J=33/2-31/2 b 45750.052 24.0(35) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 a 48514.584 19.4(32) Yebes 40m This work
^2Π_3/2 J=35/2-33/2 b 48523.044 18.6(33) Yebes 40m This work
L1544
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 51(11) GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 50(11) GBT <cit.>
^2Π_3/2 J=23/2-21/2 a 31881.860 23.8(36) Yebes 40m This work
^2Π_3/2 J=23/2-21/2 b 31885.541 30.0(44) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 a 34654.037 25.7(39) Yebes 40m This work
^2Π_3/2 J=25/2-23/2 b 34658.383 31.6(48) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 a 37426.192 23.3(36) Yebes 40m This work
^2Π_3/2 J=27/2-25/2 b 37431.255 19.9(34) Yebes 40m This work
^2Π_3/2 J=29/2-27/2 b 40204.157 18.0(31) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 a 42970.432 13.6(26) Yebes 40m This work
^2Π_3/2 J=31/2-29/2 b 42977.089 12.1(23) Yebes 40m This work
L1521F
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 36(10) GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 26(9) GBT <cit.>
L1251A
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 36(8) GBT <cit.>
^2Π_3/2 J=21/2-19/2 b 29112.730 35(8) GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 43.6(65) ^b GBT <cit.>
L1512
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 20(7) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 20(7) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 27(5) GBT <cit.>
^2Π_3/2 J=21/2-19/2 b 29112.730 28(5) GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 26.3(35) ^b GBT <cit.>
L1172
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 41.1(57) ^b GBT <cit.>
L1389
C_6H ^2Π_3/2 J=13/2-11/2 a 18020.606 10(6) ^c GBT <cit.>
^2Π_3/2 J=13/2-11/2 b 18021.783 10(6) ^c GBT <cit.>
^2Π_3/2 J=21/2-19/2 a 29109.658 27.1(40) ^b GBT <cit.>
TMC-1 C
C_6H ^2Π_3/2 J=21/2-19/2 a 29109.658 88.1(105) ^b GBT <cit.>
^a Unless otherwise stated, the intensity scale is antenna temperature (T_A^*). It can be converted to main beam brightness temperature (T_ mb) by dividing by B_ eff/F_ eff (see caption of Table <ref>). The error in ∫ T_A^* dv includes the contributions from the Gaussian fit and from calibration (assumed to be 10 %). ^b Intensity scale is T_ mb. ^c Intensity distributed equally among the two fine components. ^d Marginal detection.
|
http://arxiv.org/abs/2307.04717v1 | 20230710172826 | Biomass dust explosions: CFD simulations and venting experiments in a 1 m$^3$ silo | [
"A. Islas",
"A. Rodríguez-Fernández",
"C. Betegón",
"E. Martínez-Pañeda",
"A. Pandal"
] | physics.flu-dyn | [
"physics.flu-dyn"
] |
Biomass dust explosions: CFD simulations and venting experiments in a 1 m3 silo

Alain Islas^1, Andrés Rodríguez Fernández^1, Covadonga Betegón^2, Emilio Martínez-Pañeda^3, Adrián Pandal^1,* (ORCID 0000-0001-6006-2199)

^1 Department of Energy, University of Oviedo - 33203 Gijón, Asturias, Spain
^2 Department of Construction and Manufacturing Engineering, University of Oviedo - 33203 Gijón, Asturias, Spain
^3 Department of Civil and Environmental Engineering, Imperial College London - London, SW7 2AZ, United Kingdom
^* Corresponding author: [email protected]
This study presents CFD simulations of biomass dust explosions in a newly developed experimental 1 m3 silo apparatus with variable venting, designed and fabricated to operate similarly to the explosivity test standards. The aim of the study is to validate a CFD model under development and investigate its capability to capture the transient effects of a vented explosion. The model is based on OpenFOAM and solves the multiphase (gas-particle) flow using an Eulerian-Lagrangian approach in a two-way regime. It considers the detailed thermochemical conversion of biomass, including moisture evaporation, devolatilization, and char oxidation, along with the homogeneous combustion of gases, turbulence, and radiative heat transfer. The explosion is analyzed in all stages, i.e., dust cloud dispersion, ignition, closed explosion, and vented explosion. The results indicate excellent agreement between the CFD model and experimental tests throughout the sequence. Our findings highlight the critical role of particle size in dust cloud distribution and pre-ignition turbulence, which significantly influences flame dynamics and the explosion itself. This model shows great promise and encourages its application for future investigations of biomass dust explosions in larger-scale geometries, especially in venting situations that fall out of the scope of the NFPA 68 or EN 14491 standards, and to help design effective safety measures to prevent such incidents.
Keywords: Vented dust explosions; Biomass; CFD; OpenFOAM
§ INTRODUCTION
As global efforts to achieve net zero emissions by 2050 continue, the demand for bio-energy has increased, making biomass combined heat & power (CHP) an attractive option for greenhouse gas (GHG) abatement due to its CO2 neutral characteristics <cit.> and potential to become carbon negative if combined with carbon capture and storage (CCS) <cit.>. From biomass co-firing to dedicated routes, long-term fuel delivery concepts and contracts are essential for the successful operation of biomass power plants <cit.>. However, the supply and availability of feedstock must be carefully considered to ensure continuous and stable power generation <cit.>. Unfortunately, experience has shown that dust explosions are a potential hazard that must be addressed through the implementation of necessary precautions to guarantee safe and reliable plant operation, particularly during fuel handling and storing phases <cit.>.
Dust explosions are a significant threat in power plants and other industrial facilities, endangering worker safety and causing property damage <cit.>. These explosions can occur in a range of equipment, from silos and mills to conveyors and dust collection systems, and can be triggered by various sources, including hot surfaces, electrical sparks, and self-heating processes <cit.>. While prevention and inherent safety measures are the primary means of reducing the hazards of dust explosions <cit.>, it is often necessary to implement operational and dynamic risk assessments to better comprehend the probability of occurrence and potential severity of dust explosions, and to identify mitigation solutions such as venting panels <cit.>. However, determining the appropriate vent size remains a controversial issue <cit.>, despite the existence of established standards, e.g., the EN 14491 or the NFPA 68 codes <cit.>.
Likewise, as newly built plants scale up, more cost-efficient, high-volume storage solutions are needed to secure continuing plant operation. Although mammoth silos may seem an attractive option <cit.>, they often fall outside the scope of these standards, highlighting the need for further research.
To address these challenges, besides the traditional dust explosion testing activities <cit.>, modeling research <cit.> has emerged as an alternative to predict the consequences of dust explosions with reduced labor and capital. Especially, computational fluid dynamics (CFD) simulations can play a meaningful role in assessing risk analysis, providing a more nuanced understanding of explosion development and designing mitigation systems beyond the simplified scenarios considered by guidelines and standards. These tools have been successfully applied to the study of various aspects related to dust explosions, including dust cloud formation <cit.> and the determination of the explosion severity parameters <cit.>. However, the practical application of CFD codes to large industrial settings requires a pragmatic approach that involves a compromise with accuracy and precision, and initial validation through repeated small-scale experiments is necessary <cit.>.
In this paper, we aim to contribute to safety engineering and consequence analysis by presenting the next step in our efforts to develop a reliable computational tool for simulating dust explosions in industrial equipment. Specifically, we validate the performance of our previous CFD model <cit.> by revisiting it and conducting experiments on dust explosion venting. To do this, we designed a purpose-built silo with a capacity of 1 m3 that features adjustable venting, and we used it to perform biomass dust explosions. We based our test procedure on the EN 14034 <cit.> and ASTM E1226 <cit.> standards, and we constructed the silo based on the design references for standardized test vessels, including the 20L Siwek sphere and 1 m3 ISO chamber.
The purpose of this study is to gather experimental data and use it to validate our CFD model's ability to capture the transient behavior that occurs during the different stages of a dust explosion. These stages include: (1) dust cloud dispersion, (2) ignition, (3) pressure development and flame propagation, and (4) pressure relief. Our end goals for this research are two-fold: (1) to enhance the potential of CFD codes to accurately simulate biomass dust explosions and (2) to improve the accuracy of vent sizing calculations to reduce the risk of dust explosions in industrial settings.
§ MATERIALS AND METHODS
§.§ Experimental setup
A 1 m3, pressure-resistant silo with adjustable venting was designed in partnership with PHB Weserhütte S.A. and the R&D center IDONIAL in Asturias, Spain. The silo was manufactured in carbon steel ASME SA516-GR70, resistant up to a 5 bar g overpressure, and its dimensions were scaled down from a typical large-scale silo design (>10,000 m3). The bottom is beveled to imitate a hopper design and the roof is cone-shaped. The vent openings consist of 8 hinged hatches (198x185 mm), distributed equiangularly on the roof, see Fig. <ref>. The venting area varies between 0.036 and 0.293 m2, representing up to a total venting efficiency of 30.64%. The opening is regulated by polymer bolts specifically designed to withstand a 570 mbar g overpressure. The sizing of these bolts was calculated based on the tensile strength of the material, which required drilling the threaded shank to obtain the appropriate cross-section.
To create the dust cloud inside the test vessel, a system of pressurized air injection was employed to disperse the dust sample into the silo. This system consists of a dust canister with a volume of 5 L and a length-to-diameter ratio L/D=3.6. Its lower part is cone-shaped to facilitate dust outflow. As in the standards, the canister is pressurized up to 20 bar g, but the 1 m3 silo is vacuumed to -0.125 bar g prior to the start of the dispersion process. This condition is important to ensure that the normal pressure at the start of the deflagration test is exactly 0.0 bar g. The air discharge is controlled by a Nordair® U150 electropneumatic valve with an ATEX II 2GD actuator. The ignition delay time t_d was set to 600 ms in all the experiments, matching the value used for the tests in the 1 m3 ISO chamber <cit.>.
To favor the radial spread of the dust, a new nozzle was designed. Specifically, an axisymmetric version of the traditional rebound nozzle <cit.> was manufactured and installed at the bottom of the silo.
The dust cloud is ignited by means of 2x5 kJ Sobbe® chemical igniters placed right above the dispersion nozzle and upheld by two slender rods. The igniters are fired oppositely at an angle of 45^∘ with respect to the horizontal. The pressure reading is recorded by two pressure transmitters Siemens® Sitrans P320 positioned at the top and on opposite extremes of the cylindrical walls. The control and data acquisition system consists of a programmable logic controller (PLC) Siemens® Simatic HMI and a videocamera. All the tests were conducted in the experimental test site of Applus+ TST® (Tunnel Safety Testing, S.A.) in Asturias, Spain.
§.§ Dust sample
The aim of this research is to study dust explosions with a representative sample found in industrial processes that handle pellets. The test sample is a commercial biomass from a local pellet manufacturer in Asturias, Spain, and is composed of natural wood sub-products (sawdust, wood chips and debarked wood). The commercial pellets were received in a 15 kg bag format, in which the percentage of fine particulates (d<1 mm) was less than 1%, see Fig. <ref>. As only a few grams could be used for the explosion tests, the pellets were ground in a gently-rotating ball mill to generate additional combustible dust.
The particle size distribution (PSD) of both the raw fine particulates in the bag and the post-milling samples was determined by Sieve analysis. The cumulative distributions and other size statistics are shown in Fig. <ref>.
As noted, the PSD generated by pellet milling contains slightly more fine particle diameters than the PSD of the raw dust. However, the difference is small as indicated by the polydispersity index. The polydispersity index σ_D is a measure of the breadth in a size distribution <cit.> and is calculated as
σ_D=D_90-D_10/D_50
where the median D_50 and the D_10 and D_90 values represent the 50%, 10% and 90% points in the cumulative undersize PSD, respectively. A polydispersity index σ_D ≪ 1 indicates a high homogeneity in particle size or a narrow PSD, while σ_D ≫ 1 represents a heterogeneous particle size or a broad PSD <cit.>. Considering that in industrial applications the dust particles can vary largely in size, the polydispersity indices of both the original and ground PSDs are comparable. Moreover, the two PSDs cover the same order of magnitude and the medians differ by ∼5%. So, for practical purposes the former size distribution is considered to be representative of typical transporting, handling, and stacking activities of pellets.
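A minimal sketch of Eq. (<ref>) in Python is given below; the sieve data are hypothetical values chosen only to illustrate how D_10, D_50 and D_90 are interpolated from a cumulative undersize curve, not the measured PSD of the sample.

```python
import numpy as np

def polydispersity_index(diameters_um, cum_undersize_pct):
    """sigma_D = (D90 - D10) / D50 from a cumulative undersize distribution."""
    d10, d50, d90 = np.interp([10.0, 50.0, 90.0], cum_undersize_pct, diameters_um)
    return (d90 - d10) / d50

# Hypothetical sieve data (ascending sizes), for illustration only
sizes = np.array([63.0, 125.0, 250.0, 500.0, 1000.0])  # sieve openings, microns
cum = np.array([5.0, 20.0, 55.0, 85.0, 100.0])         # cumulative % undersize
print(f"sigma_D = {polydispersity_index(sizes, cum):.2f}")
```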
The ultimate and proximate analysis, as well as the lower calorific value (LCV) of the sample were taken from the manufacturer's specifications sheet, see Table <ref>.
§.§ Ignition
When the ignition energy is too strong relative to the chamber size, it can cause a significant increase in pressure <cit.>. Several studies have shown that using a 10 kJ ignition source in a small volume such as the 20L Siwek sphere leads to an overdriving effect that cannot be ignored <cit.>. In closed vessel testing, the overdriving effect can lead to an overestimation of the explosion severity parameters <cit.>, particularly the deflagration index K_st and the minimum explosive concentration (MEC).
Each of the 5kJ Sobbe® chemical igniters is charged with 1.2 g of a pyrotechnic powder mixture of 40 wt.% zirconium, 30 wt.% barium nitrate, and 30 wt.% barium peroxide <cit.>. They are activated electrically with internal fuse wires. To determine whether an overdriving effect takes place in the 1 m3 silo, we conducted a blank test experiment. The pressure-time trace developed by the 2x5 kJ igniters alone was measured in the closed silo, free of any combustible dust. Fig. <ref> presents a comparison between the resulting overpressures in the 1 m3 silo and a 1 m3 ISO chamber from the literature <cit.>.
Clearly, there is jump-like behavior during the first moments after the igniters are triggered. This is because the igniters deliver their energy in very short times (∼ 10 ms) <cit.>. The maximum overpressure is registered as approximately 30 mbar g and matches reasonably well the same time-dependency as the experiment in the 1 m3 ISO chamber. Moreover, the 30 mbar g overpressure is consistent with the values reported in other studies <cit.>. According to data collected from blank test experiments in the 20L Siwek sphere, the pressure increase due to 10kJ igniters can vary between 0.8 and 1.6 bar <cit.>. Therefore, when compared to the overpressure in any 1 m3 volume, the overdriving effect can be safely regarded as negligible.
§ GAS AND PARTICLE PHASE MODELING
In this work, the vented biomass dust explosions were simulated in the 1 m3 silo by employing our customized version of OpenFOAM's coalChemistryFoam code <cit.>.
This CFD code is a transient solver of two-phase (gas-solid) flow suitable to model compressible flow with turbulence, combustion, chemical reactions and radiative heat transfer. The solver uses an Eulerian-Lagrangian method to solve the particle-laden flow within a two-way coupling regime. Source terms are computed to represent the exchange of
mass, momentum, energy and chemical species between the two phases. The Lagrangian framework allows for a detailed analysis of biomass burning, including modeling of sensible heating and thermochemical conversion of biomass. To reduce the computational burden, physical particles are replaced with computational parcels, which group together particles with similar properties and whose extensive properties are scaled by a number density.
§.§ Gas phase governing equations
In CFD simulations of dust explosions, the Reynolds-Averaged Navier Stokes (RANS) closure is often used. The gas phase governing equations consist of the Reynolds-averaged mass, momentum, energy and species transport equations. The mass transport is
∂ρ̅/∂t + ∂/∂x_i(ρ̅ ũ_i)= Γ_i
where the overbar denotes that the scalar is Reynolds-averaged and the tilde denotes density-weighted time averaged or Favre-averaged. As reacting particles can exchange mass with the gas phase, the source term Γ_i is included in Eq. (<ref>) to account for the fluid/particle interaction. The momentum transport equations are
∂/∂t(ρ̅ ũ_i) + ∂/∂x_j(ρ̅ ũ_i ũ_j) = -∂p̅/∂x_i + ∂τ̅^ij/∂x_j + ∂/∂x_j(-ρ̅ u_i^' u_j^') + ρ̅ g_i + Λ_i
where the Reynolds stress term is calculated using the Boussinesq hypothesis -ρ̅ u_i^' u_j^' = 2μ_t S_ij - (2/3) ρ̅ k δ_ij. The standard k-ε turbulence model is used to determine the eddy viscosity μ_t=ρ̅C_μk^2/ε, where k is the turbulent kinetic energy and ε is the turbulence dissipation rate. k and ε are modeled using the following transport equations
∂/∂t(ρ̅ k) + ∂/∂x_i(ρ̅ ũ_i k) = ∂/∂x_i[(μ + μ_t/σ_k) ∂k/∂x_i] + P_k - ρ̅ ε
∂/∂t(ρ̅ ε) + ∂/∂x_i(ρ̅ ũ_i ε) = ∂/∂x_i[(μ + μ_t/σ_ε) ∂ε/∂x_i] + C_ε1 (ε/k) P_k - C_ε2 ρ̅ ε^2/k
Again, a source term Λ_i is included in Eq. (<ref>) to represent the momentum exchange due to particles. The enthalpy transport equation is
∂/∂t(ρ̅ h) + ∂/∂x_i(ρ̅ ũ_i h) = D p̅/D t - ∂q̅_i/∂x_i + τ^ij ∂u_i/∂x_j + Θ_i
where Θ_i is a source term that accounts for the combined effect of: (1) the homogeneous gas phase reactions, (2) the enthalpy exchange due to the thermochemical conversion of the biomass particles, and (3) the radiative heat transfer. The species transport equation is
∂/∂t (ρ̅ Y_k) + ∂/∂x_i (ρ̅ ũ_i Y_k) = ∂/∂x_i(ρ̅ D_k ∂Y_k/∂x_i) + ω̇_k + Φ_k
where Y_k is the mass fraction of species k in the gas mixture and ω̇_k is the chemical reaction rate. The source term Φ_k represents the species released/consumed by the particle devolatilization and char conversion.
§.§ Particle governing equations
In biomass dust explosions, the interaction between the particles and the surrounding medium is through mass and momentum exchange and heat transfer. In the CFD model, each biomass particle is a reactive multi-phase entity, whose content of liquid, gaseous and solid matter is based on the proximate analysis. The mass conservation for each particle is written as
dm_p/dt=ṁ_moisture+ṁ_volatiles+ṁ_char
where ṁ_moisture, ṁ_volatiles, ṁ_char denote the rate of evaporation, devolatilization, and char oxidation. After all the reactive content is depleted, the biomass is reduced to an inert ash particle. Along its entire thermal history, the particle temperature is obtained from the energy conservation
m_pC_pdT_p/dt = πd_pk_gNu(T_∞-T_p) + dm_p/dtΔH + πd_p^2ε_0σ(θ_R^4-T_p^4)
where dm_p/dt and Δ H denote the rate of mass consumption within a particle and its associated latent heat due to one of the three mechanisms
dm_p/dt =
{
  π d_p D_0 Sh (p_sat,T/(R T_m) - X_w p/(R T_m)) M_w,    evaporation
  -k(T) (m_p - (1 - f_V,0) m_p,0),                       devolatilization
  -π d_p^2 p_o (1/R_diff + 1/R_kin)^-1,                  char oxidation
}
The heat and mass transfer numbers Nu and Sh are found using the Ranz-Marshall correlations for spherical particles <cit.>. Eq. (<ref>) states that biomass combustion can be seen as a three-stage, sequential process: (1) evaporation of moisture, (2) thermal cracking of biomass into light gases, and (3) the heterogeneous conversion of char. The evaporation of moisture consists of the endothermic phase change of liquid water contained within the particle into water vapor that is added to the gas phase. The devolatilization or thermal cracking of biomass is the release of volatile gases that are further combusted in the gaseous phase. Contrary to other solid fuels (e.g., coal), the overall heat release of biomass samples is dominated by the combustion of these gases. For example, the volatile matter in Pellets Asturias represents more than 75% of the total mass, see Table <ref>. The remaining char is burned by the heterogeneous reaction with oxygen.
Depending on various characteristics, e.g., the heating rate, particle residence time or particle temperature, the gas species composition during devolatilization can be quite diverse. For the sake of model simplification, the volatiles are represented as a postulate substance C_xH_yO_z, whose x, y or z subscripts are calculated from the ultimate and proximate analysis. During devolatilization each biomass particle breaks down into the following 4 light gases <cit.>
C_xH_yO_z → ν_1^'' CO + ν_2^'' CO_2 + ν_3^'' CH_4 + ν_4^'' H_2
LCV_VM = ∑_i=1^4 Y_i × ΔH_R,i
where the lower calorific value (LCV) of volatiles VM is found assuming that the LCV of biomass can be split into the combustion of its separate elements <cit.>
LCV_biomass = Y_VM^daf×LCV_VM + Y_FC^daf×LCV_FC
Under these considerations, the postulate volatile substance is C_1.03H_2.13O_0.97 and the stoichiometric coefficients ν_i^'' in Eq. (<ref>) are 0.07, 0.44, 0.51, and 0.03 for CO, CO2, CH4, H2 respectively. In all simulations, these gases are combusted following the 4-step reaction mechanism proposed by Jones & Lindstedt <cit.>.
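The LCV split of Eq. (<ref>) can be illustrated in a few lines; the inputs below are assumed round values (not the Table <ref> data), with the char heating value taken as that of pure carbon.

```python
# Sketch of the LCV split; all inputs are assumed, illustrative values
LCV_biomass = 17.0e6     # J/kg, dry ash-free lower calorific value (assumed)
Y_VM, Y_FC = 0.82, 0.18  # daf mass fractions of volatiles and fixed carbon (assumed)
LCV_char = 32.8e6        # J/kg, heating value of pure carbon

# Rearranged Eq.: LCV_biomass = Y_VM*LCV_VM + Y_FC*LCV_FC
LCV_VM = (LCV_biomass - Y_FC * LCV_char) / Y_VM
print(f"LCV of volatiles = {LCV_VM / 1e6:.2f} MJ/kg")
```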
The kinematics of the particles is governed by Newton's 2nd law
du_p_i/dt=18μ/ρd_p^2C_DRe_p/24(u_i-u_p_i)+g_i(1-ρ/ρ_p)
C_D =
{
  0.424,                            Re_p > 1000
  (24/Re_p)(1 + Re_p^(2/3)/6),      Re_p ≤ 1000
}
where the RHS terms of Eq. (<ref>) represent all the forces acting on the particle, namely drag, gravity and buoyancy. The drag factor is determined by the correlation for spherical particles proposed by Putnam <cit.>, Eq. (<ref>). Moreover, we use a stochastic dispersion approach to include the effect of instantaneous turbulent velocity fluctuations on the particle trajectories. With a given u_p_i the position of the particle is computed by integrating the equation dx_p_i/dt=u_p_i.
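Eqs. (<ref>)-(<ref>) translate directly into code. The sketch below (Python, with illustrative gas and particle properties) integrates the equation of motion of a single particle with an explicit Euler step; it is a minimal sketch, not the solver's actual implementation.

```python
def drag_coefficient(Re_p):
    """Putnam drag correlation for spherical particles."""
    return 0.424 if Re_p > 1000.0 else 24.0 / Re_p * (1.0 + Re_p ** (2.0 / 3.0) / 6.0)

def particle_acceleration(u_gas, u_p, d_p, rho_p, rho_g=1.2, mu=1.8e-5, g=-9.81):
    """RHS of the particle momentum equation: drag plus gravity/buoyancy."""
    Re_p = max(rho_g * abs(u_gas - u_p) * d_p / mu, 1e-12)
    drag = 18.0 * mu / (rho_p * d_p ** 2) * drag_coefficient(Re_p) * Re_p / 24.0
    return drag * (u_gas - u_p) + g * (1.0 - rho_g / rho_p)

# Explicit Euler for one particle in a 10 m/s upward gas stream (illustrative values)
u_p, dt = 0.0, 1.0e-4
for _ in range(1000):  # 0.1 s
    u_p += particle_acceleration(10.0, u_p, d_p=200e-6, rho_p=700.0) * dt
print(f"particle velocity after 0.1 s: {u_p:.2f} m/s")
```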
Our customized solver includes more comprehensive submodels for the simulation of the biomass devolatilization and radiative heat transfer phenomena. Specifically, it uses the BioCPD model <cit.> to determine the devolatilization kinetics of biomass samples at elevated heating rates and uses Mie theory calculations to estimate the radiative properties of particles. Moreover, it uses a dry/wet weighted-sum-of-gray-gases model (WSGGM) to calculate the absorption coefficient of the gaseous mixture <cit.>. For a detailed description of the complete method and the other submodels, the reader is referred to our previous works <cit.>.
§.§ Computational grids
In order to simulate the full range of stages in a dust explosion (including dispersion, explosion, and venting), three separate computational grids were employed, see Fig. <ref>. Mesh 1 corresponded to the complete assembly, encompassing both the dispersion system and the silo itself. Mesh 2, on the other hand, focused solely on the inner region of the silo, excluding the dispersion system. Finally, mesh 3 was created as an exact copy of mesh 2, but with additional cells to represent the far field region. All grids were manually constructed using the ANSYS ICEM meshing software and subsequently converted to OpenFOAM format for simulation purposes.
§.§ Solution strategy
The vented dust explosion was simulated in 3 stages, namely: (1) dust dispersion, (2) explosion in closed silo, and (3) vented explosion.
* Dust dispersion: the dust particles are initially placed in the dust container at stagnant conditions and the pressure field is initialized accordingly (i.e. 0.875 bar a in the silo and 21 bar a in the dust canister). The ensuing pressure gradient drives the particles from the canister to the silo. The dust injection is simulated for an ignition delay time t_d=600 ms. Right afterwards, the case is stopped and all the Eulerian and Lagrangian fields are mapped from mesh 1 to mesh 2.
* Explosion in closed silo: starting from the cold flow solution, the reactive features of the solver (combustion, chemistry, radiation, etc…) are switched on. The ignition mechanism is activated and the dust cloud starts burning. In these simulations, the chemical igniters are again represented by a 10kJ enthalpy source term distributed over a kernel sphere of r=13 cm <cit.> placed right above the axisymmetric rebound nozzle. During the run-time, the instantaneous pressure is monitored each time-step as the weighted-area-average value on the roof surface. The simulation is stopped as soon as the monitor hits the rupture pressure of the polymer bolts, i.e. p_stat = 1570 mbar a (570 mbar g). Next, the Eulerian and Lagrangian fields are mapped from mesh 2 to mesh 3.
* Vented explosion: once mesh 3 is initialized with the latest reactive solution, the boundary condition at the venting areas are switched from walls to interior cells. The number of venting areas is changed based on the venting scenario. For the rest of simulation, the deflagration is allowed to escape to the surroundings and the pressure inside the 1 m3 silo decays.
This approach enabled us to simulate the entire dust explosion using computational meshes tailored to specific purposes. Specifically, grids 2 and 3 are structured and were meshed manually with topologies comprising hexahedral blocks. This helps ensure that all fluxes in the discretized equations, which involve numerous physics, retain high orthogonality and converge correctly. The boundary conditions and initialization settings of each stage of the simulation are provided in Table <ref> and Table <ref>, respectively.
The conservation equations of the Eulerian phase were discretized using first-order upwind schemes and second-order difference schemes for the convective terms and diffusive terms, respectively. The gradients were evaluated using a cell-limited scheme with cubic interpolation. The transient discretization was treated with a first-order Euler scheme with an adaptive time-stepping method to satisfy a Courant-Friedrichs-Lewy (CFL) condition of CFL=1.0. The pressure-velocity coupling was solved by the PIMPLE algorithm with 3 correctors per time step. The flow residuals were set to 10^-8 for continuity/pressure, and to 10^-12 for momentum, energy, species and turbulence equations, respectively.
§ RESULTS AND DISCUSSION
§.§ Dispersion system
The transient behavior of the pressurized air injection can be estimated if we assume that the air is an ideal gas and that the discharge is modeled as a poly-tropic process (p^n=C), see Fig. <ref>. If the pressure, temperature, and density in both the canister and 1 m3 silo are given at t=0, the pressure evolution p_C,t+Δ t in either control volume C 1 or 2 can be found as
p_C,t+Δt=p_C,t(ρ_C,t+Δt/ρ_C,t)^n
where n is the poly-tropic exponent. The density at time t+Δ t is estimated by applying conservation of mass to the corresponding control volume
dρ_C/dt = ± (1/V_C) ρ_0 A_t M_t (γ R T_0)^1/2 (1 + (γ-1)/2 M_t^2)^-(γ+1)/(2(γ-1))
where a positive value represents a charging process and a negative value represents a discharging process, and V_C is the volume of the corresponding vessel. Since the vessels are connected, the mass leaving the canister equals the mass entering the silo. The RHS of Eq. (<ref>) is the mass flow rate through the throat area A_t, accounting for compressibility effects, divided by the vessel volume. The properties at the throat are calculated using isentropic relations and assuming that the canister is at stagnant conditions (T_0=T_c and ρ_0=ρ_c)
T_c/T_t = 1 + (γ-1)/2 M_t^2
ρ_c/ρ_t = (1 + (γ-1)/2 M_t^2)^{1/(γ-1)}
The velocity at the throat V_t is related to the Mach number as V_t=M_t(γ R T_t)^1/2 whose sonic or subsonic behavior is determined by the choked flow condition. Finally, the temperature evolution T_C,t+Δ t in any of the control volumes is predicted using the ideal gas law
T_C,t+Δt = p_C,t+Δt/(ρ_C,t+Δt R)
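The original model was implemented in Matlab; a compact Python re-sketch of Eqs. (<ref>)-(<ref>) is shown below. The throat area is an assumed value, and the explicit march simply switches between choked and subsonic throat conditions before applying the polytropic update in each vessel.

```python
import numpy as np

R, gam, n = 287.0, 1.4, 1.3                  # gas constant, heat-capacity ratio, polytropic exp.
V = {"can": 5.0e-3, "silo": 1.0}             # vessel volumes, m3
A_t = 4.9e-4                                 # throat area, m2 (assumed value)
p = {"can": 21.0e5, "silo": 0.875e5}         # initial absolute pressures, Pa
rho = {k: p[k] / (R * 293.0) for k in p}     # initial densities at 293 K
dt = 1.0e-4
crit = (2.0 / (gam + 1.0)) ** (gam / (gam - 1.0))  # critical (choking) pressure ratio

for _ in range(6000):                        # march over the 600 ms ignition delay time
    pr = min(p["silo"] / p["can"], 1.0)      # back-pressure to stagnation-pressure ratio
    M_t = 1.0 if pr < crit else np.sqrt(2.0 / (gam - 1.0) * (pr ** (-(gam - 1.0) / gam) - 1.0))
    T_can = p["can"] / (rho["can"] * R)      # canister (stagnation) temperature
    mdot = (rho["can"] * A_t * M_t * np.sqrt(gam * R * T_can)
            * (1.0 + 0.5 * (gam - 1.0) * M_t ** 2) ** (-(gam + 1.0) / (2.0 * (gam - 1.0))))
    for k, s in (("can", -1.0), ("silo", +1.0)):
        rho_new = rho[k] + s * mdot * dt / V[k]
        p[k] *= (rho_new / rho[k]) ** n      # polytropic update, p ~ rho^n
        rho[k] = rho_new

print(f"p_can = {p['can'] / 1e5:.2f} bar a, p_silo = {p['silo'] / 1e5:.2f} bar a")
```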
Performance of the dispersion system was checked by
comparing the experimental measurements with the 0D poly-tropic model implemented in Matlab. After iterating over various poly-tropic exponents, the best agreement of the final pressure reading was found for n=1.3. The pressure trends in both the canister and 1 m3 silo are shown together with the model results in Fig. <ref>. Although the ignition delay time was 600 ms, the experimental reading suggests that the pressure in the silo and canister stabilizes around 500 ms. In contrast, the time for stabilizing the pressures calculated by the model varies and is slightly ahead of the experimental data. Such offset can be attributed to various factors: (1) the spatial effects are not resolved (such as the length and curvature of the conduit), (2) friction effects are neglected, and (3) the time delay of the electropneumatic valve is ignored.
As illustrated in the figure, the pressure discharge in the experiment does not commence immediately. Instead, it appears to be delayed by approximately 20 ms before it gradually develops. To account for irrecoverable losses and improve the model prediction, a discharge coefficient C_d could be introduced; however, this is difficult to estimate based on the number of parameters in the experiment.
Despite its simplicity, the poly-tropic model is a useful tool to get reasonable estimates of the behavior of the pressurized air injection. It helps to calculate the desired pressure upon commencing the explosion test and to establish an according ignition delay time. In addition, it can provide other useful information, such as the fact that in our current setting, the flow is highly turbulent for almost half of the air blast. During this period, the Reynolds number is in the order of 𝒪(Re)∼10^6 and the flow velocity becomes subsonic only after approximately 300 ms.
§.§ Non-reactive flow
In dust explosions the particle size is a crucial factor, as it influences not only the dynamic behavior of the dust cloud, but also the reactivity of the fuel. Thus, it is essential to analyze dust dispersion in order to fully comprehend the development of the explosion. Dust dispersion by pressurized air is the most common method for dust explosion testing. In this method, the dispersion nozzle plays a leading role in the whole dispersion system, as it distributes the dust inside the test vessel and regulates the flow pattern and turbulence intensity. In this work, the axisymmetric rebound nozzle produces a recirculating flow pattern characterized by four large vortices that emerge from the nozzle outlet and extend up to the top of the silo. These vortices are distributed symmetrically and rotate in clockwise direction (right vortex) and counter-clockwise direction (left vortex), as depicted by the streamlines in Fig. <ref>. The top row of the figure corresponds to the dust-free flow injection, while the bottom row illustrates the behavior with a nominal dust concentration C_0=500 g m-3.
In all the experiments, the ignition delay time is 600 ms, and throughout the following discussion, all graphs and figures represent this time interval from -600 ms to 0 ms. In the early stage (as shown in Fig. <ref>), the cores of the vortices are located at the top, specifically at a height of 0.6 < y/H < 0.8, which is very close to the corners. When dust is injected, the vortices are still able to form, but their location is altered. The vortices shift downward to a height of approximately y/H = 0.5 for the same time interval. This occurs because the first injected particles have high velocities and collide strongly with the silo roof. As a result, they descend and hinder the vortices from extending all the way to the top.
As the flow progresses and particles are fully injected, the vortices rise vertically and the cores return to their original positions at the corners (refer to Fig. <ref>). Moreover, by the end of the dispersion process, small vortices form at the tip of the roof. Unfortunately, these recirculating flow structures do not facilitate dust cloud mixing, as they behave as stagnant zones, where only air is trapped. This flow pattern bears striking resemblance to that of the 20L Siwek sphere, where Benedetto et al. <cit.> showed that the vortices tended to deposit most of the particles near the vessel walls. However, in the case of the 1 m3 silo, the particles accumulate in the central section of the silo, particularly in a central column.
Fig. <ref> depicts the particle distribution in two half-plane slices, revealing that the cloud is highly concentrated in the region between -0.5 ≤ x/R ≤ 0.5. The concentration in this central column varies locally along the height and is notably highest in the lower zone, where it reaches a maximum of 20 kg m-3 before decreasing to 2.5 kg m-3 in the uppermost part of the column.
When injecting a dust sample with large particle diameters, the particles may behave ballistically and interact very little with the air flow. A convenient way to determine whether the particle trajectories adjust to the air streamlines is by calculating the Stokes number, which relates the particle response time to a characteristic time scale of the fluid and can be used to determine whether the particles are in equilibrium with the air or not. The Stokes number is:
Stk = τ_p/τ_f
τ_p = ρ_pd_p^2/18μ_f24/C_DRe_p
where C_D, Re_p and τ_f are the particle drag coefficient, the particle Reynolds number, and the characteristic fluid time scale τ_f=l_e/|ũ_i|, respectively. In these calculations, l_e is the integral length scale C_μk^3/2/ε, the drag coefficient obeys the correlation shown in Eq. (<ref>) and the particle Reynolds number is
Re_p=|ũ_i-u_p_i|d_p/ν_f
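As a minimal sketch of Eqs. (<ref>)-(<ref>), the snippet below evaluates the Stokes number for a few diameters; the local flow state (k, ε, slip and mean velocities) is assumed for illustration, so only the trend with particle diameter is meaningful.

```python
def stokes_number(d_p, rho_p, u_slip, k, eps, u_mag, rho_g=1.2, mu=1.8e-5, C_mu=0.09):
    """Stk = tau_p / tau_f, with tau_f = l_e / |u| and a Putnam-corrected tau_p."""
    Re_p = max(rho_g * u_slip * d_p / mu, 1e-12)
    C_D = 0.424 if Re_p > 1000.0 else 24.0 / Re_p * (1.0 + Re_p ** (2.0 / 3.0) / 6.0)
    tau_p = rho_p * d_p ** 2 / (18.0 * mu) * 24.0 / (C_D * Re_p)
    tau_f = C_mu * k ** 1.5 / eps / u_mag    # integral length scale over mean velocity
    return tau_p / tau_f

# Assumed local flow state; only the growth of Stk with d_p is the point here
for d in (50e-6, 100e-6, 500e-6):
    stk = stokes_number(d, rho_p=700.0, u_slip=1.0, k=4.0, eps=36.0, u_mag=1.0)
    print(f"d_p = {d * 1e6:4.0f} um -> Stk = {stk:.2f}")
```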
For particles with a Stokes number less than unity, the fluid can alter their trajectories and cause them to closely follow the motion of vortices. However, when the Stokes number of particles is greater than unity, their strong inertia impedes the fluid streamlines from altering their trajectories. Fig. <ref> clearly shows that the cloud has a small radial aperture and very few particles are contained within the vortices.
Fig. <ref> shows that the Stokes number varies logarithmically as a function of particle diameter, where a threshold value of Stk = 1.0 indicates a critical particle diameter of approximately 100 microns. Particles larger than this size display a response time up to two orders of magnitude greater than the fluid time scale, meaning that it takes significantly longer for them to adapt to vortex motion. Based on the particle size distribution depicted in Fig. <ref>, over 90% of biomass dust particles are affected by this condition, as their adaptation time is over 100 times longer than the fluid time scale. Only particles with diameters less than 100 microns are able to interact with the recirculating flow. Furthermore, the figure illustrates that the larger the particle diameter, the shorter its distance from the central dust column. Conversely, as particle diameter decreases, the particles can be dispersed over a wider range of radial distances.
In addition, Fig. <ref> shows that the particle Reynolds number also displays a logarithmic dependence on particle diameter, with particle inertia being up to four orders of magnitude greater than fluid inertia. This confirms that large particles exhibit a ballistic behavior and thus, the dust cloud is unable to mix homogeneously with the air.
Another important aspect of dust explosions is the initial turbulence at which the dust cloud ignites. Turbulence is a function of many aspects, such as the dispersion nozzle, the flow pattern, the dust concentration, the particle size or the ignition delay time. In explosivity tests in standardized vessels, it is well known that an increase in turbulence levels increases the rate of pressure rise. Fig. <ref> shows a comparison of the velocity fluctuations between the air-only case and the dust case.
For the air-only case, a maximum value of u^'_RMS=16 m/s is reached just about 40 ms after the start of the injection, whereafter the turbulence is strictly decreasing until the end of the injection delay time. This happens because the intensity of the pressure gradient decreases steeply and continuously until the pressures of both the canister and the 1 m3 silo stabilize. In addition, the mechanisms of turbulence production such as wall friction and shear layers are not strong enough to counteract the decay of the pressure gradient. On the contrary, for the case with dust injection, there are two periods of turbulence decay. The first of these periods corresponds to the times between -600 and -400 ms, where the velocity fluctuations are smaller than in the case with air injection only. This is because the discharge of particle-laden air obstructs and slows down the flow entering the silo, weakening the baroclinic force and decreasing the incoming velocity. The second period corresponds to the times between -260 and 0 ms, where the velocity fluctuations reach a maximum value of u^'_RMS=8 m/s and are higher than in the case of injection with air only.
The intermediate time between -400 and -260 ms is a period of stabilization and turbulence production. Here, the decay of the first period is counteracted mainly for two reasons. The first is that by this time 70% of the nominal dust concentration has already entered the 1 m3 silo, which frees the flow from local blockages caused by the particles. The second is that the particles already injected create local distortions in the flow, generating self-induced wakes and vortices around the particles. In this period, the velocity fluctuations double, rising from u^'_RMS=4 m/s to 8 m/s.
Interestingly, the turbulence of the particulate flow is consistently higher than that of the air-only flow from -300 to 0 ms, as shown in Fig. <ref>. This period is referred to as "turbulence enhancement". This finding is significant because the intensity of turbulence before dust cloud ignition can considerably impact the rates of heat transfer and chemical reactions during the explosion, a phenomenon known as turbulence modulation. Turbulence modulation is the inertial effect of particles on the flow turbulence relative to non-particle flow. Crowe <cit.> identified some factors that influence turbulence modulation: surface effects, inertial effects, and response effects.
On the one hand, the inertial and response effects are determined by the dimensionless numbers mentioned earlier. Because the particle Reynolds number and Stokes number cover several orders of magnitude above unity, the dispersion of this biomass sample is likely to significantly affect the flow turbulence compared to the flow without particles.
On the other hand, surface effects are determined by the particle diameter normalized by a length scale representative of the flow. As the dispersion time comes to a close, turbulence decreases for both dust-free flow and particle-laden flow. The Kolmogorov scale, which represents the smallest eddies where viscosity is the dominant force and turbulent kinetic energy is dissipated in heat, serves as a characteristic length scale for turbulent eddies. Fig. <ref> shows the change in turbulence intensity versus particle diameter normalized by the Kolmogorov length scale. The percentage change in turbulence intensity is defined as
ΔTurb. Intensity % = (Turb. Int._TP - Turb. Int._SP)/Turb. Int._SP × 100
where the turbulent intensity is calculated based on the hypothesis of isotropic turbulence, Turb. Int=√(2k/3)/|ũ_i| and the subscripts TP and SP refer to the two-phase and single-phase flows, respectively. All values were calculated locally at the particle positions inside the 1 m3 silo.
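Eq. (<ref>) is straightforward to evaluate; in the sketch below the local k and velocity values are hypothetical and serve only to illustrate the sign and magnitude of the metric.

```python
def turb_intensity_change(k_tp, u_tp, k_sp, u_sp):
    """Percentage change of turbulence intensity, two-phase vs single-phase,
    with TI = sqrt(2k/3) / |u| under the isotropic-turbulence hypothesis."""
    ti_tp = (2.0 * k_tp / 3.0) ** 0.5 / u_tp
    ti_sp = (2.0 * k_sp / 3.0) ** 0.5 / u_sp
    return (ti_tp - ti_sp) / ti_sp * 100.0

# Hypothetical local values at one particle position
print(f"Delta TI = {turb_intensity_change(12.0, 4.0, 1.5, 6.0):+.0f} %")
```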
The Gore-Crowe classification identifies the critical value d_p/η = 0.1, which marks the threshold at which higher values of d_p/η will cause an increase in the turbulence intensity of the entrained gas, and lower values will cause a decrease. The figure shows that the particles generally follow this criterion, with most of the data points lying to the right of the threshold value and above the horizontal line representing zero. Notably, particles located closest to the central dust column exhibit the highest d_p/η ratios.
A possible explanation for this phenomenon is that the particles generate turbulence in their wake at the length scale of the smallest eddies, resulting in an increase in the turbulence intensity of the air. In this case, the energy is transferred from the particles to the turbulent kinetic energy of the trailing gas. Therefore, according to the map, when injecting a dust loading of 500 g m-3, the local increase in turbulence intensity can be as high as 4000%.
§.§ Reactive flow
The venting experiments were conducted during the last quarter of year 2022 in the experimental test tunnel of Applus+ TST. A delimiting explosive area was established and the 1 m3 silo was safely installed and anchored to the ground. The tests were carried out under the supervision and support from IDONIAL and Applus+ TST personnel and were operated from a remote control unit. All tests were performed for a concentration of 500 g m-3 with an ignition delay time of 600 ms and an activation energy of 10 kJ. Fig. <ref> shows a qualitative comparison of the flame propagation between the experiment and the simulation during a vented explosion with 1 hatch open (venting percentage of 3.83%).
Upon the rupture of the polymer bolt, the hinged hatch opens violently and lets a jet flame escape perpendicular to the silo roof. The gaseous flame comes out accompanied by burning biomass particles and with elevated inertia. Remarkably, the CFD simulation predicts a flame shape that closely resembles the experimental observation, with a flame temperature close to 1500 K that extinguishes after a few seconds.
Fig. <ref> displays the pressure profile recorded in the 1 m3 silo and canister during the test. The graph illustrates the pressure increase in the silo on the left side and the pressure discharge in the canister on the right side. As noted, the pressure profile predicted by the CFD model agrees very well with the experimental test, both showing a maximum pressure value of 1570 mbar a. This value corresponds to the rupture pressure of the polymer bolts, which triggers the opening of the hinged hatch, allowing the pressure wave and flame to escape from the silo to the surroundings. Shortly after, the pressure inside the silo drops abruptly to atmospheric pressure.
A key metric of the test is the time taken to vent the explosion. The CFD model predicts that the static pressure of the bolts is reached 785 ms after activating the igniters, while the experimental test records a time of 757 ms. The relative error is 3.5%, confirming that the model is in excellent agreement with the experiment, capturing not only the maximum overpressure and flame propagation but also the transient behavior in general.
To quantify the fuel burned during the explosion, we analyzed the consumption of each reactive component in the dust cloud, as shown in Fig. <ref>. The graph is divided into two regions: (1) before the opening of the hatch (p < p_stat) and (2) after the opening of the hatch (p > p_stat). The data in the first region of the graph shows that prior the opening of the hatch, the O2 mass fraction decreased from 0.23 to 0.17. This suggests that the dust cloud burned in small amounts, with only approximately 10% of the volatile gases being released from the particles. Despite this, the combustion of such small amount of gases was enough to create an overpressure of 570 mbar g inside the silo.
In the second region of the graph, it is evident that the other reactive components of the cloud burned slowly, with a slight increase in the consumption rates starting at 1500 ms. However, even after 2 seconds, only 40% of the volatile gases had been released and significant amounts of moisture and char remained in the particles. Based on the proximate analysis of the biomass dust, only 50 g of the available 500 g of mass had been consumed by the time the hatch opened, and only 194 g had been consumed after 2 s of ignition.
Fig. <ref> illustrates the evolution of the dust explosion during all the stages of the experiment. The first 6 contours represent the dust injection phase, which is non-reactive flow. These contours are colored by dust concentration and are spaced every 100 ms until the time 0 is reached. From that point, the temperature contours of the reactive flow are displayed up to 1500 ms.
The flame development begins with the activation of the pyrotechnic igniters, which, as mentioned before, are modeled as a 10 kJ sphere of radius 13 cm placed above the axisymmetric rebound nozzle. These igniters generate the initial flame that induces the dust particles to produce a self-propagating flame kernel. The resulting flame propagates vertically, creating a mushroom-shaped flame. This is primarily due to two factors:
* First, hot gases are drawn upward by the flow pattern and velocity field achieved at the end of the dispersion process. This is evident from Fig. <ref>.
* Second, the flame spreads in the direction where the fuel is present, which is linked to the distribution of the dust cloud during the injection phase. As discussed in the previous section, the large particle size of the dust resulted in a thin central cloud distribution.
Once the flame reaches the uppermost part of the silo, it hits the roof and expands radially, seeking areas where the local equivalence ratio allows for stoichiometric combustion. The mushroom appearance is due to the fact that the local dust concentration is higher near the silo roof than within the vortex cores, where the dust/air mixture is extremely lean, as shown in Fig. <ref>.
Starting at 785 ms, the temperature contours correspond to the vented explosion. The jet flame can be observed escaping perpendicular to the roof and reaching its maximum length and temperature within the next 100 ms. This phenomenon can be attributed to the pressure gradient that propels the flame from the silo to the surrounding atmosphere. Once the pressure in the silo stabilizes with atmospheric pressure, the jet flame partially extinguishes. After 1000 ms, the flame is reignited due to the unburned particles that left the silo and encountered fresh oxygen in the atmosphere. However, the flame weakens and bends down since the velocity magnitude of the jet has decayed significantly.
An additional experimental test was conducted with two open hatches and identical operating conditions as in the previous test. Pressure curves are shown in Fig. <ref>. Again, once the rupture pressure of the bolts is reached, the pressure in the silo decreases rapidly until it reaches atmospheric pressure. However, this time the CFD model is slightly delayed compared to the experimental results, with the static pressure of the bolts being reached at 785 ms in the simulation compared to 600 ms in the test. This can be attributed to an unexpected incident during the experimental test. According to the pressure discharge in the canister, the experimental reading registered a bump right below 5 bar(g) during almost the second half of the ignition delay time. This delay may have occurred due to a dust blockage in the dispersion duct, which slowed the pressure balancing between the two vessels. A weakened pressure gradient may have caused the pressure in the silo to be slightly below atmospheric at the moment of dust cloud ignition.
Igniting the air/dust mixture at pressures below 1 bar may affect the mass transfer rates, particularly the evaporation rates, due to the pressure-dependent nature of water's phase change from the liquid to the vapor state. At pressures below 1 bar, the evaporation point of moisture is reduced, so during the first 100 ms of the experimental test the dust cloud may have burned faster than if it had been ignited at atmospheric pressure. This faster evaporation of moisture may result in the earlier release of volatile gases, advancing the pressure rise curve. Nevertheless, the CFD model predicts the transient effects quite accurately.
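As a rough quantitative check on this argument, the saturation temperature of water at sub-atmospheric pressure can be estimated with the Antoine correlation; the sketch below uses the standard Antoine constants for water (valid roughly between 1 and 100 °C):

```python
import math

# Antoine correlation for water: log10(P[mmHg]) = A - B / (C + T[degC])
A, B, C = 8.07131, 1730.63, 233.426

def saturation_temperature_degC(p_bar):
    """Approximate boiling point of water at pressure p_bar."""
    p_mmHg = p_bar * 750.062
    return B / (A - math.log10(p_mmHg)) - C

print(saturation_temperature_degC(1.013))  # ~100 degC at atmospheric pressure
print(saturation_temperature_degC(0.70))   # ~90 degC: moisture evaporates earlier
```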
Fig. <ref> illustrates the evolution of this experiment. As only the number of open hatches was changed for this test, all contours up to 785 ms are identical to those of the previous explosion. Two flame jets escape perpendicularly to the silo roof, developing their maximum length and temperature within a few milliseconds after opening. The flames partially extinguish after 900 ms and appear again after 1100 ms. The first jet flame is a consequence of the pressure wave that drags the hot gases and particles towards the far field, while the second flame is due to the fact that both the particles and the fresh gases escaping from the silo encounter abundant oxygen in the surroundings. The forming flames attach to the periphery of the hatches and bend downward because of a weakened velocity field. Also, it is interesting to note that the flame inside the silo maintains its mushroom shape and does not propagate into the interior of the vortices. This suggests that the particles never mixed with the recirculating flow pattern and remained distributed in the central column previously studied.
Finally, we identified the particle sizes with the highest reactivity at the time of hatch opening: Fig. <ref> classifies the consumption of each fuel component based on particle diameter. The amount of gases released from the dust cloud is higher than the burned mass of the fixed char, as expected due to the higher volatile matter content in biomass combustion. Furthermore, particles with a diameter close to 300 μm not only reached the highest temperatures, but also exhibited the highest reactivity with respect to devolatilization. While this may seem counter-intuitive, in our previous works <cit.>, we have consistently emphasized that a dust explosion, specifically flame propagation, cannot be attributed solely to isolated factors such as particle size, dust distribution, velocity field, or residence time. Instead, it is the combined effect of all these factors that drives the phenomena. Within our 1 m^3 silo, we have demonstrated the non-uniform dispersion of dust, with particles concentrated in a central column, particularly larger particles (>100 μm) that represent the majority of the particle size distribution (as shown in Fig. <ref>). Additionally, particles smaller than 100 μm exhibit some radial scattering, as depicted in Fig. <ref>. Consequently, it is the larger particles that ignite the air/dust mixture in our specific case, contradicting conventional expectations. This counter-intuitive behavior arises because the larger particles are aligned with the ignition source and effectively convect the flame vertically upward, as evidenced in Fig. <ref> in conjunction with either Fig. <ref> or Fig. <ref>.
A notable distinction arises when comparing the flow characteristics achieved in apparatuses like the Godbert-Greenwald (G-G) furnace to those of our 1 m^3 silo. The G-G furnace, often used for pyrolysis studies at high heating rates <cit.>, exhibits a flow with relatively lower turbulence levels and enhanced uniformity due to its simplified geometry and absence of dispersion nozzles.
The small pressure gradient used for dust dispersion in the G-G furnace leads to a flow environment that is potentially more uniform. This results in a lower level of turbulence and enhanced consistency in terms of velocity field, temperature, and dust distribution. As a consequence, all particles, irrespective of their size, have the opportunity to react under similar conditions, making it easier to establish correlations between devolatilization times and particle sizes. In contrast, our 1 m^3 silo experiences significantly higher turbulence and attains sonic velocities. The complex dust dispersion pattern observed within our silo further contributes to the non-homogeneous nature of the combustion process. Therefore, “turbulent dust flames depend on the nature of the turbulent flow field and thus on the experimental apparatus and are not basic to the dust itself” as suggested by Smoot <cit.>.
§ CONCLUSIONS
In this study we conducted experimental tests and CFD simulations of biomass dust explosions in a newly developed 1 m^3 silo apparatus designed for analyzing explosions in situations with variable venting. We examined all stages of a biomass dust explosion, including dust dispersion, ignition, and the closed and vented explosion. Our CFD results indicate that the flow characteristics after dust dispersion play a crucial role in flame propagation and the explosion itself, and depend largely on the particle size and the dispersion system. The turbulence prior to ignition and the distribution of the dust particles also significantly affect the reactive characteristics of the cloud. During the explosion, our CFD model accurately predicted the time evolution of the pressure, particularly with regard to the maximum overpressure and pressure relief. We observed similar pressure drops for the two venting scenarios studied.
The promising results obtained from our CFD simulations encourage the use of our CFD model to simulate larger scale geometries for further investigation of dust explosions. Future work will involve simulating additional test cases to gain a deeper understanding of the explosion behavior of biomass dust, especially in venting situations that fall out of the scope of the NFPA 68 or EN 14491 standards, and to help design effective safety measures to prevent such incidents.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
A. Islas: Conceptualization, Formal analysis, Data curation, Methodology, Software, Validation, Investigation, Resources, Writing - original draft, Writing - review & editing, Visualization. A. Rodríguez Fernández: Methodology, Software, Validation, Investigation, Resources, Writing - review & editing. E. Martínez-Pañeda: Conceptualization, Writing - review & editing, Funding acquisition. C. Betegón: Writing - review & editing, Supervision, Project administration, Funding acquisition. A. Pandal: Conceptualization, Methodology, Software, Investigation, Resources, Writing - review & editing, Supervision, Funding acquisition.
§ ACKNOWLEDGEMENTS
Authors acknowledge that this work was partially funded by CDTI (Centro para el Desarrollo Tecnológico Industrial de España, IDI-20191151), Universidad de Oviedo and PHB WESERHÜTTE, S.A., under the project "FUO-047-20: Desarrollo de silo metálico de grandes dimensiones ante los condicionantes de explosividad de la biomasa". Likewise, authors acknowledge the computer resources provided in the Altamira Supercomputer at the Institute of Physics of Cantabria (IFCA-CSIC), member of the Spanish Supercomputing Network, and the technical support provided by the Advance Computing group at University of Cantabria (UC) (RES-IM-2022-3-0002). A. Islas acknowledges support from the research grant #BP20-124 under the 2020 Severo Ochoa Pre-Doctoral Program of the Principality of Asturias.
|
http://arxiv.org/abs/2307.04660v1 | 20230710155730 | The high-pressure phase diagram of BaNi$_2$As$_2$: unconventional charge-density-waves and structural phase transitions | [
"Tom Lacmann",
"Amir-Abbas Haghighirad",
"Sofia-Michaela Souliou",
"Michael Merz",
"Gaston Garbarino",
"Konstantin Glazyrin",
"Rolf Heid",
"Matthieu Le Tacon"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.str-el"
] |
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
[email protected]
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen, Germany
ESRF, The European Synchrotron, 71, avenue des Martyrs, CS 40220 F-38043 Grenoble Cedex 9
Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
[email protected]
Institute for Quantum Materials and Technologies, Karlsruhe Institute of Technology, 76021 Karlsruhe, Germany
Structural phase transitions accompanied by incommensurate and commensurate charge-density-wave (CDW) modulations of unconventional nature have been reported in BaNi_2As_2, a non-magnetic cousin of the parent compound of Fe-based superconductors, BaFe_2As_2. The strong dependence of these transitions upon isoelectronic substitutions, alongside original dynamical lattice effects, suggests a strong tunability of the electronic phase of the system through structural effects. To gain further insights, we present a comprehensive synchrotron x-ray diffraction and first-principles calculation study of the evolution of the crystal structure and lattice instabilities of BaNi_2As_2 as a function of temperature and hydrostatic pressure (up to 12 GPa).
We report a cascade of pressure-induced structural phase transitions and electronic instabilities up to ca. 10 GPa, above which all CDW superstructures disappear. We reveal that the stable high-pressure phase consists of planar Ni zigzag chains, from which the surrounding As have been pushed away. This yields a strong reduction of the interlayer As-As distance (along the original c axis), akin to what is observed in the collapsed tetragonal structure of other pnictides, albeit here with a monoclinic structure. The discovery of new polymorphs in the pressure-temperature phase diagram of BaNi_2As_2 emphasizes the importance of the relative Ni-Ni and Ni-As bond lengths in controlling the electronic ground state of this compound and enriches our understanding of viable electronic phases under extreme conditions.
The high-pressure phase diagram of BaNi_2As_2: unconventional charge-density-waves and structural phase transitions
Matthieu Le Tacon
August 12, 2023
=========================================================================================================
§ INTRODUCTION
Superconductivity and charge-density-waves (CDWs) stand amongst the most commonly encountered instabilities of the metallic state and are often coexisting in the complex phase diagrams of quantum materials. Prominent examples encompass α-Uranium <cit.>, high-temperature superconducting cuprates <cit.>, transition-metal dichalcogenides <cit.> or the more recently discovered Kagome superconductors <cit.>. Both electronic orders have also been evidenced in BaNi_2As_2, a weakly correlated metallic system which has at room temperature the same tetragonal I4/mmm crystal structure as the parent compound of Fe-based superconductors BaFe_2As_2 <cit.>. Upon cooling, rather than a magneto-structural transition, BaNi_2As_2 exhibits an original form of dynamical lattice nematicity <cit.> before undergoing a series of CDW instabilities and structural distortions <cit.> and ultimately entering a low temperature superconducting phase below ∼0.6 K.
Incommensurate CDW (I-CDW) fluctuations have been associated with an enhanced elasto-resistance signal in the B_1g channel <cit.> and detected in thermal diffuse x-ray scattering already at room temperature <cit.> at the I-CDW wavevector (±0.28 0 0)_tet (note that throughout this paper, and for simplicity, all reciprocal space indices are given in the tetragonal notation (H K L)_tet, but for better readability the subscript _tet will generally be omitted). A long-range I-CDW order only develops fully at ∼ 147 K <cit.>, and triggers a minute orthorhombic distortion bringing the system into a Immm phase <cit.>.
Below T_tri∼ 137 K (upon cooling) the system then undergoes a first-order transition to a triclinic (P1̅) phase while the I-CDW is replaced by a commensurate CDW (C-CDW) with a wavevector (±1/3 0 ∓1/3) <cit.>.
Various substitutions have been used and yield a rapid decrease of T_tri alongside a sudden increase of the superconducting transition temperature T_c to ∼ 3.5 K which occurs when the triclinic phase is completely suppressed <cit.>.
Apart from the generic trends, details appear strongly dependent on the nature of the substitution. For instance ∼60% of Strontium on the Barium site are needed to suppress completely the triclinic phase <cit.>, an effect obtained with only ∼7% of Phosphorus substitution for As <cit.> or ∼12% of Cobalt on the Nickel site <cit.>.
All these substitutions are in principle isoelectronic and therefore do not change the charge carrier concentration in the system. On the other hand, these substitutions significantly affect the lattice parameters <cit.> in a non-trivial manner, which suggests in turn, akin to the case of Fe-based superconductors <cit.>, that pressure might be a valuable tuning parameter of the electronic phase of BaNi_2As_2.
To the best of our knowledge, high-pressure investigations on this system have only been limited to resistivity measurements of pristine BaNi_2As_2 <cit.> and in a limited pressure range (up to 2.74 GPa) where only a modest dependence of both structural- and superconducting transition temperatures was found.
Here, we use high-resolution single crystal x-ray diffraction (XRD) to investigate the pressure and temperature dependence of the various structural phases of BaNi_2As_2 over a broader pressure range, and to construct a temperature-pressure phase diagram of this compound extending up to 12 GPa and down to 30 K. We have carried out systematic structural refinements and discovered a series of pressure-induced structural phase transitions, as well as a set of novel superstructures associated with new CDW instabilities (incommensurate and commensurate).
These instabilities show a highly unusual pressure dependence but are well described by our first-principles density-functional perturbation theory (DFPT) calculations, which also emphasize the absence of Fermi surface nesting and only a weak electron-phonon coupling enhancement of the phonon lineshapes (if any). This is in sharp contrast with the phenomenology encountered in prototypical CDW systems and indicates the unconventional nature of all these CDW instabilities. Above ∼ 10 GPa, all superstructure peaks disappear and our study reveals that the stable high-pressure phase is monoclinic C2/m and consists of planar Ni zigzag chains.
Akin to what is observed in the collapsed tetragonal (cT) phase of other pnictide compounds, an overall decrease of the out-of-plane c axis parameter is observed as the As-As distance between the Ni planes is strongly reduced. On the other hand, in contrast with known cT phases, the 'thickness' of the NiAs layers increases in the high pressure phase compared to that of the original tetragonal I4/mmm structure, as the As atoms are pushed away from the Ni planes. This peculiarity emphasizes the importance of the hybridization between the As 4p and Ni 3d orbitals, through the Ni-As and As-As distances, in controlling the electronic phase of BaNi_2As_2.
§ EXPERIMENTAL DETAILS
§.§ Single-Crystal Growth and Characterisation
Single crystals of BaNi_2As_2 were grown using a self-flux method.
NiAs precursor was synthesised by mixing the pure elements Ni (powder, Alfa Aesar 99.999%) and As (lumps, Alfa Aesar 99.9999%) that were ground and sealed in a fused silica tube and annealed for 20 hours at 730°C.
All sample handlings were performed in an argon glove box (O_2 content < 0.1 ppm).
For the growth of BaNi_2As_2, a ratio of Ba:NiAs = 1:4 was placed in an alumina tube, which was sealed in an evacuated quartz ampule (10^-5 mbar).
The mixtures were heated to 700°C for 10 h, followed by heating slowly to a temperature of 1090°C, soaked for 5 h, and subsequently cooled to 995°C at a rate of 0.25°C/h.
At 995°C, the furnace was canted to remove the excess flux, followed by furnace cooling.
Plate-like single crystals with typical sizes 3 x 2 x 0.5 mm^3 were easily removed from the ingot.
The crystals are shiny brass-yellow with a metallic lustre.
The measured crystals were cleaved with a scalpel just before the x-ray scattering experiments.
§.§ High-pressure X-ray Diffraction
High pressure–low temperature experiments were performed at the European Synchrotron Radiation Facility (ESRF, beamline ID15B) and Positron-Elektron-Tandem-Ring-Anlage III (PETRA III, DESY, beamline P02.2) in a membrane-type diamond anvil cell (DAC) using the ruby fluorescence method for the pressure calibration <cit.>. For the experiments at the ESRF, diamonds with culet diameters of 500 μm and a stainless steel gasket were used. Three BaNi_2As_2 single crystals and one ruby were placed inside the gasket hole and helium was used as pressure transmitting medium. A sketch and a photograph of the DAC, the samples, ruby and gasket hole after gas loading are shown in Fig. <ref> a) and b). For x-ray diffraction a monochromatic beam with an energy of 30.17 keV (≈0.411 Å) was used and the diffracted beam was detected with a MarResearch MAR555 flat panel detector. For each dataset, wide scans of 0.5 s exposures and 0.5° intervals with a total angular rotation of ±32° were performed. The detector position and distance were calibrated with silicon powder and an enstatite single crystal standard using the Dioptas and CrysAlis softwares.
For the measurements at PETRA III, culet diameters of 400 μm and rhenium gaskets were used. The cells were loaded with neon as pressure transmitting medium. X-ray diffraction experiments were performed using a monochromatic x-ray beam with an energy of 42.71 keV (≈0.2903 Å). The diffracted beam was detected with a Perkin Elmer XRD 1621 detector. The detector-to-sample distance was calibrated with a CeO_2 standard using the programme Dioptas. At each pressure and temperature, wide scans of 0.5 s exposures, 0.5° intervals, and −25° to +30° of rotation were performed.
We will focus here on isotherms that were measured mainly by cooling from room temperature to the desired temperatures, at which we compressed the crystals to the desired pressure. The exact p-T path we have followed for each sample, alongside additional data obtained along four isobars (at ∼4, 7.6, 10 and 12 GPa, respectively), can be found in the Supplemental Material (SM) <cit.>. As an example, the (H K 1) plane from the measurement at 0.29 GPa and 194 K is shown in <ref> c) and the structure determined from a structural refinement of such a dataset (SG: I4/mmm) is shown in Fig. <ref> d).
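For reference, the beam energies and wavelengths quoted above are related by λ[Å] = hc/E ≈ 12.398/E[keV]; a one-line consistency check of both setups:

```python
HC_KEV_ANGSTROM = 12.39842  # h*c in keV * Angstrom

def wavelength_angstrom(energy_keV):
    return HC_KEV_ANGSTROM / energy_keV

print(wavelength_angstrom(30.17))  # ~0.411 Angstrom (ESRF ID15B)
print(wavelength_angstrom(42.71))  # ~0.290 Angstrom (PETRA III P02.2)
```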
§.§ Analysis of the Crystal Structure
CrysAlis Pro was used for data collection, cell refinement, data reduction and the analysis of the diffraction precession images for all datasets. SHELXS97 <cit.> and SHELXL-2014/7 <cit.> as well as JANA2006 <cit.> were used for solving the crystal structure and for the refinements. Crystal data and structural refinement details are summarized in Tab. <ref> and in the SM <cit.>. Atomic coordinates and site labels were standardized using the VESTA <cit.> crystal structure visualisation software.
§ COMPUTATIONAL DETAILS
Density-functional investigations of lattice dynamics properties for the different structural phases of BaNi_2As_2 were performed in the framework of the mixed-basis pseudopotential method <cit.>. This approach employs an efficient description of more localized components of the valence states by using a basis set combining plane waves and local functions at atomic sites.
The electron-ion interaction is described by norm-conserving pseudopotentials, which were constructed following the descriptions of Hamann, Schlüter, Chiang <cit.> for Ba and Vanderbilt <cit.> for Ni and As, respectively. Semi-core states Ba-5p, Ni-3s, Ni-3p were included in the valence space.
The exchange-correlation functional was represented by the generalized gradient approximation in the PBE form <cit.>.
The mixed-basis set consisted of plane waves with a cutoff for the
kinetic energy of 22 Ry and local functions of p,d type for Ba and
s,p,d type for Ni, respectively.
Lattice dynamics properties were calculated within the linear response or density functional perturbation theory (DFPT) as implemented in the mixed-basis method <cit.>.
Brillouin-zone integration was performed by sampling a 16×16×8 k-point mesh in conjunction with a Gaussian broadening of 0.1 eV.
To locate positions of phonon anomalies in the momentum space, scans of the phonon dispersions on two-dimensional high-symmetry planes were performed as follows. Dynamical matrices were calculated within DFPT on an 8×8 mesh, and interpolated on a much denser 120×120 mesh using a standard Fourier interpolation technique. Diagonalizing the dynamical matrices provided phonon frequencies.
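A minimal sketch of this interpolation step (not the actual mixed-basis implementation) could look as follows, assuming the mass-rescaled real-space force constants Φ(R) have already been obtained by back-Fourier-transforming the DFPT dynamical matrices computed on the coarse mesh. Unstable modes appear as negative eigenvalues of D(q); following the convention used for the maps in the results section, their frequencies are returned with a negative sign.

```python
import numpy as np

def dynamical_matrix(q, force_constants):
    """Fourier interpolation of D(q) from real-space force constants.

    force_constants: dict mapping a lattice vector R (3-tuple, lattice
    coordinates) to a (3N x 3N) mass-rescaled force-constant block."""
    D = sum(Phi * np.exp(2j * np.pi * np.dot(q, R))
            for R, Phi in force_constants.items())
    return 0.5 * (D + D.conj().T)  # symmetrize to enforce Hermiticity

def signed_phonon_frequencies(q, force_constants):
    """Eigenfrequencies at q; imaginary (unstable) modes come out negative."""
    w2 = np.linalg.eigvalsh(dynamical_matrix(q, force_constants))
    return np.sign(w2) * np.sqrt(np.abs(w2))
```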
§ RESULTS AND DISCUSSION
In this section we present evidence for the existence of new high-pressure (HP) phases in BaNi_2As_2, which are best seen following two isotherms at 140 K (above the triclinic transition at ambient pressure) and 94 K (below the triclinic transition at ambient pressure). Up to about 10 GPa, each of these HP phases is accompanied by a new type of CDW modulation. Above this pressure, the CDW superstructures disappear.
§.§ Pressure dependence of the I-CDW: 140 K isotherm
As previously discussed <cit.>, the formation of the long-range I-CDW at ambient pressure (hereafter referred to as I-CDW1) is accompanied by a fourfold symmetry-breaking transition. This is best seen as a difference between the thermal expansion along the (100) and (110) directions <cit.> and indicates a small but measurable orthorhombic distortion below ∼146 K.
Consequently, we can index the Bragg reflections obtained at 140 K and ambient pressure in a slightly distorted orthorhombic cell with space group Immm. The corresponding structural parameters are detailed in Tab. <ref>.
In agreement with previous reports <cit.>, I-CDW1 satellites are observed around Bragg reflections at (±0.28 0 0) or (0 ±0.28 0), depending on the reflection.
The effect of pressure on the unit cell is reported in Figure <ref>-a), where we show that the a, b and c lattice parameters decrease smoothly with increasing pressure up to 7 GPa.
As the orthorhombicity increases upon pressurization, half of the I-CDW1 superstructure peaks disappear. The latter can be interpreted as a consequence of detwinning that could either originate from weak non-hydrostaticity in the pressure cell or from the anisotropic response of BaNi_2As_2 to strain. In parallel, we observe an increase of the incommensurability of the I-CDW1 from 0.28 at ambient pressure to 0.293 at 7 GPa.
Above ∼7 GPa the I-CDW1 satellites disappear, and a new set of 8 incommensurate satellites appears close to the wavevectors (±0.358 ±0.10 0) and (±0.10 ±0.358 0) around, e.g., the (2 2 0) Bragg peaks (Fig. <ref>-d), forming a new I-CDW, labelled I-CDW2 hereafter.
Furthermore, the new I-CDW2 shows a strong temperature dependence of the wavevector and sets in at a higher temperature of ≈168 K. Details can be found in the evaluation of the isobars around 4 and 7.6 GPa in the SM <cit.>.
Note that although the original I-CDW1 peaks are lost above 7 GPa, some faint peaks with a similar wavevector can still be seen up to 10 GPa, albeit now centered around forbidden Bragg reflections such as the (1 2 0) (denoted as I-CDW1').
Additionally, the pressure dependence of the a and b lattice parameters displays a sudden upturn, indicating that a structural phase transition takes place between 7 and 7.6 GPa.
Our structural refinement in this region shows that the crystal structure is monoclinic and can be described within the space group C2/m. This structural phase transition involves small atomic displacements that break the translational symmetry of the lattice (or equivalently, domain-related distortions as the minute difference between cell parameters a and b in the Immm increases approaching the monoclinic phase) and a
shear displacement of the Ni layers against each other.
This amounts to a loss of symmetries of both the As- and Ni sites, as additional degrees of freedom are introduced in the Wyckoff-position 4i by breaking the correlation between x and z components at this position. The 4i site symmetry hereby changes from mm2 in the Immm to m in the C2/m phase.
In this phase, instead of four equivalently long Ni-Ni bonds, regular Ni zigzag chains with two long and two short bond distances form (lower panel of Fig.<ref>-a).
Above ∼10.2 GPa, all CDW satellites completely disappear. Although the symmetry remains the same as the I-CDW2 disappears, a closer look at the crystal structure reveals important internal changes (Fig.<ref>-a) both in and out of the NiAs planes.
In-plane, the Ni-Ni bond length disproportionation strongly increases (reaching 4.2% at 12 GPa), indicating an increased separation of the zigzag chains. In the perpendicular direction (the c axis in the I4/mmm setting), the As-As distance between NiAs layers is abruptly reduced above 10 GPa, which is reminiscent of the first-order transition to cT phases in other iron <cit.> or cobalt <cit.> pnictide families. However, the As-As distance within the NiAs layers increases (or equivalently the As-Ni-As angle decreases), showing that the NiAs layers become thicker in the high-pressure phase. This can only be evidenced by looking carefully at the bond distances since overall the unit cell size perpendicular to the Ni planes decreases. The isobar measurements at ∼12 GPa indicate the absence of a transition upon cooling at this pressure between 200 K and 50 K <cit.>.
§.§ Pressure dependence of the C-CDW: 94 K isotherm
Next, we look at the impact of pressure on the triclinic phase of BaNi_2As_2, where the C-CDW is seen at ambient pressure down to the lowest temperatures <cit.>. Previous studies indicate that in this phase the four Ni-Ni bond distances become nonequivalent, forming in-plane Ni-Ni dimers <cit.>, as can be derived from the different Ni-Ni bond distances in Fig. <ref> a).
Before discussing the evolution of the crystal structure, let us first focus on the pressure dependence of the C-CDW superstructure (we recall here that for simplicity, the corresponding reflections are indexed in the tetragonal setting). In the triclinic phase, the characteristic set of C-CDW satellites with wavevectors (±1/3 0 ∓1/3) is still clearly visible in the (H 2 L) plane at 2.11 GPa (compare Fig. <ref> b). A new set of C-CDW (C-CDW2) superstructure peaks is observed at 3.75 GPa around the wavevectors (±1/2 0 ∓1/2). Note that at this pressure, weak signatures of the C-CDW1 satellites can still be seen, indicating a narrow coexistence region of the two orders. The C-CDW1 satellites are completely suppressed with increasing pressure, while the C-CDW2 satellites remain visible up to 9.5 GPa. Above 10 GPa, as for the 140 K isotherm, no superstructure reflections could be observed.
From a structural point of view, and as previously discussed, the high-pressure phase above 10 GPa is best described as a 'collapsed' monoclinic C2/m structure, with reduced As-As distance for the As ions connecting the Ni-As layers.
In contrast with the situation at higher temperatures, however, evaluating the structure by including the C-CDW2 modulation in the refinement yields poor results when describing it with the C2/m symmetry. On the other hand, it is quite clear as well that the triclinic P1̅ phase is suppressed alongside the C-CDW1 phase above 2.5 GPa. The best structural solution for this intermediate pressure phase (i.e., between 3.75 and 9.5 GPa) is obtained by including the C-CDW2 superstructure reflections explicitly when solving the structure in the monoclinic C2/c space group (this corresponds in particular to a doubling of the unit cell in the plane, in which three unequivalent Ni-Ni bonds are found).
The transition between the two monoclinic phases with C2/m and C2/c space groups occurs around 10 GPa at 94 K, where the monoclinic β angle and the ratio of the a- and b-lattice parameters (Fig. <ref> a) exhibit clear discontinuities.
All these transitions are first-order in nature and, in contrast to the situation at higher temperatures where the symmetry of the unit cell was lowered with increasing pressure, symmetries are restored under pressurization at low temperature.
§.§ Pressure-Temperature phase diagram and discussion
We illustrate the results of our analysis of the crystal structures and superstructures of BaNi_2As_2 for each of the >100 points measured in the pressure-temperature plane, presented in a detailed phase diagram in Fig. <ref>. Crystallographic parameters for each structure are given in Tab. <ref> and in the SM <cit.>.
The first important observation is that the phase diagram shows a qualitatively different pressure dependence for the high (orthorhombic and I-CDW) and low (triclinic and C-CDW) temperature phases.
While the low temperature triclinic phase is lost already between 2 and 3 GPa, the orthorhombic phase around 140 K survives up to ∼7 GPa. We have seen that this is accompanied by a continuous change of the incommensurability of the I-CDW1 with increasing pressure, whose onset temperature does nonetheless not seem to vary strongly with pressure. This is also the case for the first-order transition temperature to the triclinic/C-CDW1 phase below ca. 3 GPa, in agreement with an earlier transport study <cit.>.
This independence of the CDW formation temperatures over large pressure ranges contrasts significantly with the previously reported effects of chemical substitutions. This is particularly true in the case of the C-CDW and triclinic phases which are gradually suppressed through substitution e.g. by phosphorus, cobalt or strontium <cit.>. This can be best understood by looking in more details at the effect of these substitutions on the structure, which tend to have opposite effects in- and out-of-plane, in contrast to hydrostatic pressure that compresses all lattice parameters.
The main effect of P or Co substitutions, which efficiently suppress the triclinic and C-CDW1 phase, is a contraction of the ab plane <cit.>. On the contrary, Sr mostly induces a compression of the c axis <cit.> and interestingly induces a commensurate CDW with a doubling of the unit cell <cit.>, which bears similarities with the one reported here. Although to the best of our knowledge this has not been associated with a structural phase transition to a C2/c phase so far, it reemphasizes the importance of the c-axis parameter in controlling the electronic phase of pnictides, akin to their Fe-based counterparts <cit.>.
We note that the stability of the I-CDW1 up to 7 GPa contrasts with the observations in the vast majority of known CDW materials. For instance, in rare earth tritellurides RTe_3 <cit.>, dichalcogenides such as 2H-NbSe_2 <cit.> or TiSe_2 <cit.>, α-U <cit.> or Kagome superconductors <cit.>, to cite a few, the CDW ordering temperature is strongly dependent on pressure and most often decreases rapidly with increasing pressure, generally resulting in a complete suppression of the CDW after a few GPa. There are of course notable exceptions to this, such as VSe_2 <cit.> or SmNiC_2 <cit.>, but there the CDW formation temperature rapidly increases with pressure. In this respect, and to the best of our knowledge, the resilience of the I-CDW1 in BaNi_2As_2 against pressure is particularly remarkable. It might be related to the nematic liquid phenomenology evidenced at higher temperature in this compound <cit.>, as a consequence of strong fluctuations between degenerate nematic configurations, which is expected to be weakly affected by strain. Interestingly, the dramatic changes of the CDWs are concomitant with pressure-induced structural phase transitions. The formation of Ni zigzag chains, yielding a monoclinic C2/m structure above 7 GPa at high temperatures or a C2/c structure above 3 GPa at low temperatures, is associated with a remarkable change of the superstructure pattern, indicating a profound interdependence of the CDW instabilities and of the underlying lattice structure. To gain further insights, we turn now to first-principles calculations, which are particularly favourable owing to the weakly correlated nature of the material.
The structures reported in this study have not been anticipated by previous theoretical investigations <cit.> as it is generally challenging to determine a priori the symmetry of the most stable structural configuration of a given compound.
It is nonetheless possible to assess the stability of the experimentally determined crystal structures by looking at their lattice dynamics. Using the experimental lattice parameters and relaxed atomic positions to obtain force-free configurations prior to the phonon calculations, we have previously shown <cit.> that the dispersion of the phonons of the I4/mmm tetragonal structure was unstable against the softening of a low-lying optical phonon (dispersing from the Raman-active E_g mode at the zone center) along the reciprocal (H00) direction and at a wavevector close to that of the experimental I-CDW1.
We have extended this approach to the pressurized unit cells. On the color plots of Fig. <ref>, we have mapped the lowest phonon frequencies (full dispersions are shown in the SM <cit.>) across planes of the reciprocal space. As unstable modes are characterized by imaginary frequencies, the negative modulus of the frequency was used so that the dominant instabilities show up as minima of the softest phonon frequency in Fig <ref>.
In agreement with previous work, the calculation performed on the weakly distorted orthorhombic Immm structures determined at 0.3 and 5.1 GPa indicates that the leading phonon instability occurs at (0.25, 0, 0) and (0, 0.25, 0), close to the I-CDW1 wavevector. Interestingly, at both pressures, we can already observe a weak softening of the same phonon branch at 8 locations in the (H K 0) plane, including e.g. (0.38 0.1 0) or (0.1 0.38 0), which are very close to those at which the I-CDW2 satellites (Fig. <ref>) have been observed. This becomes the leading instability at 10.14 GPa, calculated within the monoclinic C2/m phase (Fig. <ref>-c), while upon further compression the phonon anomalies are suppressed and this phase is stabilized, as evidenced by the disappearance of negative phonon energies in Fig. <ref>-d).
Next, we discuss the instabilities of the low temperature phases. The DFPT calculation on the experimental triclinic P1̅ structure (shown for 1.69 GPa in Figs. <ref>-e) and f)) is also found unstable against the softening of the same low-lying phonon branch, but the leading instability is now found in the (H 0 L) plane. It is rather spread in the reciprocal space but centered around the (1/3 0 1/3) and (2/3 0 2/3) wavevectors, at which C-CDW1 satellites are seen experimentally (Fig. <ref>-b). Similarly, the leading instability of the C2/c monoclinic structure at 5.79 GPa shown in Figs. <ref>-g) and h) occurs at the commensurate wavevector (1/2, 0, 1/2) of the C-CDW2 phase (Figs. <ref>-b) and -c)).
In all these cases and similar to investigations at ambient pressure <cit.>, no Fermi surface nesting is found at the I-CDW1, I-CDW2, C-CDW1 or C-CDW2 wavevectors (details are presented in the SM <cit.>).
In general, we do not observe any anomaly in the phonon linewidth associated with the momentum structure of the electron-phonon coupling vertex that correlates with the structure of these CDWs, indicating the unconventional nature of these CDWs. The only noticeable exception is the I-CDW2 case, for which a weak enhancement of the linewidth of the unstable phonon is seen, but the calculated electron-phonon coupling remains very modest. It typically amounts to ∼0.15 meV, which is almost an order of magnitude weaker than that of prototypical CDW systems such as dichalcogenides <cit.>.
To sum up, whether the lattice structure of BaNi_2As_2 is stable against CDW formation appears to be fully controlled by the local environment of Ni, and thereby by the orbital polarization of the bands crossing the Fermi level, which primarily derive from Ni states <cit.>. On the one hand, it is clear that the deformation of the Ni square lattice into a zigzag structure with a bond length disproportionation can only occur alongside a spectral weight transfer between the in- and out-of-plane t_2g orbitals of Ni <cit.>. On the other hand, the main players yielding the disappearance of the CDWs above 10 GPa are the As atoms surrounding the planar Ni zigzag chains, which are pushed away as the interlayer As-As distance is strongly reduced. Our first-principles calculations indicate that the electron-phonon interaction is extremely sensitive to the subtle details of the hybridization between the As 4p and Ni 3d orbitals, primarily controlled by the Ni-As and As-As distances. Despite the weak spectral weight of As states at the Fermi level, they seem to play a key role in controlling the electronic phase of BaNi_2As_2.
§ SUMMARY AND OUTLOOK
In summary, we have investigated the pressure dependence of the crystal structure and CDWs of superconducting and revealed the formation of new structural polymorphs and CDWs. At high pressure, a monoclinic phase exhibiting planar Ni zigzags forms and is stable against CDW instabilities. A detailed phase diagram of has been determined and revealed highly unusual pressure dependence of the incommensurate and commensurate CDW phases of this compound. First-principle calculations fueled by the experimental crystal structures show a series of lattice instabilities in a very good agreement with the experimentally observed ones. The stable monoclinic high-pressure phase shows a strongly reduced interlayer As-As distance, bearing striking similarities with previously encountered collapsed tetragonal phases in pnictides, highlighting the importance of the hybrization between As and Ni orbitals in controlling the electronic phases of these compounds. This calls for additional investigations, in particular regarding the impact of the reported structural phase transitions on the superconducting transition temperature of .
Note added. During the completion of this manuscript, we became aware of another high-pressure study of BaNi_2As_2 <cit.>.
Acknowledgements
We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at PETRA III using beamline P02.2. Beamtime was allocated for proposal I-20200263.
We acknowledge the European Synchrotron Radiation Facility (ESRF) for provision of synchrotron radiation facilities and we would like to thank D. Comboni and T. Poreba for assistance and support in using beamline ID15B. We acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) Project-ID 422213477-TRR 288 (Project B03) and support by the state of Baden-Württemberg through bwHPC. S.M.S. acknowledges funding by the Deutsche Forschungsgemeinschaft-Projektnummer 441231589.
|
http://arxiv.org/abs/2307.05787v1 | 20230711202846 | DHYM instantons on higher rank holomorphic vector bundles over ${\mathbb{P}}(T_{{\mathbb{P}^{2}}})$ | [
"Eder M. Correa"
] | math.AG | [
"math.AG",
"math.DG"
] |
DHYM instantons on higher rank holomorphic vector bundles over P(T_P^2)
IMECC-Unicamp, Departamento de Matemática. Rua Sérgio Buarque de Holanda 651, Cidade Universitária Zeferino Vaz. 13083-859, Campinas-SP, Brazil
E-mail: [email protected]
We construct the first explicit non-trivial example of deformed Hermitian Yang-Mills (dHYM) instanton on a higher rank slope-unstable holomorphic vector bundle over a Fano threefold. Additionally, we provide a sufficient algebraic condition in terms of central charges for the existence of dHYM instantons on Whitney sum of holomorphic line bundles over rational homogeneous varieties. As a consequence, we obtain several new examples of dHYM instantons on higher rank holomorphic vector bundles.
Eder M. Correa
August 12, 2023
===================
§ INTRODUCTION
Let (X,ω) be a compact connected Kähler manifold, such that dim_C(X) = n, and [ψ] ∈ H^1,1(X,R). The deformed Hermitian Yang-Mills (dHYM) equation asks for a canonical representative χ∈ [ψ] satisfying
Im ( ω + √(-1)χ)^n = tan(Θ̂) Re ( ω + √(-1)χ)^n,
such that Θ̂ = Arg∫_X(ω + √(-1)ψ)^n/n! (mod 2π). This equation was originally derived in the physics literature for the case of line bundles, e.g. <cit.>, <cit.>, from a mathematical perspective, the equation arises through SYZ mirror symmetry as the dual to the special Lagrangian equation on a Calabi–Yau manifold. The analytical study of the dHYM equation was initiated by Jacob–Yau <cit.> and further explored in a series of works <cit.>, see also <cit.>, <cit.>, and <cit.>. As it can be shown, Eq. (<ref>) has an alternative (equivalent) formulation in terms of the notion of Lagrangian phase <cit.>, more precisely, Eq. (<ref>) is equivalent to the fully nonlinear elliptic equation
Θ_ω(χ) := ∑_j = 1^narctan(λ_j) = Θ̂ (mod 2π),
where λ_1,…,λ_n are the eigenvalues of ω^-1∘χ. In this last equation Θ_ω(χ) is called the Lagrangian phase of χ with respect to ω; the equivalence follows from the pointwise factorization (ω + √(-1)χ)^n = ∏_j = 1^n(1 + √(-1)λ_j) ω^n, whose argument is precisely Θ_ω(χ). In <cit.>, by describing explicitly the Lagrangian phase of every closed invariant real (1,1)-form in terms of Lie theory, the author proved that the dHYM equation always admits a solution if (X,ω) is a rational homogeneous variety. In higher rank, we have the following generalization suggested by Collins–Yau <cit.>: Given a holomorphic vector bundle E→ (X,ω), we say that a Hermitian metric h on E solves the dHYM equation if the curvature F_∇ of the associated Chern connection ∇ of h satisfies
Im ( e^-√(-1)Θ̂(E) (ω⊗1_E - 1/2πF_∇ )^n ) = 0,
such that
Θ̂(E) = Arg∫_Xtr (ω⊗1_E - 1/2πF_∇)^n (mod 2π).
In the above setting we call ∇ a dHYM instanton. There are by now in the literature many important results concerned with dHYM instantons on holomorphic line bundles, and few results are known in the higher rank setting. In <cit.>, strong mathematical justification that the higher rank deformed Hermitian Yang-Mills equation suggested by Collins-Yau is indeed the appropriate equation was provided. Moreover, among other results, in <cit.> the authors introduced the notion of Z-critical connection and proved that, in the large volume limit, a sufficiently smooth holomorphic vector bundle <cit.> admits a Z-critical connection if and only if it is asymptotically Z-stable. In this paper, we study Eq. (<ref>) on holomorphic vector bundles over rational homogeneous varieties equipped with some invariant Kähler metric. A rational homogeneous variety can be described as a quotient X_P = G^C/P, where G^C is a semisimple complex algebraic group with Lie algebra 𝔤^C = Lie(G^C), and P is a parabolic Lie subgroup (Borel-Remmert <cit.>). Regarding G^C as a complex analytic space, without loss of generality, we may assume that G^C is a connected simply connected complex simple Lie group. Fixed a compact real form G ⊂ G^C, one can consider X_P = G/G ∩ P as a G-space. In order to state our first result, let us recall some terminology. Fixed a Kähler class ξ∈𝒦(X_P), the slope μ_ξ(E) of a holomorphic vector bundle E→ X_P, with respect to ξ, is defined by
μ_ξ(E):= ∫_X_Pc_1(E) ∧ξ^n-1/rank(E).
From above, a holomorphic vector bundle E→ X_P is said to be slope-stable (resp. slope-semistable) if
μ_ξ(ℱ) < μ_ξ(E) (resp. μ_ξ(ℱ) ≤μ_ξ(E)),
for every proper subbundle 0 ≠ℱ⊊E. Further, we say that E is slope-polystable if it is isomorphic to a direct sum of stable vector bundles of the same slope, and we say that E is slope-unstable if it is not slope-semistable.
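A basic example, relevant for the Whitney sums considered below: if E = L_1⊕L_2 with L_1, L_2∈Pic(X_P), then c_1(E) = c_1(L_1) + c_1(L_2) and rank(E) = 2, so that μ_ξ(E) = (μ_ξ(L_1) + μ_ξ(L_2))/2. Hence, E is slope-polystable when μ_ξ(L_1) = μ_ξ(L_2), and slope-unstable otherwise, the summand of larger slope being a destabilizing subbundle.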
Fixed the unique SU(3)-invariant Kähler metric ω_0∈ c_1(P(T_P^2)), then there exist holomorphic vector bundles E_j→P(T_P^2), j=1,2,3, satisfying the following:
* (E_j) > 1, j = 1,2,3;
* E_1 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) = c1_E_1,
Im (e^-√(-1)Θ̂(E_1) (ω_0⊗1_E_1 - 1/2πF_∇ )^3 ) = 0,
such that c = π/8μ_[ω_0](E_1) and Θ̂(E_1) = Arg∫_P(T_P^2)tr (ω_0⊗1_E_1 - 1/2πF_∇)^3 (mod 2π).
* E_2 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) = c1_E_2,
Im (e^-√(-1)Θ̂(E_2) (ω_0⊗1_E_2 - 1/2πF_∇ )^3 ) ≠ 0,
for c = π/8μ_[ω_0](E_2) and for all Θ̂(E_2) ∈R.
* E_3 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) ≠ c1_E_3,
Im (e^-√(-1)Θ̂(E_3) (ω_0⊗1_E_3 - 1/2πF_∇ )^3 ) = 0,
for all c ∈R, such that Θ̂(E_3) = Arg∫_P(T_P^2)tr (ω_0⊗1_E_3 - 1/2πF_∇)^3 (mod 2π).
In particular, E_1 and E_2 are slope-polystable, and E_3 is slope-unstable.
In order to prove the above result, we construct explicitly the Hermitian connections satisfying the aforementioned properties. In particular, from item (4) of the above theorem, we obtain the first explicit non-trivial example of dHYM instanton on a higher rank slope-unstable holomorphic vector bundle over a Fano threefold. The first non-trivial solutions to the dHYM equation and the Z-critical equation on a higher rank (semistable) holomorphic vector bundle were provided in <cit.> for X = P^2; results related to the existence of dHYM instantons on slope-unstable holomorphic vector bundles were not known so far. It is worth mentioning that our construction is independent of the results provided in <cit.>.
Given a rational homogeneous variety X_P, let ω_0 be a G-invariant Kähler metric on X_P. In this setting, for every Hermitian holomorphic vector bundle (E,h) → (X_P,ω_0), denoting by ∇ the associated Chern connection, and considering the representative ch_k(E,∇) of the cohomology class defined by the k-th Chern character ch_k(E) ∈ H^2k(X_P,C), we define the central charge
Z_[ω_0](E) = -∫_X_Pe^-√(-1)[ω_0]ch(E) = -∑_j=0^n(-√(-1))^j/j!∫_X_Pω_0^j∧ch_n-j(E,∇).
As mentioned in <cit.>, the complex number Z_[ω_0](-) resembles various notions of central charge appearing in the study of stability conditions in several physical and mathematical theories, see for instance <cit.> and <cit.>. In the particular case that E is a line bundle, it is conjectured <cit.> that the existence of a dHYM instanton ∇ on E should be equivalent to the Bridgeland stability <cit.> of E, for an introduction to Bridgeland stability we suggest <cit.>. We have by now several results related to algebraic conditions involving certain central charges and intersection numbers with the existence of the dHYM instantons on holomorphic line bundles, e.g. <cit.>, <cit.>, <cit.>, <cit.>. Motivated by these results and by the ideas introduced in <cit.>, we derive a sufficient algebraic condition in terms of the central charge given in Eq. (<ref>) for the existence of dHYM instantons on Whitney sum of holomorphic line bundles over rational homogeneous varieties. More precisely, we prove the following.
Given E, F∈Pic(X_P), if
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = 0,
then E⊕F admits a dHYM instanton.
The above result allows us to construct examples of higher rank dHYM instantons from elements of Pic(X_P). Notice that, since ch(E) = e^c_1(E), ∀E∈Pic(X_P), it follows that
Z_[ω_0](E) = -(-√(-1))^n/n!∫_X_P([ω_0] + √(-1)c_1(E))^n, ∀E∈Pic(X_P).
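As a concrete illustration, on a threefold (n = 3) the above integral expands binomially, so Z_[ω_0](E) is determined by the four intersection numbers ∫ω_0^3, ∫ω_0^2∧ c_1(E), ∫ω_0∧ c_1(E)^2 and ∫ c_1(E)^3. The following symbolic sketch performs this expansion and evaluates the criterion of Theorem <ref>; the intersection numbers are taken as free symbols, to be computed in concrete cases, e.g., by Lie-theoretical methods:

```python
import sympy as sp

def Z3(a, b, c, d):
    """Z_[w](L) on a threefold, with a = w^3, b = w^2.c1(L),
    c = w.c1(L)^2, d = c1(L)^3 (real intersection numbers)."""
    # -(-i)^3/3! * (a + 3i*b - 3*c - i*d), from the binomial expansion
    return sp.expand(-(-sp.I)**3 / 6 * (a + 3*sp.I*b - 3*c - sp.I*d))

a, bE, cE, dE, bF, cF, dF = sp.symbols('a bE cE dE bF cF dF', real=True)
Z_E, Z_F = Z3(a, bE, cE, dE), Z3(a, bF, cF, dF)
print(sp.re(Z_E), sp.im(Z_E))          # i.e. (3*bE - dE)/6 and (3*cE - a)/6
# Sufficient condition of Theorem 2 for E (+) F to admit a dHYM instanton:
print(sp.simplify(sp.im(Z_E / Z_F)))   # the criterion: this must vanish
```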
Following <cit.>, such integrals can be explicitly computed in concrete cases using tools from Lie theory. As an application of this last fact, and of the ideas introduced in Theorem <ref> and Theorem <ref>, we explore the relationship between dHYM instantons and Z-critical connections <cit.>. More precisely, in the setting of Eq. (<ref>), considering the End(E)-valued (n,n)-form
Z_ω_0(E,∇) := -∑_j=0^n(-√(-1))^j/j!ω_0^j∧ch_n-j(E,∇),
such that[Notice that tr(ch(E,∇)) is a representative for the cohomology class defined by the Chern character ch(E).]
ch(E,∇) := exp ( √(-1)/2πF_∇ ) = ch_0(E,∇) + ch_1(E,∇) + ⋯ + ch_n(E,∇),
we say that ∇ is a Z-critical connection <cit.> if
Im (e^-√(-1)φ(E)Z_ω_0(E,∇) ) = 0,
such that φ(E) = Arg(Z_[ω_0](E) ) (mod 2π). In this setting, we prove the following.
Under the hypotheses of Theorem <ref>, for every integer r > 1, there exists a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), such that (E) = r, and the following hold:
* E is slope-unstable and h^0(P(T_P^2),End(E)) > 1;
* considering φ(E) = Arg(Z_[ω_0](E) ) (mod 2π), we have
Im (e^-√(-1)φ(E)Z_ω_0(E,∇) ) = 0,
where ∇ is the Chern connection associated with h.
It is worth pointing out that Eq. (<ref>) is equivalent to Eq. (<ref>). Therefore, from the above theorem we also obtain several new examples of dHYM instantons on higher rank holomorphic vector bundles. The proof of the above result provides a constructive method to obtain examples of Z-critical connections on slope-unstable, non-simple holomorphic vector bundles over P(T_P^2). In particular, Theorem <ref> provides the first example in the literature of a Z-critical connection on a slope-unstable, non-simple holomorphic vector bundle over a Fano threefold.
Organization of the paper. In Section 2, we review some basic generalities on flag varieties. In Section 3, we prove Theorem <ref>, the proof is constructive and can be summarized in Lemma <ref> (Section <ref>), Lemma <ref> (Section <ref>), and Lemma <ref> (Section <ref>). In Section <ref>, we prove Theorem <ref> and Theorem <ref>.
Acknowledgments. The author would like to thank Professor Lino Grama for very helpful conversations. E. M. Correa is partially supported by FAEPEX/Unicamp grant 2528/22 and by São Paulo Research Foundation FAPESP grant 2022/10429-3.
§ GENERALITIES ON FLAG VARIETIES
In this section, we review some basic generalities about flag varieties. For more details on the subject presented in this section, we suggest <cit.>, <cit.>, <cit.>, <cit.>.
§.§ The Picard group of flag varieties
Let G^C be a connected, simply connected, and complex Lie group with simple Lie algebra 𝔤^C. By fixing a Cartan subalgebra 𝔥 and a simple root system Δ⊂𝔥^∗, we have a decomposition of 𝔤^C given by
𝔤^C = 𝔫^-⊕𝔥⊕𝔫^+,
where 𝔫^- = ∑_α∈Φ^-𝔤_α and 𝔫^+ = ∑_α∈Φ^+𝔤_α; here we denote by Φ = Φ^+∪Φ^- the root system associated with the simple root system Δ⊂𝔥^∗. Let us denote by κ the Cartan-Killing form of 𝔤^C. From this, for every α∈Φ^+, we have h_α∈𝔥, such that α = κ(·,h_α), and we can choose x_α∈𝔤_α and y_-α∈𝔤_-α, such that [x_α,y_-α] = h_α. From these data, we can define a Borel subalgebra[A maximal solvable subalgebra of 𝔤^C.] by setting 𝔟 = 𝔥⊕𝔫^+.
In the above setting, ∀ϕ∈𝔥^∗, we also denote ⟨ϕ, α⟩ = ϕ(h_α), ∀α∈Φ^+.
Now we consider the following result (see for instance <cit.>, <cit.>):
Any two Borel subgroups are conjugate.
From the result above, given a Borel subgroup B ⊂ G^C, up to conjugation, we can always suppose that B = exp(𝔟). In this setting, given a parabolic Lie subgroup[A Lie subgroup which contains some Borel subgroup.] P ⊂ G^C, without loss of generality, we can suppose that
P = P_I, for some I ⊂Δ,
where P_I⊂ G^C is the parabolic subgroup which integrates the Lie subalgebra
𝔭_I = 𝔫^+⊕𝔥⊕𝔫(I)^-, with 𝔫(I)^- = ∑_α∈⟨ I ⟩^-𝔤_α.
By definition, we have that P_I = N_G^C(𝔭_I), where N_G^C(𝔭_I) is the normalizer in G^C of 𝔭_I⊂𝔤^C, see for instance <cit.>. In what follows, it will be useful for us to consider the following basic chain of Lie subgroups
T^C⊂ B ⊂ P ⊂ G^C.
For each element in the aforementioned chain of Lie subgroups we have the following characterization:
* T^C = exp(𝔥); (complex torus)
* B = N^+T^C, where N^+ = exp(𝔫^+); (Borel subgroup)
* P = P_I = N_G^C(𝔭_I), for some I ⊂Δ⊂𝔥^∗. (parabolic subgroup)
Now let us recall some basic facts about the representation theory of 𝔤^C, a detailed exposition on the subject can be found in <cit.>. For every α∈Φ, we set
α^∨ := 2/⟨α, α⟩α.
The fundamental weights {ϖ_α | α∈Δ}⊂𝔥^∗ of (𝔤^C,𝔥) are defined by requiring that ⟨ϖ_α, β^∨⟩= δ_αβ, ∀α, β∈Δ. We denote by
Λ^+ = ⊕_α∈ΔZ_≥ 0ϖ_α,
the set of integral dominant weights of 𝔤^C. Let V be an arbitrary finite dimensional 𝔤^C-module. By considering its weight space decomposition
V = ⊕_μ∈Φ(V)V_μ,
such that V_μ = {v ∈ V | h · v = μ(h)v, ∀ h ∈𝔥}≠{0}, ∀μ∈Φ(V) ⊂𝔥^∗, we have the following definition.
A highest weight vector (of weight λ) in a 𝔤^C-module V is a non-zero vector v_λ^+∈ V_λ, such that
x · v_λ^+ = 0, (∀ x ∈𝔫^+).
A weight λ∈Φ(V) associated with a highest weight vector is called highest weight of V.
From above, we consider the following standard results.
Every finite dimensional irreducible 𝔤^C-module V admits a highest weight vector v_λ^+. Moreover, v_λ^+ is the unique highest weight vector of V, up to non-zero scalar multiples.
Let V and W be finite dimensional irreducible 𝔤^C-modules with highest weight λ∈𝔥^∗. Then, V and W are isomorphic.
We will denote by V(λ) a finite dimensional irreducible 𝔤^C-module with highest weight λ∈𝔥^∗.
In the above setting, the following hold:
(1) If V is a finite dimensional irreducible 𝔤^C-module with highest weight λ∈𝔥^∗, then λ∈Λ^+.
(2) If λ∈Λ^+, then there exists a finite dimensional irreducible 𝔤^C-module V, such that V = V(λ).
From the above theorem, it follows that the map λ↦ V(λ) induces an one-to-one correspondence between Λ^+ and the isomorphism classes of finite dimensional irreducible 𝔤^C-modules.
In what follows, it will be useful also to consider the following facts:
(i) For all λ∈Λ^+, we have V(λ) = 𝔘(𝔤^C) · v_λ^+, where 𝔘(𝔤^C) is the universal enveloping algebra of 𝔤^C;
(ii) The fundamental representations are defined by V(ϖ_α), α∈Δ;
(iii) For all λ∈Λ^+, we have the following equivalence of induced irreducible representations
ϱ G^C→GL(V(λ)) ⟺ ϱ_∗𝔤^C→𝔤𝔩(V(λ)),
such that ϱ(exp(x)) = exp(ϱ_∗x), ∀ x ∈𝔤^C, notice that G^C = ⟨exp(𝔤^C) ⟩.
Given a representation ϱ G^C→GL(V(λ)), for the sake of simplicity, we shall denote ϱ(g)v = gv, for all g ∈ G^C, and all v ∈ V(λ). Let G ⊂ G^C be a compact real form for G^C. Given a complex flag variety X_P = G^C/P, regarding X_P as a homogeneous G-space, that is, X_P = G/G∩ P, the following theorem allows us to describe all G-invariant Kähler structures on X_P through elements of representation theory.
Let ω∈Ω^1,1(X_P)^G be a closed invariant real (1,1)-form, then we have
π^∗ω = √(-1)∂∂φ,
where π G^C→ X_P is the natural projection, and φ G^C→R is given by
φ(g) = ∑_α∈Δ\ Ic_αlog (||gv_ϖ_α^+|| ), (∀ g ∈ G^C)
with c_α∈R, ∀α∈Δ\ I. Conversely, every function φ as above defines a closed invariant real (1,1)-form ω_φ∈Ω^1,1(X_P)^G. Moreover, ω_φ defines a G-invariant Kähler form on X_P if and only if c_α > 0, ∀α∈Δ\ I.
It is worth pointing out that the norm || · || considered in the above theorem is a norm induced from some fixed G-invariant inner product ⟨·, ·⟩_α on V(ϖ_α), ∀α∈Δ\ I.
An important consequence of Theorem <ref> is that it allows us to describe the local Kähler potential for any homogeneous Kähler metric in a quite concrete way, for some examples of explicit computations, we suggest <cit.>, <cit.>.
By means of the above theorem we can describe the unique G-invariant representative of each integral class in H^2(X_P,Z). In fact, consider the holomorphic P-principal bundle P ↪ G^C→ X_P. By choosing a trivializing open covering X_P = ⋃_i ∈ JU_i, in terms of Čech cocycles we can write
G^C = {(U_i)_i ∈ J, ψ_ij U_i∩ U_j→ P }.
Given ϖ_α∈Λ^+, we consider the induced character ϑ_ϖ_α∈Hom(T^C,C^×), such that (dϑ_ϖ_α)_e = ϖ_α. Since P = P_I, we have the decomposition
P_I = [P_I,P_I]T(Δ\ I)^C, such that T(Δ\ I)^C = exp{∑_α∈Δ\ Ia_αh_α | a_α∈C},
e.g. <cit.>, so we can consider the extension ϑ_ϖ_α∈Hom(P,C^×). From the homomorphism ϑ_ϖ_α P →C^× one can equip C with a structure of P-space, such that pz = ϑ_ϖ_α(p)^-1z, ∀ p ∈ P, and ∀ z ∈C. Denoting by C_-ϖ_α this P-space, we can form an associated holomorphic line bundle 𝒪_α(1) = G^C×_PC_-ϖ_α, which can be described in terms of Čech cocycles by
𝒪_α(1) = {(U_i)_i ∈ J,ϑ_ϖ_α^-1∘ψ_i j U_i∩ U_j→C^×},
that is, 𝒪_α(1) = {g_ij}∈Ȟ^1(X_P,𝒪_X_P^∗), such that g_ij = ϑ_ϖ_α^-1∘ψ_i j, ∀ i,j ∈ J.
Given a parabolic Lie subgroup P ⊂ G^C, such that P = P_I, for some I ⊂Δ, the decomposition Eq. (<ref>) shows us that Hom(P,C^×) = Hom(T(Δ\ I)^C,C^×). Therefore, if we take ϖ_α∈Λ^+, such that α∈ I, it follows that 𝒪_α(1) = X_P×C, i.e., the associated holomorphic line bundle 𝒪_α(1) is trivial.
Throughout this paper we shall use the following notation
𝒪_α(k) := 𝒪_α(1)^⊗ k,
for every k ∈Z and every α∈Δ\ I.
Given 𝒪_α(1) ∈Pic(X_P), such that α∈Δ\ I, as described above, if we consider an open covering X_P = ⋃_i ∈ J U_i which trivializes both P ↪ G^C→ X_P and 𝒪_α(1) → X_P, by taking a collection of local sections (s_i)_i ∈ J, such that s_i U_i→ G^C, we can define q_i U_i→R^+, such that
q_i := 1/||s_iv_ϖ_α^+||^2,
for every i ∈ J. Since s_j = s_iψ_ij on U_i∩ U_j≠∅, and pv_ϖ_α^+ = ϑ_ϖ_α(p)v_ϖ_α^+, for every p ∈ P, and every α∈Δ\ I, the collection of functions (q_i)_i ∈ J satisfy q_j = |ϑ_ϖ_α^-1∘ψ_ij|^2q_i on U_i∩ U_j≠∅. Hence, we obtain a collection of functions (q_i)_i ∈ J which satisfies on the overlaps U_i∩ U_j≠∅ the following relation
q_j = |g_ij|^2q_i,
such that g_ij = ϑ_ϖ_α^-1∘ψ_i j, ∀ i,j ∈ J. From this, we can define a Hermitian structure h on 𝒪_α(1) by taking on each trivialization f_i𝒪_α(1)|_U_i→ U_i×C the metric defined by
h(f_i^-1(x,v),f_i^-1(x,w)) = q_i(x) v w̄,
for every (x,v),(x,w) ∈ U_i×C. The Hermitian metric above induces a Chern connection ∇ = d + ∂log(h) with curvature F_∇ satisfying (locally)
√(-1)/2πF_∇ = √(-1)/2π∂∂log ( ||s_iv_ϖ_α^+||^2).
Therefore, by considering the closed G-invariant (1,1)-form Ω_α∈Ω^1,1(X_P)^G, which satisfies π^∗Ω_α = √(-1)∂∂φ_ϖ_α, where π G^C→ G^C / P = X_P, and φ_ϖ_α(g) = 1/2πlog||gv_ϖ_α^+||^2, ∀ g ∈ G^C, we have
Ω_α |_U_i = (π∘ s_i)^∗Ω_α = √(-1)/2πF_∇ |_U_i,
i.e., c_1(𝒪_α(1)) = [ Ω_α], ∀α∈Δ\ I.
Given I ⊂Δ, we shall denote Φ_I^±:= Φ^±\⟨ I ⟩^±, such that ⟨ I ⟩^± = ⟨ I ⟩∩Φ^±.
In order to perform some local computations we shall consider the open set U^-(P) ⊂ X_P defined by the “opposite" big cell in X_P. This open set is a distinguished coordinate neighbourhood U^-(P) ⊂ X_P of x_0 = eP ∈ X_P defined as follows
U^-(P) = B^-x_0 = R_u(P_I)^-x_0⊂ X_P,
where B^- = exp(𝔥⊕𝔫^-), and
R_u(P_I)^- = ∏_α∈Φ_I^+N_α^-, (opposite unipotent radical)
with N_α^- = exp(𝔤_-α), ∀α∈Φ_I^+, e.g. <cit.>,<cit.>. It is worth mentioning that the opposite big cell defines a contractible open dense subset in X_P, thus the restriction of any vector bundle (principal bundle) over this open set is trivial.
Consider now the following result.
Consider P_β^1 = exp(𝔤_-β)x_0⊂ X_P, such that β∈Φ_I^+. Then,
∫_P_β^1Ω_α = ⟨ϖ_α, β^∨⟩, ∀α∈Δ\ I.
A proof of the above result can be found in <cit.>, see also <cit.> and <cit.>. From the above lemma and Theorem <ref>, we obtain the following fundamental result.
Let X_P be a complex flag variety associated with some parabolic Lie subgroup P = P_I. Then, we have
Pic(X_P) = H^1,1(X_P,Z) = H^2(X_P,Z) = ⊕_α∈Δ\ IZ[Ω_α ].
In the above setting, we consider the weights of P = P_I as being
Λ_P := ⊕_α∈Δ\ IZϖ_α.
From this, the previous result provides Λ_P≅Hom(P,C^×) ≅Pic(X_P), such that
* λ = ∑_α∈Δ\ Ik_αϖ_α↦∏_α∈Δ\ Iϑ_ϖ_α^k_α↦⊗_α∈Δ\ I𝒪_α(k_α);
* E↦ϑ_E := ∏_α∈Δ\ Iϑ_ϖ_α^⟨ c_1(E),[P^1_α] ⟩↦λ(E) := ∑_α∈Δ\ I⟨ c_1(E),[P^1_α] ⟩ϖ_α.
Thus, ∀E∈Pic(X_P), we have λ(E) ∈Λ_P. More generally, ∀ξ∈ H^1,1(X_P,R), we have λ (ξ) ∈Λ_P⊗R, such that
λ(ξ) := ∑_α∈Δ\ I⟨ξ,[P^1_α] ⟩ϖ_α.
From above, for every holomorphic vector bundle E→ X_P, we define λ(E) ∈Λ_P, such that
λ(E) := ∑_α∈Δ\ I⟨ c_1(E),[P_α^1] ⟩ϖ_α,
where c_1(E) = c_1(⋀^rE), such that r = rank(E).
Given any G-invariant Riemannian metric g on X_P, let us denote by ℋ^2(X_P,g) the space of real harmonic 2-forms on X_P with respect to g, and by ℐ_G^1,1(X_P) the space of closed invariant real (1,1)-forms. Combining the result of Proposition <ref> with <cit.>, we obtain
ℐ_G^1,1(X_P) = ℋ^2(X_P,g).
Therefore, the closed G-invariant real (1,1)-forms described in Theorem <ref> are harmonic with respect to any G-invariant Riemannian metric on X_P.
It follows from Eq. (<ref>) and Theorem <ref> that the Kähler cone of a complex flag variety X_P is given explicitly by
𝒦(X_P) = ⊕_α∈Δ\ IR^+[ Ω_α].
It is worth observing that the cone of curves NE(X_P) of a flag variety X_P is generated by the rational curves [P_α^1] ∈π_2(X_P), α∈Δ\ I, see for instance <cit.> and references therein.
Let X_P be a flag variety and let ω_0 be a G-invariant Kähler metric on X_P. Then, for every closed G-invariant real (1,1)-form ψ, the eigenvalues of the endomorphism ω_0^-1∘ψ are given by
q_β(ω_0^-1∘ψ) = ⟨λ([ψ]), β^∨⟩/⟨λ([ω_0]), β^∨⟩, β∈Φ_I^+.
A proof for the above result can be found in <cit.>.
In the setting of the last proposition, since n ψ∧ω_0^n-1 = Λ_ω_0(ψ)ω_0^n, such that n = dim_C(X_P), and Λ_ω_0(ψ)=tr(ω_0^-1∘ψ), it follows that
Λ_ω_0(Ω_α)=∑_β∈Φ_I^+⟨ϖ_α, β^∨⟩/⟨λ([ω_0]), β^∨⟩,
for every α∈Δ\ I. In particular, for every E∈Pic(X_P), we have a Hermitian structure h on E, such that the curvature F_∇ of the Chern connection ∇ = d + ∂log(h) satisfies
√(-1)/2πΛ_ω_0(F_∇) = ∑_β∈Φ_I^+⟨λ(E), β^∨⟩/⟨λ([ω_0]), β^∨⟩.
From this, we have that ∇ is a Hermitian-Yang-Mills (HYM) connection. Notice that
c_1(E) = ∑_α∈Δ\ I⟨λ(E), α^∨⟩ [Ω_α],
for every E∈Pic(X_P), i.e., the curvature of the HYM connection ∇ on E coincides with the G-invariant representative of c_1(E).
As a consequence of Proposition <ref>, in <cit.>, the author proved the following result.
Given a Kähler class [ω] ∈𝒦(X_P), then for every [ψ] ∈ H^1,1(X_P,R) we have
Θ̂: = Arg∫_X_P(ω + √(-1)ψ)^n/n! = ∑_β∈Φ_I^+arctan( ⟨λ([ψ]),β^∨⟩/⟨λ([ω]),β^∨⟩) (mod 2π),
such that λ([ψ]), λ([ω_0]) ∈Λ_P⊗R. In particular, fixed the unique G-invariant representative ω_0∈ [ω], there exists ϕ∈ C^∞(X_P), such that χ_ϕ:= ψ + √(-1)∂∂ϕ satisfies the deformed Hermitian Yang-Mills equation
Im ( ω_0 + √(-1)χ_ϕ)^n = tan(Θ̂) Re ( ω_0 + √(-1)χ_ϕ)^n.
In the above setting, given [ψ] ∈ H^1,1(X_P,R), by considering the G-invariant representative χ∈ [ψ], it follows that the Lagrangian phase of χ with respect to some G-invariant Kähler metric ω_0 is given by[In this paper we consider the principal value branch of arctan(x) given by (-π/2,π/2).]
Θ_ω_0(χ) = ∑_β∈Φ_I^+arctan( ⟨λ([χ]),β^∨⟩/⟨λ([ω_0]),β^∨⟩),
i.e., we have Θ̂ = Θ_ω_0(χ) (mod 2π). In summary, the solution of the dHYM equation in [ψ] ∈ H^1,1(X_P,R) is given by the unique G-invariant representative of the cohomology class [ψ]. In particular, if [ψ] = c_1(E), for some E∈Pic(X_P), combining this last fact with the ideas described in Remark <ref>, we have a Hermitian structure h on E, such that the curvature F_∇ of the associated Chern connection ∇ satisfies:
* √(-1)Λ_ω_0(F_∇) = const;
* Im (e^-√(-1)Θ̂(ω_0 - 1/2πF_∇)^n ) = 0.
In conclusion, ∇ is a HYM instanton and a dHYM instanton.
§.§ The first Chern class of flag varieties
In this subsection, we shall review some basic facts related with the Ricci form of G-invariant Kähler metrics on flag varieties. Let X_P be a complex flag variety associated with some parabolic Lie subgroup P = P_I⊂ G^C. By considering the identification T_x_0^1,0X_P≅𝔪⊂𝔤^C, such that
𝔪 = ∑_α∈Φ_I^-𝔤_α,
we can realize T^1,0X_P as being a holomorphic vector bundle, associated with the holomorphic P-principal bundle P ↪ G^C→ X_P, given by
T^1,0X_P = {(U_i)_i ∈ J, Ad∘ψ_i j U_i∩ U_j→GL(𝔪) },
where Ad P →GL(𝔪) is the isotropy representation. From this, we obtain
K_X_P^-1 = det(T^1,0X_P) = {(U_i)_i ∈ J, det(Ad∘ψ_i j): U_i∩ U_j→C^×}.
Since P= [P,P] T(Δ\ I)^C, considering det∘Ad∈Hom(T(Δ\ I)^C,C^×), we have
det(Ad(exp(t))) = e^tr(ad(t)|_𝔪) = e^- ⟨δ_P,t⟩,
∀t∈Lie(T(Δ\ I)^C), such that δ_P = ∑_α∈Φ_I^+α. Denoting ϑ_δ_P^-1 = det∘Ad, it follows that
ϑ_δ_P = ∏_α∈Δ\ Iϑ_ϖ_α^⟨δ_P,α^∨⟩⟹K_X_P^-1 = ⊗_α∈Δ\ I𝒪_α(ℓ_α),
such that ℓ_α = ⟨δ_P, α^∨⟩, ∀α∈Δ\ I. Notice that λ(K_X_P^-1) = δ_P, see Eq. (<ref>). If we consider the invariant Kähler metric ρ_0∈Ω^1,1(X_P)^G defined by
ρ_0 = ∑_α∈Δ\ I2 π⟨δ_P, α^∨⟩Ω_α,
it follows that
c_1(X_P) = [ ρ_0/2π].
By the uniqueness of G-invariant representative of c_1(X_P), we have
Ric(ρ_0) = ρ_0,
i.e., ρ_0∈Ω^1,1(X_P)^G defines a G-ivariant Kähler-Einstein metric on X_P (cf. <cit.>).
Given any G-invariant Kähler metric ω on X_P, we have Ric(ω) = ρ_0. Thus, it follows that the smooth function det(ω)/det(ρ_0) is constant. From this, we obtain
Vol(X_P,ω) = 1/n!∫_X_Pω^n = det(ρ_0^-1∘ω)/n!∫_X_Pρ_0^n.
Since det(ρ_0^-1∘ω) = 1/(2π)^n∏_β∈Φ_I^+⟨λ([ω]),β^∨⟩/⟨δ_P,β^∨⟩ and 1/n!∫_X_Pc_1(X_P)^n = ∏_β∈Φ_I^+⟨δ_P,β^∨⟩/⟨ϱ^+,β^∨⟩, we conclude that[cf. <cit.>.]
Vol(X_P,ω) = ∏_β∈Φ_I^+⟨λ([ω]),β^∨⟩/⟨ϱ^+,β^∨⟩,
where ϱ^+ = 1/2∑_α∈Φ^+α. Combining the above formula with the ideas introduced in Remark <ref> we obtain the following expression for the degree of a holomorphic vector bundle E→ X_P with respect to some G-invariant Kähler metric ω on X_P:
deg_ω(E) = ∫_X_Pc_1(E) ∧ [ω]^n-1 = (n-1)! [∑_β∈Φ_I^+⟨λ(E), β^∨⟩/⟨λ([ω]), β^∨⟩ ] [∏_β∈Φ_I^+⟨λ([ω]),β^∨⟩/⟨ϱ^+,β^∨⟩ ],
such that λ(E) ∈Λ_P, and λ([ω]) ∈Λ_P⊗R.
§ PROOF OF THEOREM A
In this section, we will prove the following theorem.
Fixed the unique SU(3)-invariant Kähler metric ω_0∈ c_1(P(T_P^2)), then there exist holomorphic vector bundles E_j→P(T_P^2), j=1,2,3, satisfying the following:
* rank(E_j) > 1, j = 1,2,3;
* E_1 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) = c1_E_1,
Im (e^-√(-1)Θ̂(E_1) (ω_0⊗1_E_1 - 1/2πF_∇ )^3 ) = 0,
such that c = π/8μ_[ω_0](E_1) and Θ̂(E_1) = Arg∫_P(T_P^2)tr (ω_0⊗1_E_1 - 1/2πF_∇)^3 (mod 2π).
* E_2 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) = c1_E_2,
Im (e^-√(-1)Θ̂(E_2) (ω_0⊗1_E_2 - 1/2πF_∇ )^3 ) ≠ 0,
for c = π/8μ_[ω_0](E_2) and for all Θ̂(E_2) ∈R.
* E_3 admits a Hermitian structure h with associated Chern connection ∇ satisfying
√(-1)Λ_ω_0(F_∇) ≠ c1_E_3,
Im (e^-√(-1)Θ̂(E_3) (ω_0⊗1_E_3 - 1/2πF_∇ )^3 ) = 0,
for all c ∈R, such that Θ̂(E_3) = Arg∫_P(T_P^2)tr (ω_0⊗1_E_3 - 1/2πF_∇)^3 (mod 2π).
In particular, E_1 and E_2 are slope-polystable, and E_3 is slope-unstable.
The proof which we will present for the above result is constructive, which means that we will construct explicitly the examples of Hermitian connections ∇ on higher rank holomorphic vector bundles E→P(T_P^2) which illustrate the following cases:
* Type I: ∇ is both a HYM instanton and a dHYM instanton;
* Type II: ∇ is a HYM instanton but not a dHYM instanton;
* Type III: ∇ is a dHYM instanton but not a HYM instanton.
As we have seen (Remark <ref>), every line bundle E→P(T_P^2) admits a Hermitian connection which is Type I. However, explicit examples of Hermitian connections on higher rank holomorphic vector bundles of Type I, or which illustrate the Type II and Type III cases, are not known in the literature. In fact, as far as the author knows, no concrete examples of dHYM instantons on higher rank holomorphic vector bundles are known so far. It is worth pointing out that the first example illustrating the existence of a higher rank dHYM instanton was provided in <cit.>.
§.§ Line bundle case
Consider the complex simple Lie group G^C = SL_3(C). In this case, the structure of the associated Lie algebra 𝔰𝔩_3(C) can be completely determined by means of its Dynkin diagram
∘—∘ (Dynkin diagram of type A_2, nodes labelled α_1 and α_2)
More precisely, fixed the Cartan subalgebra 𝔥⊂𝔰𝔩_3(C) of diagonal matrices, we have the associated simple root system given by Δ = {α_1,α_2}, such that
α_j(diag(d_1,d_2,d_3)) = d_j - d_j+1, j = 1,2,
∀ diag(d_1,d_2,d_3) ∈𝔥. The set of positive roots in this case is given by
Φ^+ = {α_1, α_2, α_3 = α_1 + α_2}.
Considering the Cartan-Killing form[In this case, we have κ(X,Y) = 6tr(XY), ∀ X,Y ∈𝔰𝔩_3(C), see for instance <cit.>.] κ(X,Y) = tr(ad(X)ad(Y)), ∀ X,Y ∈𝔰𝔩_3(C), it follows that α_j = κ(·,h_α_j), j =1,2,3, such that[Notice that ⟨α_j,α_j⟩ = α_j(h_α_j) = 1/3, ∀ j = 1,2,3.]
h_α_1 =1/6(E_11 - E_22), h_α_2 =1/6(E_22 - E_33), h_α_3 =1/6(E_11 - E_33),
here we consider the matrices E_ij as being the elements of the standard basis of 𝔤𝔩_3(C). Moreover, we have the following relation between simple roots and fundametal weights:
[ α_1; α_2 ] = [ 2 -1; -1 2 ][ ϖ_α_1; ϖ_α_2 ], [ ϖ_α_1; ϖ_α_2 ] = 1/3[ 2 1; 1 2 ][ α_1; α_2 ],
here we consider the Cartan matrix C = (C_ij) of 𝔰𝔩_3(C) given by
C = [ 2 -1; -1 2 ], C_ij = 2⟨α_i, α_j⟩/⟨α_j, α_j⟩,
for more details on the above subject, see for instance <cit.>.
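As a quick numerical cross-check of Eq. (<ref>): the relation between simple roots and fundamental weights is just the inversion of the Cartan matrix. A minimal Python sketch (ours, not taken from the references above):

    import numpy as np

    # Cartan matrix of sl_3(C) (type A_2)
    C = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])

    # Rows of C express the simple roots in the basis of fundamental weights,
    # so C^{-1} expresses the fundamental weights in the basis of simple roots.
    print(np.linalg.inv(C))   # [[2/3, 1/3], [1/3, 2/3]] = (1/3)[[2, 1], [1, 2]]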
Fixed the standard Borel subgroup B ⊂SL_3(C), i.e.,
B = {[ ∗ ∗ ∗; 0 ∗ ∗; 0 0 ∗ ]∈SL_3(C)},
we consider the flag variety obtained from I = ∅, i.e., the homogeneous Fano threefold given by the Wallach space P(T_P^2) = SL_3(C)/B. In this particular case, we have the following:
(i) H^2(P(T_P^2),R) = H^1,1(P(T_P^2),R) = R[Ω_α_1] ⊕R[Ω_α_2];
(ii) Pic(P(T_P^2)) = {𝒪_α_1(s_1) ⊗𝒪_α_2(s_2) | s_1, s_2∈Z}.
Let ω_0 be the unique SU(3)-invariant Kähler metric on P(T_P^2), such that [ω_0] = c_1(P(T_P^2))[It is worth pointing out that there is nothing special with this choice. In fact, all the computations presented in this example work for an arbitrary choice of SU(3)-invariant (integral) Kähler class on P(T_P^2).]. Since λ(K_P(T_P^2)^-1) = δ_B = 2(ϖ_α_1 + ϖ_α_2), from Eq. (<ref>), it follows that
ω_0 = ⟨δ_B, α_1^∨⟩Ω_α_1 + ⟨δ_B, α_2^∨⟩Ω_α_2 = 2 (Ω_α_1 + Ω_α_2),
in particular, notice that λ([ω_0]) = δ_B = 2ϱ^+, thus
Vol(P(T_P^2),ω_0) = ∏_j = 1^3⟨δ_B,α_j^∨⟩/⟨ϱ^+,α_j^∨⟩ = 8,
see Eq. (<ref>). Given any [ψ] ∈ H^1,1(P(T_P^2),R), from Theorem <ref>, we have
Θ̂ = Arg∫_P(T_P^2)(ω_0 + √(-1)ψ)^3 = ∑_j = 1^3arctan( ⟨λ([ψ]),α_j^∨⟩/⟨δ_B,α_j^∨⟩) (mod 2π),
notice that, since I = ∅, it follows that Φ_I^+ = Φ^+ = {α_1, α_2, α_3 = α_1 + α_2}. Therefore, if we suppose that [ψ] = s_1[Ω_α_1] + s_2[Ω_α_2], for some s_1,s_2∈R, by considering the Cartan matrix C = (C_ij) of 𝔰𝔩_3(C) (see Eq. (<ref>)), we obtain the following:
* ⟨δ_B,α_1^∨⟩ = ⟨δ_B,α_2^∨⟩ = 2 and ⟨δ_B,α_3^∨⟩ = 4;
* ⟨λ([ψ]),α_1^∨⟩ = s_1, ⟨λ([ψ]),α_2^∨⟩ = s_2, ⟨λ([ψ]),α_3^∨⟩ = s_1 + s_2.
From above, we conclude that
Θ̂= arctan ( s_1/2) + arctan ( s_2/2) + arctan((s_1 + s_2)/4) (mod 2π).
From Eq. (<ref>), given an arbitrary SU(3)-invariant (1,1)-form χ = s_1Ω_α_1 + s_2Ω_α_2, we have the following concrete expression for its Lagrangian phase w.r.t. ω_0:
Θ_ω_0(χ) = arctan ( s_1/2) + arctan ( s_2/2) + arctan((s_1 + s_2)/4).
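For concreteness, the Lagrangian phase, the mean curvature Λ_ω_0 and the volume Vol(P(T_P^2),ω_0) = 8 computed before can all be checked numerically. The following is a minimal Python sketch (ours; the helper names lam, lagrangian_phase and mean_curvature are ad hoc):

    import math

    # coroot pairings ⟨λ, β^∨⟩ over Φ^+ = {α_1, α_2, α_1+α_2}, λ = s1·ϖ_1 + s2·ϖ_2
    lam = lambda s1, s2: (s1, s2, s1 + s2)
    DELTA_B = lam(2, 2)       # λ([ω_0]) = δ_B
    RHO_PLUS = lam(1, 1)      # ϱ^+ = ϖ_1 + ϖ_2

    def lagrangian_phase(s1, s2):
        """Θ_ω0(χ) for χ = s1·Ω_α1 + s2·Ω_α2."""
        return sum(math.atan(p / q) for p, q in zip(lam(s1, s2), DELTA_B))

    def mean_curvature(s1, s2):
        """Λ_ω0(χ) = Σ_β ⟨λ(χ), β^∨⟩ / ⟨δ_B, β^∨⟩ = (3/4)(s1 + s2)."""
        return sum(p / q for p, q in zip(lam(s1, s2), DELTA_B))

    vol = 1.0
    for p, q in zip(DELTA_B, RHO_PLUS):
        vol *= p / q
    print(vol)                                            # 8.0, as computed above
    print(lagrangian_phase(1, 1), mean_curvature(1, 1))   # ≈ 1.391 and 1.5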
Also, as we have seen in Remark <ref>, if [χ] = c_1(E), for some E∈Pic(P(T_P^2)), then we have a Hermitian metric h on E for which the associated Chern connection ∇ satisfies
* √(-1)Λ_ω_0(F_∇) = const (HYM);
* Im (e^-√(-1)Θ̂(ω_0 - 1/2πF_∇)^3 ) = 0 (dHYM).
Let us describe ∇ explicitly. From Proposition <ref>, we have
E = 𝒪_α_1(s_1) ⊗𝒪_α_2(s_2),
such that s_1,s_2∈Z. Given an open set U ⊂P(T_P^2) which trivializes both E→P(T_P^2) and B ↪SL_3(C) →P(T_P^2), and denoting by w the fiber coordinate in E|_U, we can construct a Hermitian structure h on E by gluing the local Hermitian structures
h_U = ww/||s_Uv_ϖ_α_1^+||^2s_1||s_Uv_ϖ_α_2^+||^2s_2,
where s_U U ⊂P(T_P^2)→SL_3(C) is some local section, here we consider ||·|| defined by some fixed SU(3)-invariant inner product on V(ϖ_α_k), k = 1,2. From this, we can describe the associated Chern connection ∇ (locally) by
∇|_U = d + A_U,
where
A_U = - ∂log ( ||s_Uv_ϖ_α_1^+||^2s_1||s_Uv_ϖ_α_2^+||^2s_2).
In particular, consider U = U^-(B) ⊂P(T_P^2), such that
U^-(B) = {[ 1 0 0; z_1 1 0; z_2 z_3 1 ]B | z_1,z_2,z_3∈C} (opposite big cell).
The open set above is dense and contractible, so it trivializes the desired bundles over P(T_P^2). By taking the local section s_U: U^-(B) →SL_3(C), such that s_U(nB) = n, ∀ nB ∈ U^-(B), and considering
V(ϖ_α_1) = C^3 and V(ϖ_α_2) = ⋀^2(C^3),
where v_ϖ_α_1^+ = e_1, and v_ϖ_α_2^+ = e_1∧ e_2, fixed ||·|| defined by the standard SU(3)-invariant inner product on C^3 and ⋀^2(C^3), we obtain
A_U^-(B) = - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^s_1 (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 )^s_2 ].
From above we obtain an explicit example of HYM instanton which is also a dHYM instanton on E = 𝒪_α_1(s_1) ⊗𝒪_α_2(s_2).
§.§ dHYM instantons and polystable holomorphic vector bundles
Since every line bundle L→P(T_P^2) admits a dHYM connection, a trivial way to construct higher rank dHYM instantons is taking Whitney sums of the form:
E := L⊕⋯⊕L (r times),
In the above setting, the dHYM connection on L induces a Hermitian connection ∇ on E, such that
* ∇^0,1 = ∂;
* √(-1)Λ_ω_0(F_∇) = c 1_E (HYM);
* Im (e^-√(-1)Θ̂(E)(ω_0⊗1_E - 1/2πF_∇)^3 ) = 0 (dHYM).
The above fact is a consequence of the ideas presented in Remark <ref>. Notice that, in the above setting, we have
Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 = Θ̂(L) (mod 2π),
where
Θ̂(L) = Θ_ω_0(χ_L) = arctan ( s_1/2) + arctan ( s_2/2) + arctan((s_1 + s_2)/4) (mod 2π),
such that s_j = ⟨λ(L),α_j^∨⟩, j = 1,2, and [χ_L] = c_1(L), see Eq. (<ref>) and Eq. (<ref>).
From above, we have several examples of higher rank dHYM instantons. We shall refer to this class of examples as trivial dHYM instantons.
In order to construct examples of solutions to the dHYM equation which are non-trivial, we proceed in the following way. Given some real number m ∈R, we define the following subsets of the Picard group Pic(P(T_P^2)):
* 𝒟_m(ω_0):= {L∈Pic(P(T_P^2)) | Λ_ω_0(χ_L) = m},
* ℒ_m(ω_0):= {L∈Pic(P(T_P^2)) | Θ_ω_0(χ_L) = m},
such that χ_L∈ c_1(L) denotes the associated SU(3)-invariant representative. Notice that 𝒟_m(ω_0) and ℒ_m(ω_0) can be described, respectively, by the following concrete equations:
* Given L = 𝒪_α_1(s_1) ⊗𝒪_α_2(s_2) ∈𝒟_m(ω_0), then
Λ_ω_0(χ_L) = m ⟺ 3/4(s_1 + s_2) = m;
* Given L = 𝒪_α_1(s_1) ⊗𝒪_α_2(s_2) ∈ℒ_m(ω_0), then
Θ_ω_0(χ_L) = m ⟺ arctan ( s_1/2) + arctan ( s_2/2) + arctan((s_1 + s_2)/4) = m.
From above, in particular, one can check that m = 0 ⇒𝒟_0(ω_0) = ℒ_0(ω_0). Moreover, since
deg_ω_0(L) = ∫_P(T_P^2)χ_L∧ω_0^2 = 1/3Λ_ω_0(χ_L)∫_P(T_P^2)ω_0^3 = 3!/3Λ_ω_0(χ_L) Vol(P(T_P^2),ω_0),
see Eq. (<ref>), we conclude that
𝒟_0(ω_0) = ℒ_0(ω_0) = {L∈Pic(P(T_P^2)) | deg_ω_0(L) = 0} = Pic_ω_0^0(P(T_P^2)).
Therefore, it follows that 𝒟_0(ω_0) = ℒ_0(ω_0) is the subgroup
Pic_ω_0^0(P(T_P^2)) ⊂Pic(P(T_P^2)).
From Eq. (<ref>), we have the following description in terms of generators
𝒟_0(ω_0) = ℒ_0(ω_0) = Pic_ω_0^0(P(T_P^2)) = ⟨𝒪_α_1(1) ⊗𝒪_α_2(-1)⟩.
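A short numerical check (a Python sketch under the same conventions as above) confirms that every power of the generator has vanishing slope and vanishing Lagrangian phase:

    import math

    theta = lambda s1, s2: (math.atan(s1/2) + math.atan(s2/2)
                            + math.atan((s1 + s2)/4))
    slope = lambda s1, s2: 0.75 * (s1 + s2)   # proportional to deg_ω0

    # powers of the generator O_α1(s) ⊗ O_α2(-s) of Pic^0_ω0(P(T_P^2))
    for s in range(1, 6):
        print(s, slope(s, -s), theta(s, -s))  # slope and phase both vanish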
Hence, taking distinct elements L_1,…,L_r∈Pic_ω_0^0(P(T_P^2)), one can define
E := L_1⊕⋯⊕L_r.
Denoting by χ_j, the SU(3)-invariant representative of c_1(L_j), ∀ j =1,…,r, we have an induced Hermitian structure h on E, such that the curvature of the associated Chern connection ∇ satisfies
√(-1)/2πF_∇ = [ χ_1 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ χ_r ].
By construction, we have
√(-1)Λ_ω_0(F_∇) = 0,
i.e., ∇ is an HYM connection, see Remark <ref>. In particular, notice that μ_[ω_0](E) = 0. Moreover, since
(ω_0⊗1_E - 1/2πF_∇ )^3 = [ (ω_0 + √(-1)χ_1)^3 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ (ω_0 + √(-1)χ_r)^3 ],
and Θ_ω_0(χ_j) = 0, ∀ j = 1,…,r, it follows from Theorem <ref> (see also Remark <ref>) that
Im ( e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) = 0,
such that
Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 = 0 (mod 2π).
Thus, we have that ∇ is a non-trivial example of a dHYM connection. The class of examples presented above illustrates higher rank Hermitian connections of Type I. The above construction can be summarized in the following lemma.
There exists a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), with rank(E) > 1, such that the associated Chern connection ∇ satisfies
√(-1)Λ_ω_0(F_∇) = c1_E,
Im (e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) = 0,
where c = π/8μ_[ω_0](E) and Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 (mod 2π). In particular, we have that E is slope-polystable.
Proceeding as in Remark <ref>, we can describe ∇ obtained above in an explicit way. In fact, considering the opposite big cell U^-(B) ⊂P(T_P^2), and denoting
L_k = 𝒪_α_1(ℓ_k) ⊗𝒪_α_2(-ℓ_k),
such that ℓ_k∈Z, k = 1,…,r, we have the dHYM instanton given (locally) by
∇|_U^-(B) = d + [ A_U^-(B)^(1) ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ A_U^-(B)^(r) ],
where
A_U^-(B)^(k) = - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^ℓ_k/ (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 )^ℓ_k ],
for all k = 1,…,r.
Let us now construct examples of Hermitian connections of Type II on holomorphic vector bundles of rank 2. Let m = 3/4, and consider 𝒟_3/4(ω_0). By construction, we can take F, G∈𝒟_3/4(ω_0), such that
F = 𝒪_α_1(2) ⊗𝒪_α_2(-1) and G = 𝒪_α_1(3) ⊗𝒪_α_2(-2).
From above, it follows that
Λ_ω_0(χ_F) = Λ_ω_0(χ_G) = 3/4 and (Θ_ω_0(χ_F) - Θ_ω_0(χ_G))/2π∉Z.
In particular, we have
μ_[ω_0](F) = μ_[ω_0](G) = 12.
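Numerically (same conventions as in the sketch above), the two phases are indeed distinct:

    import math

    theta = lambda s1, s2: (math.atan(s1/2) + math.atan(s2/2)
                            + math.atan((s1 + s2)/4))

    theta_F = theta(2, -1)   # F = O_α1(2) ⊗ O_α2(-1)
    theta_G = theta(3, -2)   # G = O_α1(3) ⊗ O_α2(-2)
    print(theta_F, theta_G)                      # ≈ 0.5667 vs ≈ 0.4424
    print((theta_F - theta_G) / (2 * math.pi))   # ≈ 0.0198, not an integer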
If we define E = F⊕G, it follows that there exits a Hermitian structure h on E, such that the curvature of the associated Chern connection ∇ is given by
√(-1)/2πF_∇ = [ χ_F 0; 0 χ_G ].
From the ideas introduced in Remark <ref>, the Hermitian connection ∇ on E = F⊕G mentioned above can be described (locally) by
∇|_U^-(B) = d + [ A_F 0; 0 A_G ],
such that
* A_F = - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^2/ (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 ) ],
* A_G= - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^3/ (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 )^2 ].
By construction, from Theorem <ref> (see Remark <ref>), we have
* √(-1)Λ_ω_0(F_∇) = (3π/2) 1_E = (π/8) μ_[ω_0](E) 1_E, notice that μ_[ω_0](E) = 12;
* Im (e^-√(-1)Θ̂(E)(ω_0⊗1_E - 1/2πF_∇)^3 ) ≠ 0, ∀Θ̂(E) ∈R.
Therefore, ∇ is an example of HYM instanton which is not a dHYM instanton, i.e., an example of Hermitian connection of Type II. In summary, we have the following.
There exists a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), with rank(E) > 1, such that the associated Chern connection ∇ satisfies
√(-1)Λ_ω_0(F_∇) = c1_E,
Im (e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) ≠ 0,
for c = π/8μ_[ω_0](E) and for all Θ̂(E) ∈R. In particular, we have that E is slope-polystable.
Notice that 𝒟_3/4(ω_0) is the set of line bundles (up to isomorphism) of the form
L = 𝒪_α_1(s) ⊗𝒪_α_2(1-s), s ∈Z.
Therefore, given some integer r > 0, if we define
E:= ⊕_s = 2^r+1 (𝒪_α_1(s) ⊗𝒪_α_2(1-s) ),
by following a similar argument as in the case described in Eq. (<ref>), one can construct examples of higher rank HYM instantons of Type II.
§.§ dHYM instantons on unstable holomorphic vector bundles
In this subsection, we will construct some examples of dHYM connections which do not satisfy the HYM equation. In order to do so, we proceed in the following way. Let m_1,m_2,m_3∈R, such that:
* m_2≠ m_3;
* ℒ_m_1(ω_0) ≠∅ and 𝒟_m_i(ω_0) ≠∅, i = 2,3;
* ℒ_m_1(ω_0) ∩𝒟_m_i(ω_0) ≠∅, i =2,3.
Given m_1,m_2,m_3∈R satisfying the above conditions, let F,G∈ℒ_m_1(ω_0), such that
F∈𝒟_m_2(ω_0) and G∈𝒟_m_3(ω_0).
From above, we define
E = F⊕G.
By construction, we have that
deg_ω_0(F) = 3!/3m_2Vol(P(T_P^2),ω_0) and deg_ω_0(G) = 3!/3m_3Vol(P(T_P^2),ω_0),
Thus, it follows that
μ_[ω_0](F) = deg_ω_0(F) ≠deg_ω_0(G) = μ_[ω_0](G),
so E is not slope-semistable (e.g. <cit.>). Since a slope-polystable holomorphic vector bundle is in particular slope-semistable, from Kobayashi-Hitchin correspondence <cit.>,<cit.>, it follows that E does not admit a HYM connection. On the other hand, since F, G∈ℒ_m_1(ω_0), it follows that
Θ_ω_0(χ_F) = Θ_ω_0(χ_G) = m_1.
where χ_F∈ c_1(F) and χ_G∈ c_1(G) are the SU(3)-invariant representatives. Hence, it follows that there exists a Hermitian structure h on E, such that the curvature of the associated Chern connection ∇ is given by
√(-1)/2πF_∇ = [ χ_F 0; 0 χ_G ].
Since
tr (ω_0⊗1_E - 1/2πF_∇)^3 = (ω_0 + √(-1)χ_F )^3 + (ω_0 + √(-1)χ_G )^3,
it follows that
Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 = m_1 (mod 2π).
Hence, from Theorem <ref> (see also Remark <ref>), we obtain
Im ( e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) = 0.
In order to construct an explicit example which illustrates the above construction, consider m_1 = π. From this, we take two different integer solutions of the equation
arctan ( s_1/2) + arctan ( s_2/2) + arctan((s_1 + s_2)/4) = π,
and consider the associated line bundles F,G∈ℒ_π(ω_0). It is worth pointing out that the solutions of Eq. (<ref>) satisfy
s_1s_2=12, s_1 > 0.
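The integer solutions can also be enumerated by brute force; a small Python sketch (ours) recovers exactly the divisor pairs of 12:

    import math

    theta = lambda s1, s2: (math.atan(s1/2) + math.atan(s2/2)
                            + math.atan((s1 + s2)/4))

    sols = [(s1, s2) for s1 in range(-50, 51) for s2 in range(-50, 51)
            if abs(theta(s1, s2) - math.pi) < 1e-9]
    print(sols)   # [(1, 12), (2, 6), (3, 4), (4, 3), (6, 2), (12, 1)]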
Thus, we can take, for instance,
F = 𝒪_α_1(2) ⊗𝒪_α_2(6) and G = 𝒪_α_1(3) ⊗𝒪_α_2(4).
From above, we have
Λ_ω_0(χ_F) = 6 and Λ_ω_0(χ_G) = 21/4.
Therefore, if we consider
m_2 = 6 and m_3 = 21/4,
it follows that m_1 = π, m_2 = 6, and m_3 = 21/4, satisfy the desired properties. In this case, we can define
E:= F⊕G = ( 𝒪_α_1(2) ⊗𝒪_α_2(6)) ⊕ ( 𝒪_α_1(3) ⊗𝒪_α_2(4)).
By following Remark <ref> and the previous ideas, we have that there exists a Hermitian structure h on E, such that the associated Chern connection ∇ is given by
∇|_U^-(B) = d + [ A_F 0; 0 A_G ],
such that
* A_F= - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^2 (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 )^6 ],
* A_G= - ∂log [ ( 1 + ∑_i = 1^2|z_i|^2 )^3 (1 + |z_3|^2 + |det[ z_1 1; z_2 z_3 ]|^2 )^4 ].
By construction, we have Θ_ω_0(χ_F) = Θ_ω_0(χ_G) =π, thus
Im ( e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) = 0,
such that
Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 = π (mod 2π).
Therefore, ∇ defines a dHYM instanton on E which is not an HYM instanton. In conclusion, we have the following result.
There exists a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), with rank(E) > 1, such that the associated Chern connection ∇ satisfies
√(-1)Λ_ω_0(F_∇) ≠ c1_E,
Im (e^-√(-1)Θ̂(E) (ω_0⊗1_E - 1/2πF_∇ )^3 ) = 0,
for all c ∈R, where Θ̂(E) = Arg∫_P(T_P^2)tr (ω_0⊗1_E - 1/2πF_∇)^3 (mod 2π). In particular, we have that E is slope-unstable.
It is worth mentioning that the subset ℒ_π(ω_0) used in the previous construction is the finite subset of Pic(P(T_P^2)) of line bundles (up to isomorphism) of the form
L = 𝒪_α_1(s) ⊗𝒪_α_2(12/s), s ∈N, s | 12.
If we define
E:= ⊕_s ∈N, s | 12 (𝒪_α_1(s) ⊗𝒪_α_2(12/s) ) or ⊕_s ∈N, s | 12 (𝒪_α_1(12/s) ⊗𝒪_α_2(s) ),
one can check that E→P(T_P^2) is a rank 6 (slope unstable) holomorphic vector bundle. Proceeding similarly to the last example, one can construct a dHYM instanton on E which is not an HYM instanton.
(Theorem <ref>) The proof follows from Lemma <ref>, Lemma <ref>, and Lemma <ref>.
§ PROOFS OF THEOREM B AND THEOREM C
Let X_P be a rational homogeneous variety and let ω_0∈Ω^2(X_P) be a G-invariant Kähler form. In this setting, we have the following.
Given E, F∈Pic(X_P), if
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = 0,
then E⊕F admits a dHYM instanton.
Given E∈Pic(X_P), it follows that
Z_[ω_0](E) = -(-√(-1))^n/n!∫_X_P([ω_0] + √(-1)c_1(E))^n.
Therefore, ∀E, F∈Pic(X_P), we have
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = Im (∫_X_P([ω_0] + √(-1)c_1(E))^n/∫_X_P([ω_0] + √(-1)c_1(F))^n ).
Considering the G-invariant representatives χ_E∈ c_1(E) and χ_F∈ c_1(F), it follows that
Im ( Z_[ω_0](E)/Z_[ω_0](F)) = 0 ⟺ Arg ( ∫_X_P(ω_0 + √(-1)χ_E)^n ) - Arg ( ∫_X_P(ω_0 + √(-1)χ_F)^n ) ∈ 2πZ, i.e., Θ_ω_0(χ_E) - Θ_ω_0(χ_F) ∈ 2πZ,
see Remark <ref>. By taking the Hermitian structures h_E on E and h_F on F, such that the curvatures of the associated Chern connections ∇^E and ∇^F satisfy
√(-1)/2πF_∇^E = χ_E and √(-1)/2πF_∇^F = χ_F,
it follows from Eq. (<ref>) that
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = 0 ⟺ Θ̂(E) = Θ̂(F) (mod 2π).
Considering the Hermitian structure h on E⊕F induced by h_E and h_F, it follows that the curvature of the associated Chern connection ∇ = ∇^E⊕∇^F is given by
√(-1)/2πF_∇ = [ χ_E 0; 0 χ_F ].
From above, we obtain
∫_X_Ptr (ω_0⊗1_E⊕F - 1/2πF_∇)^n= ∫_X_P (ω_0 + √(-1)χ_E)^n + ∫_X_P (ω_0 + √(-1)χ_F)^n.
Thus, we conclude that
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = 0 ⟺ Θ̂(E⊕F) = Θ̂(E) (mod 2π) ⟺ Θ̂(E⊕F) = Θ̂(F) (mod 2π).
Since
e^-√(-1)Θ̂(E⊕F) (ω_0⊗1_E⊕F - 1/2πF_∇ )^n = e^-√(-1)Θ̂(E⊕F)[ (ω_0 + √(-1)χ_E)^n 0; 0 (ω_0 + √(-1)χ_F)^n ],
from Eq. (<ref>) and from Theorem <ref> (see Remark <ref>), we conclude that
Im ( Z_[ω_0](E)/Z_[ω_0](F) ) = 0 ⇒Im ( e^-√(-1)Θ̂(E⊕F) (ω_0⊗1_E⊕F - 1/2πF_∇ )^n) = 0,
i.e., ∇ = ∇^E⊕∇^F is a dHYM instanton on E⊕F.
As a consequence of the above result and the ideas introduced in Section <ref>, we have the following theorem.
Under the hypotheses of Theorem <ref>, for every integer r > 1, there exists a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), such that rank(E) = r, and the following hold:
* E is slope-unstable and h^0(P(T_P^2),End(E)) > 1;
* considering φ(E) = Arg(Z_[ω_0](E) ) (mod 2π), we have
Im (e^-√(-1)φ(E)Z_ω_0(E,∇) ) = 0,
where ∇ is the Chern connection associated with h.
Given a Hermitian holomorphic vector bundle (E,h) → (P(T_P^2),ω_0), denoting by ∇ the associated Chern connection, we have that
ch(E,∇) = exp (√(-1)/2πF_∇ ) = ch_0(E,∇) + ch_1(E,∇) + ch_2(E,∇) + ch_3(E,∇),
such that
ch_k(E,∇) = 1/k! ( √(-1)/2πF_∇ )^k = 1/k! ( √(-1)/2πF_∇ ) ∧⋯∧ ( √(-1)/2πF_∇ ) (k times),
for all k = 0,1,2,3. From above, considering the End(E)-valued (3,3)-form
Z_ω_0(E,∇) := -∑_j=0^3(-√(-1))^j/j!ω_0^j∧ch_3-j(E,∇),
it follows that
Z_ω_0(E,∇) = -∑_j=0^3(-√(-1))^j/j! ( 1/(3-j)!ω_0^j∧ ( √(-1)/2πF_∇ )^3-j ).
Since 1/(j!(3-j)!) = 1/3!\binom{3}{j}, ∀ j = 0,…,3, we obtain
Z_ω_0(E,∇) = -∑_j=0^3(-√(-1))^j/3!\binom{3}{j}ω_0^j∧ ( √(-1)/2πF_∇ )^3-j.
Now we observe that
( ω_0⊗1_E - 1/2πF_∇ )^3 = ∑_j = 0^3(-1)^3-j\binom{3}{j}ω_0^j∧ ( 1/2πF_∇ )^3-j.
Therefore, replacing (-1)^j = (-1)^3(-1)^3-j, j = 0,…,3, in Eq. (<ref>), it follows from Eq. (<ref>) that
Z_ω_0(E,∇) = -(-√(-1))^3/3! ( ω_0⊗1_E - 1/2πF_∇ )^3.
In particular, if E = L_1⊕⋯⊕L_r, such that L_ℓ∈Pic(P(T_P^2)), for every ℓ = 1,…,r, by taking the SU(3)-invariant representative χ_L_ℓ∈ c_1(L_ℓ), ∀ℓ = 1,…,r, and choosing Hermitian structures h_ℓ, such that √(-1)/2πF_∇^(ℓ) = χ_L_ℓ, where ∇^(ℓ) is the associated Chern connection of h_ℓ, for all ℓ = 1,…,r, we have an induced Hermitian structure h on E, such that ∇ = ∇^(1)⊕⋯⊕∇^(r) is the associated Chern connection of h. Thus, we obtain
Z_ω_0(E,∇) = - (-√(-1))^3/3![ (ω_0 + √(-1)χ_L_1)^3 ⋯ 0; ⋮ ⋱ ⋮; 0 ⋯ (ω_0 + √(-1)χ_L_r)^3 ].
On the other hand, we have
Z_[ω_0](E) = -∫_P(T_P^2)e^-√(-1)[ω_0]ch(E) = -∑_ℓ = 1^r∫_P(T_P^2)e^-√(-1)[ω_0]ch(L_ℓ).
Since
Z_[ω_0](L_ℓ) = -∫_P(T_P^2)e^-√(-1)[ω_0]ch(L_ℓ) = -(-√(-1))^3/3!∫_X_P([ω_0] + √(-1)c_1(L_ℓ))^3,
if we suppose that
Im ( Z_[ω_0](L_ℓ)/Z_[ω_0](L_ℓ+1) ) = 0 ⟺ Θ̂(L_ℓ) = Θ̂(L_ℓ+1) (mod 2π),
for all ℓ = 1,…,r-1, it follows that
φ(E) = Arg ( Z_[ω_0](E) ) = [Θ̂(L_ℓ) + 3π/2] (mod 2π),
for every ℓ = 1,…,r, notice that -(-√(-1))^3 = e^3π/2√(-1). Since Θ̂(L_ℓ) = Θ_ω_0(χ_L_ℓ) (mod 2π), for all ℓ = 1,…,r, see for instance Eq. (<ref>), it follows from Eq. (<ref>) that
Im (e^-√(-1)φ(E)Z_ω_0(E,∇) ) = 0 ⟺ Im (e^-√(-1)Θ_ω_0(χ_L_ℓ)(ω_0 + √(-1)χ_L_ℓ)^3 ) = 0,
∀ℓ =1,…,r. From above, in order to conclude the proof, one can consider, for instance, L_1, …,L_r∈Pic(P(T_P^2)), such that
L_1 = 𝒪_α_1(2) ⊗𝒪_α_2(6) and L_ℓ = 𝒪_α_1(3) ⊗𝒪_α_2(4), ∀ℓ = 2, …,r.
Defining E := L_1⊕⋯⊕L_r, we notice that
(A) μ_[ω_0](L_1) ≠μ_[ω_0](L_ℓ), ∀ℓ = 2,…,r,
(B) Θ̂(L_ℓ) = π (mod 2π), ∀ℓ = 1,…,r,
(C) End(E) = End(L_1) ⊕⋯⊕End(L_r),
for item (A) and item (B), see Section <ref>. From item (A) we have that E is slope-unstable, from item (B) and the construction presented above we have that there exists a Hermitian structure h on E, such that the associated Chern connection ∇ is a solution of the equation
Im (e^-√(-1)φ(E)Z_ω_0(E,∇) ) = 0.
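Items (A) and (B) can again be verified numerically with the phase and slope helpers used before (a Python sketch, ours):

    import math

    theta = lambda s1, s2: (math.atan(s1/2) + math.atan(s2/2)
                            + math.atan((s1 + s2)/4))
    slope = lambda s1, s2: 0.75 * (s1 + s2)

    print(theta(2, 6), theta(3, 4))   # both ≈ π: item (B), equal phases
    print(slope(2, 6), slope(3, 4))   # 6.0 vs 5.25: item (A), unequal slopes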
From item (C), it follows that
h^0(P(T_P^2),End(E)) = dim_C ( H^0(P(T_P^2),End(E)) ) > 1,
which concludes the proof.
|
http://arxiv.org/abs/2307.04897v1 | 20230710204356 | Spin-EPR-pair separation by conveyor-mode single electron shuttling in Si/SiGe | [
"Tom Struck",
"Mats Volmer",
"Lino Visser",
"Tobias Offermann",
"Ran Xue",
"Jhih-Sian Tu",
"Stefan Trellenkamp",
"Łukasz Cywiński",
"Hendrik Bluhm",
"Lars R. Schreiber"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
ARQUE Systems GmbH, 52074 Aachen, Germany
JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
Helmholtz Nano Facility (HNF), Forschungszentrum Jülich, Jülich, Germany
Institute of Physics, Polish Academy of Sciences, Warsaw, Poland
[email protected]
JARA-FIT Institute for Quantum Information, Forschungszentrum Jülich GmbH and RWTH Aachen University, Aachen, Germany
ARQUE Systems GmbH, 52074 Aachen, Germany
Long-ranged coherent qubit coupling is a missing function block for scaling up spin qubit based quantum computing solutions. Spin-coherent conveyor-mode electron-shuttling could enable spin quantum-chips with a scalable and sparse qubit-architecture. Its key features are operation with only a few easily tuneable input terminals and compatibility with industrial gate-fabrication. Single electron shuttling in conveyor-mode in a 420 nm long quantum bus has been demonstrated previously. Here we investigate the spin coherence during conveyor-mode shuttling by separating and rejoining an Einstein-Podolsky-Rosen (EPR) spin-pair. Compared to previous work we boost the shuttle velocity by a factor of 10000. We observe a rising spin-qubit dephasing time with longer shuttle distances due to motional narrowing and estimate the spin-shuttle infidelity due to dephasing to be 0.7 % for a total shuttle distance of nominal 560 nm. After shuttling several loops up to an accumulated distance of 3.36 μm, spin-entanglement of the EPR pair is still detectable, giving good perspective for our approach of a shuttle-based scalable quantum computing architecture in silicon.
Spin-EPR-pair separation by conveyor-mode single electron shuttling in Si/SiGe
Lars R. Schreiber
August 12, 2023
==============================================================================
Silicon-based electron-spin qubits show single- and two-qubit gate <cit.> as well as readout <cit.> fidelities reaching the prerequisite for topological quantum error correction <cit.>. This underlines the need to increase the number of spin-qubits on a chip in an architecture that preserves the qubits' manipulation and readout performance. New qubit readout strategies <cit.> and ideas for architectures with sparse <cit.> and dense <cit.> qubit-grids have emerged. Sparse qubit grids promise to eliminate the qubit cross-talk issues of their dense counterparts <cit.> and to solve the signal-fanout problem <cit.> by employing tiles of on-chip control-electronics <cit.>. Sparse qubit architectures require high-fidelity coherent spin couplers that can bridge distances of several micrometers. One type of coupler involves high-impedance superconducting resonators, which necessitate a complex interface between the spin and an electric dipole <cit.>. Other demonstrations focus on shuttling one spin-qubit towards another across an array of tunnel-coupled static quantum dots (QDs), named bucket-brigade shuttling <cit.>. This approach, however, is complicated by the sensitivity of adiabatic Landau-Zener transitions to potential disorder in the quantum well <cit.>.
In this respect, spin shuttling using a moving QD, referred to as conveyor-mode shuttling, is more scalable, as it requires only four easily tunable input signals, independent of its length <cit.>. While coherent spin shuttling preserving entanglement has been demonstrated with surface acoustic waves in piezoelectric materials <cit.>, an array of top-gates connected to four gate sets can induce a moving QD in a Si/SiGe one-dimensional electron channel (1DEC) <cit.>. A spin qubit shuttle device (SQS), also called QuBus, employing conveyor-mode shuttling in Si/SiGe has been demonstrated, with a shuttle distance of 420 nm and a charge shuttling fidelity of (99.42 ± 0.02) % <cit.>. Subsequent improvements pushed the cumulative shuttle distance to 19 μm with a charge shuttling fidelity of (99.7 ± 0.3) % <cit.>.
Here, we go one step further and characterise the spin-coherence of a SQS operated in conveyor-mode. To probe the spin-coherence, we initialize the SQS by creating a spin-entangled Einstein-Podolsky-Rosen (EPR)-pair at one end, separate the EPR-pair by conveyor-mode shuttling at a variable distance and velocity and recombine them to detect the preservation of the spin-entanglement by Pauli-spin blockade (PSB). Compared to previous work <cit.>, we increased the shuttle velocity by four orders of magnitude to 2.8 m/s while preserving the charge shuttle fidelity at (99.72 ± 0.01) % over a distance of nominal 560 nm in total.
By observing coherent oscillations from singlet (S) to unpolarised triplet (T_0) during the shuttle process, we demonstrate the coherence of the shuttled spin-qubit up to a cumulative distance of nominal 3.36 μm. The dephasing time T_2^* of the EPR-pair is initially on par with ST_0 dephasing in a tunnel-coupled double quantum-dot (DQD) in Si/SiGe with a natural abundance of isotopes <cit.>. We observe an increase of T_2^* with the shuttle distance, which demonstrates the predicted enhancement of the dephasing time of the shuttled qubit by motional narrowing <cit.>.
§.§ Device Layout and Method
First, we introduce the SQS device and the experimental methods. The three metallic (Ti/Pt) gate-layers of the SQS device (Fig. <ref>a) are isolated by conformally deposited 7.7 nm thick Al_2O_3 and fabricated by electron-beam lithography and metal lift-off on an undoped Si/Si_0.7Ge_0.3 quantum well with natural abundance of isotopes, similar to Ref. <cit.>. The 1DEC of the SQS is formed in the Si/SiGe quantum well by an approximately 1.2 micron long split-gate with 200 nm gate spacing (purple in Fig. <ref>a). Seventeen so-called clavier gates are fabricated on top with 70 nm gate pitch. Eight gates are fabricated on the second gate layer labelled P1, P8, 3×S1 and 3×S3. Nine gates are on the third layer labelled B1, B2, B8, B9, 3×S2 and 2×S4. Characteristic of our SQS in conveyor mode, the shuttle gates S1, S2, S3, S4 each represent one of the four gate-sets containing two to three clavier gates. Clavier gates of one gate set are electrically connected and thus always on the same electrical potential <cit.>. Since every fourth clavier gate is on the same potential within the shuttle section, the period λ of the electrostatic potential is 280 nm. The SQS contains two single electron transistors (SETs) at both ends, which are used as electron reservoirs and proximate charge sensors sensitive to the electron filling at the ends of the SQS. Due to a broken clavier gate B8 on the right side of the device, only the left side of the SQS is used.
§.§ Pulse Sequence
Fig. <ref>b shows the simplified sequence for a shuttling experiment (details in the method section). It starts with loading four electrons from the left tunnel-coupled SET into the SQS (red and blue triangle stages in Fig. <ref>a,c). Then, we decouple this electron reservoir by raising B1, such that the four electrons are trapped in the first QD confined by gates B1 and B2. Next, we form a DQD under P1 and S1 with B2 controlling the inter-dot tunnel coupling. We initialise the electron system to a spin-singlet state by waiting in (n,m)=(4,0) (stage I) for approximately 1, where n and m are the electron filling numbers of the left and right QD, respectively. Then, we adiabatically pulse to the (3,1) charge state (stages I → S) and close the DQD's tunnel barrier via B2 (S → T in Fig. <ref>b,c).The electron in the right QD forms a spin-singlet with the remaining three electrons. We load four electrons into our system to enhance the energy splitting between singlet and triplet states and thus increase the PSB region in gate space (Fig. <ref>c) <cit.>. The analogy to the two-spin EPR-pair is reasonable, since the simple picture holds that two of the three electrons fill one valley-orbit shell and the remaining electron is in a singlet state with the electron in the right QD <cit.>.
Afterwards, we initiate the electron shuttling process by applying sinusoidal voltage pulses on the shuttle gates S1-S4 (see details in the method section and in Fig. <ref>). During shuttling, the three electrons remain confined in the outermost left QD and only the separated electron is shuttled in a moving QD. After shuttling forward and backward by the same distance (Fig. <ref>b), we increase the tunnel coupling within the DQD again and tune the DQD into PSB (stages T → S → P in Figs. <ref>b and c). In this way, only an EPR pair in the singlet state can tunnel into the (4,0) charge state. For all three triplet states this charge transition is energetically forbidden. Finally, we close the barrier once more to freeze the charge state <cit.> (stage F in Figs. <ref>b and c) and read it out by the current I_SET.
§.§ Coherent Shuttling
In this section, we demonstrate coherent shuttling by measuring ST_0 oscillations as a function of shuttle velocity v_S, distance d and two values of the global magnetic field B (Fig. <ref>a and b). For each measurement of the singlet probability P_S, 50000 shuttle cycles are evaluated. Note that the electron is always shuttled the distance d twice, forward and backward. We apply a simple sinusoidal signal of frequency f to the gates S1, S2, S3 and S4 (see method section Charge Shuttling for the details), thus the shuttle velocity should be approximately constant and the electron shall be in motion throughout the entire shuttle process, from initialisation to readout. The total shuttle time τ_S is adjusted by varying the shuttle velocity v_S=f λ. The maximum velocity is v_max=2.8 m/s and the amplitude of the sinusoidal signals is chosen to be in the regime of large charge shuttling fidelity ℱ_C=(99.72 ± 0.01) % across a shuttle distance d=λ (see methods section Charge Shuttling). We managed to extend this distance to d=1.2 λ=336 nm, finally limited by a drastic drop in electron return probability. The upper bound of v_S does not allow access to data points in the grey triangular areas (labelled with τ_S) of Fig. <ref>a,b at small τ_S and large d.
We fit each line of measured ST_0 oscillations for both B (Fig. <ref>c,d) with
P_S(τ_S) = e^-(τ_S/T_2^*)^2( a_<cos(2πν_<τ_S+φ_<) + a_>cos(2πν_>τ_S+φ_>))+c,
where P_S is the probability of detecting the EPR pair in a singlet state, T_2^* is the ensemble dephasing time of the EPR pair, a_<,>,
ν_<,>, φ_<,> and c are the visibility, frequency, phase and offset of the ST_0-oscillations, respectively. Variations in the offset c may arise from singlet initialisation and detection errors and randomly fluctuate among scan-lines. We empirically find that the data can be best fitted by two oscillations, hence the two cosine terms with their respective frequencies and phases are used. We speculate that this might result from initialising a mixed valley state, which requires further investigation elsewhere. Our fits (Fig. <ref>c,d) match the measured raw-data in Fig. <ref>a,b well.
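For reproducibility, the fit model of Eq. (<ref>) can be implemented, e.g., for scipy.optimize.curve_fit; the following Python sketch is an illustration (the parameter names are ours, not taken from our analysis code):

    import numpy as np
    from scipy.optimize import curve_fit

    def ps_model(tau_s, T2s, a1, nu1, phi1, a2, nu2, phi2, c):
        """Gaussian envelope times two cosine components, cf. Eq. (<ref>)."""
        env = np.exp(-(tau_s / T2s) ** 2)
        osc = (a1 * np.cos(2*np.pi*nu1*tau_s + phi1)
               + a2 * np.cos(2*np.pi*nu2*tau_s + phi2))
        return env * osc + c

    # usage per scan line: popt, pcov = curve_fit(ps_model, tau_s, P_S, p0=...)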
First, we discuss the fitted ν_<,>. The origin of the measured ST_0-oscillations is the Zeeman energy difference between the spin in the shuttled QD and the spin in the static QD, which is filled by three electrons. The difference originates from slightly different electron g-factors Δ g and Overhauser-energies Δ E_hf due to hyperfine contact interaction <cit.>.
This is the same mechanism that leads to ST_0 oscillations in the case of a DQD without any conveyor-mode shuttling. These oscillations, which are effectively at d=0 nm, are discussed in the method section about Singlet-Triplet oscillations.
The dynamics of the nuclear spins is slow compared to a shuttle pulse sequence, but the Overhauser field might vary along the 1DEC. The electron g-factor depends on valley state and QD confinement and might vary for the moving QD along the 1DEC as well <cit.>. Hence, the Zeeman energy difference of the entangled spins and thus the ST_0 oscillation frequency ν_i depends on the position x of the moving QD. As this position is changing during the shuttle process, the frequency ν_i(d) becomes a function of shuttle distance d and it is given by an average over the shuttling distance d:
ν_i(d)= 1/h d∫_0^d dx [Δ g(x) μ_B B + Δ E_hf(x) ],
where h is the Planck constant, and μ_B is the Bohr magneton. We idealize by neglecting the time-dependence of Δ E_hf and Δ g and by assuming a deterministic, thus reproducible, trajectory x(t) of the shuttled QD, when averaging over several shuttling cycles. Due to the integral, we expect that changes in ν_i(d) smooth out for increasing d. Indeed, we observe a shuttle-distance dependence of the ST_0 oscillation with a smoothing trend towards larger d (Fig. <ref>e). Furthermore, we observe that the ν_<,> scale with the external magnetic field, which underlines that the origin of our observed oscillations is spin-dynamics, in agreement with Eq. <ref>. Calculating pairwise the ratios of ν_< and ν_> measured at B=0.6 T and B=0.8 T, we arrive close to the expected ratio of 0.75 (Fig. <ref>f). This demonstrates the linearity in magnetic field strength and indicates that the contribution of Δ E_hf(x) is small compared to the contribution of the electron g-factor difference. Furthermore, it shows the two oscillation components have distinct, but reproducible Δ g(x). For small d, the difference of φ_<,> is small, which increases the fitting error; deviations from the ratio 0.75 can therefore not be fully excluded here.
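The smoothing of ν_i(d) by the path average in Eq. (<ref>) can be illustrated with a toy model (our sketch; the noise amplitude and correlation length are illustrative, not fitted):

    import numpy as np

    rng = np.random.default_rng(1)

    # toy position-dependent frequency ν(x) (MHz), correlated on ~13 nm
    dx = 0.28                                  # nm per sample
    x = np.arange(0.0, 280.0, dx)              # one period λ = 280 nm
    white = rng.normal(size=x.size)
    k = np.exp(-0.5 * (np.arange(-60, 61) * dx / 13.0) ** 2)
    nu_x = 7.0 + 2.0 * np.convolve(white, k / k.sum(), mode="same")

    # Eq. (<ref>): the observable frequency is the running path-average of ν(x)
    nu_d = np.cumsum(nu_x) / np.arange(1, x.size + 1)
    print(np.std(nu_x), np.std(nu_d))          # the averaged ν(d) fluctuates less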
§.§ Spin-dephasing during shuttling
Most important is the evaluation of the ensemble spin dephasing time T_2^* of the EPR-pair as a function of d, since it contains information on the impact of conveyor-mode shuttling on the spin dephasing. We observe that T_2^* increases with larger shuttle distance (Fig. <ref>g). Since qubit shuttling opens up new dephasing mechanisms <cit.>, this result might be surprising at first sight, but is expected due to a motional narrowing enhancement of the shuttled qubit dephasing time <cit.>. We quantify the phenomenon by the fit f_1(d) in Fig. <ref>g using
( 1/T_2^*)^2 = ( 1/T_2,L^*)^2 + ( 1/T_2,R^*)^2 l_c/(d+l_c).
To incorporate the dependence of the Gaussian decay T_2^* of the EPR-pair on shuttle distance d, we use the quadratic addition of inverse T_2^* times for the left (L) and right (R) electron spin and include a factor for motional narrowing for the shuttled qubit, where T_2,L^* is the ensemble spin dephasing time of the electron-spin that remains static in the outermost left QD, and T_2,S^*(d)≡ T_2,R^*√((d+l_c)/l_c) represents the ensemble spin dephasing time of the forward and backward shuttled electron spin (total distance 2d), averaging over a d long spatial range of quasistatic noise of its Zeeman-energy E_z(x(t)) having a correlation length l_c <cit.>. Note that we distinguish between the static ensemble dephasing times T_2,L^* and T_2,R^*, since we expect the confinement strength within the static QD to be less than in the moving QD. Our fit to the ensemble dephasing time of the EPR pair (Fig. <ref>g) results in T_2,L^*=(1110±90) ns, T_2,R^*=(520±20) ns and l_c=(13±3) nm. This in total yields T_2,S^*(280 nm) = (2460±310) ns.
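A short numerical sketch (ours) reproduces the quoted motional-narrowing numbers from the fitted parameters:

    import numpy as np

    T2L, T2R, lc = 1110.0, 520.0, 13.0   # ns, ns, nm (fitted values of f_1)

    def T2_shuttle(d):
        """T2,S*(d) = T2,R* sqrt((d + lc)/lc), d in nm."""
        return T2R * np.sqrt((d + lc) / lc)

    def T2_pair(d):
        """EPR-pair dephasing time from the quadratic addition above."""
        return ((1/T2L)**2 + (1/T2R)**2 * lc/(d + lc)) ** -0.5

    print(T2_shuttle(280.0))              # ≈ 2469 ns, cf. (2460 ± 310) ns
    print(T2_pair(0.0), T2_pair(280.0))   # ≈ 471 ns → ≈ 1012 ns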
This result implies that the shuttled qubit increases its dephasing time by a factor of ≈ 4 when shuttled twice across a distance of nominal 280 nm due to motional narrowing. Note that the data points in Fig. <ref>g tend to be lower than the fit for the largest d, which might be due to dephasing mechanisms induced by the shuttle process such as motion-induced valley excitations <cit.>. At very short shuttle distance d, a deformation of the moving QD might add to the change in spin dephasing time. Assuming a constant shuttle velocity, constant shape of the moving QD and only motional narrowing of E_hf(x), we derive the fit function f_2(d) exhibiting a modified motional narrowing factor (Fig. <ref>g). Remarkably, we arrive at very similar fitting parameters (see supplementary material).
§.§ Long distance shuttling
In order to increase the distance of shuttling, we always shuttle at the maximum velocity v_max and, once the shuttled electron returns to the right QD of the DQD (stage S), we record the ST_0 oscillations by waiting an additional 0 to 1 μs prior to measuring the EPR spin-state. We plot the spin-singlet probability P_S of the EPR-pair as a function of the total time τ_S,DQD of shuttling and waiting (Fig. <ref>h). Due to the limited length of the shuttle zone, we increase the cumulative distance by shuttling in and out for one period λ multiple times. The total number of periods (D) shuttled forward plus backward is indicated on the left as the accumulated shuttle distance. For example for the trace labelled D=2, the voltage pulses applied to S1-S4 are designed to shuttle the electron one period λ=280 nm forward and the same distance back towards the spin-detector. For D=1, the electron is shuttled half a period forward, and the same distance back towards the detector. Strikingly, we still observe ST_0 oscillations for the trace labelled D=12, for which the electron shuttles alternately six times forward and backward by λ, nominally equivalent to an accumulated distance of 3.36 μm. The appearance of ST_0 oscillations shows that the EPR-pair remained entangled after such a long shuttling distance.
§.§ Mapping local ν variations
Coherent shuttling of a spin qubit and EPR separation allows us to collect information about Δ g(x) along the SQS. Instead of shuttling the spin-qubit forward and backward with a τ_S-dependent v_S, we shuttle it by a distance x along the 1DEC at the maximum v_max=2.8 m/s, wait there for a time τ_W to let the ST_0 oscillations evolve and then shuttle back at maximum v_S for PSB detection. We observe (Fig. <ref>) ST_0-oscillations and, similar to Fig. <ref>a and b, their frequency ν(x,B) scales with the B-field as expected (cmp. Fig. <ref>a and b). Compared to Fig. <ref>a and b, ν(x,B) tends to fluctuate faster as a function of x. This is expected, since ν_<,> results from averaging many positions x(t) in the coherent shuttle experiment (Eq. <ref>) in Fig. <ref>, while here ν dominantly depends on the fixed position x. Note that x(t) and thus d is not measured in any case, but deduced from the expected position of the ideal propagating wave potential x=λΔφ/2π, where φ is the phase of the voltages applied to gates S1-S4 relative to the initialisation potential. Hence, we neglect potential disorder and wobbling effects of the propagating wave potential, examples of which are simulated in Ref. <cit.>. Notably, ν(x) starts to become nearly constant at x>210 nm. This could be an indication that the electron stops moving at this point. If we try to shuttle to x>330 nm>λ, the electron predominantly does not return, indicating potential disorder which is sufficiently high to break the QD confinement in the propagating QD.
§.§ Conclusion
This work shows progress on electron shuttling in conveyor-mode, building on earlier demonstrations of charge shuttling <cit.>. We improved the shuttle velocity by four orders of magnitude to a regime at which coherent shuttling becomes feasible <cit.>. When moving into and out of the device once, we demonstrate coherent shuttling by EPR pair separation and recombination across a total distance of nominal 560 nm, and at least 420 nm in case the electron spin halts at x=210 nm. Furthermore, we detect entanglement when moving the electron for an accumulated shuttle distance of nominal 3.36 μm (at least 2.4 μm). Remarkably, the dephasing time of the shuttled qubit T_2,S^* is enhanced by motional narrowing, while the static electron-spin dominates the dephasing of the spin-entangled EPR-pair. Based on the fitted T_2,S^*(280 nm) ≈ 2460 ns (≈ 2130 ns for fitting with f_2(d)), we can estimate a phase-infidelity caused by the shuttle time τ_S at maximum shuttle velocity v_S using the Gaussian decay
1-ℱ = 1 - exp(-(τ_S/T_2,S^*)^2 ) ≈ (2d/(T_2,S^* v_S))^2.
We estimate a shuttling-induced phase-infidelity of 1-ℱ=(0.66 ± 0.17)% for a total shuttle distance of nominal 2d=2λ=560 nm (at least 420 nm). Assuming a constant shuttle velocity, constant shape of the moving QD and only motional narrowing of E_hf(x) (fit equation f_2(d) see supplementary material) yields a matching infidelity of 1-ℱ=(0.88 ± 0.18)% within the error range.
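The 0.66 % figure follows directly from the numbers above; a minimal sketch (ours):

    lam_m = 280e-9       # period λ of the shuttle potential (m)
    v_s   = 2.8          # maximum shuttle velocity (m/s)
    T2s   = 2460e-9      # fitted T2,S*(280 nm) (s)

    tau_s = 2 * lam_m / v_s              # λ forward and back: 200 ns
    print(tau_s, (tau_s / T2s) ** 2)     # 2.0e-07 s, ≈ 0.0066 (0.66 %)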
Next, we have to increase the shuttle distance by improving confinement of the moving QD. We already achieved a charge shuttle fidelity of (99.7 ± 0.3) % for a total shuttle distance of 20 μm in a 10 μm long Si/SiGe QuBus <cit.>. The spin dephasing time can be enhanced by isotopically purified ^28Si. Adding spin-manipulation zones will grant more flexibility in performing coherent shuttling experiments to explore the dephasing channels and the role of the valley states. In the long run, we target the integration of our spin shuttle device into a scalable semiconductor qubit architecture <cit.>.
§ METHODS
§.§ Charge Shuttling
A prerequisite for spin-coherent shuttling is that the electron stays confined in the moving QD, which we call charge shuttling. Fig. <ref> depicts the pulse procedure for benchmarking the charge shuttling in the same device that we used for spin-coherent shuttling. Firstly, we load four electrons into the first QD by lowering B1 (Fig. <ref>a inset). Due to cross-talk, we need to compensate on P1 and B2. Thereafter, the barrier is raised again to isolate the system. Loading takes approximately 2 ms, as the voltage on B1 is 10 kHz lowpass-filtered. Subsequently, one electron is moved into the second QD (Fig. <ref>a, red triangle → S) and the barrier B2 is closed by pulsing it down by 120 mV (S → T). After stage T, the shuttle pulse (Fig. <ref>b lower part) is applied to the gate-sets S1-S4
V_Si(τ_S)=U_i·sin(2 π f τ_S+φ_i)+C_i.
The amplitudes (U_1, U_3) applied to the gate-sets S1 and S3 on the second layer (blue in Fig. <ref>a) are U_lower=150 mV, whereas the amplitudes (U_2, U_4) applied to the gate-sets S2 and S4 on the 3rd metal layer are slightly higher (U_upper=1.28· U_lower=192 mV) to compensate for the difference of capacitive coupling of these layers to the quantum well. This compensation extends to the DC-part of the shuttle gate voltages. The offsets C_1=C_3= 0.7 V are chosen to form a smooth DQD, whilst C_2= C_4= 0.896 V are chosen to form a smooth DC potential. The phases are chosen in order to build a travelling wave potential across the one-dimensional electron channel (φ_1=-π/2, φ_2=0, φ_3=π/2, φ_4=π). This travelling wave potential is illustrated in Fig. <ref>b at the top part. The barrier B2 is pinched off to limit the cross-talk influence from the shuttle pulse on the static electrons. The electron is moved adiabatically by one period of the travelling wave potential (280 nm) to the right. After one period, the absolute gate voltages are exactly identical to the prior state, when the charge scan in Fig. <ref>a has been recorded. Hence, we can check whether the electron is shuttled away by going back to the electrostatic configuration corresponding to the red triangle and measuring the SET current. By time reversing the voltage pulses on S1-S4, we shuttle the electron back and perform a measurement in a similar manner. Then, we calculate a histogram as shown in the inset of Fig. <ref>, fit two Gaussian distributions and take the fits' crossing point to define the range of I_SET assigned to three and four electron detection events. Only if the first measurement yields three electrons (i.e. the electron was shuttled away from the detector) and the second measurement four electrons (i.e. the electron was shuttled back to the detector), is a shuttling event counted as successful. The same approach for counting successful charge shuttle events has been used in Ref. <cit.>.
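A compact way to generate the four drive signals of Eq. (<ref>) with the quoted amplitudes, offsets and phases is sketched below in Python (ours; the drive frequency is an example value):

    import numpy as np

    f = 1.0e6                         # drive frequency (Hz); up to 10 MHz in Fig. <ref>e
    t = np.linspace(0.0, 1.0/f, 200)  # one shuttle period in time

    U   = {"S1": 0.150, "S3": 0.150, "S2": 0.192, "S4": 0.192}   # amplitudes (V)
    phi = {"S1": -np.pi/2, "S2": 0.0, "S3": np.pi/2, "S4": np.pi}
    C   = {"S1": 0.700, "S3": 0.700, "S2": 0.896, "S4": 0.896}   # DC offsets (V)

    # V_Si(t) = U_i sin(2π f t + φ_i) + C_i; the 90° phase step per gate set
    # produces a potential wave travelling one period λ = 280 nm per cycle 1/f.
    V = {g: U[g] * np.sin(2*np.pi*f*t + phi[g]) + C[g] for g in U}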
In Fig. <ref>d, we plot the charge shuttling fidelity ℱ_C as a function of the lower layer amplitude U_lower; the upper layer amplitude is U_upper=1.28· U_lower to compensate for the larger distance to the 1DEC. We find a steep rise of ℱ_C at U_lower>110. The histogram of I_SET for all U_lower>125 mV (inset of Fig. <ref>d) shows well separated Gaussians assigned to either four or three electron filling of the QD underneath gate P1. Due to nonlinear effects on the SET, the peak for four electrons is narrower than the peak for three electrons. Fig. <ref>e shows charge shuttling fidelities as a function of shuttle frequency f as defined in Eq. <ref>, which corresponds to a shuttle velocity v_S=fλ. From the green points we read off high fidelities up to 10 (2.8/). By averaging ℱ_C(U_lower>125) in Fig. <ref>d, we calculate the mean charge shuttling fidelity for shuttling a nominal total distance of 2λ =560 nm (λ forwards and backwards) to be ℱ_C=(99.72 ± 0.01) %. This value is slightly better than the charge shuttling fidelity of 99.42 % obtained in Ref. <cit.>. Moreover, we found charge shuttling across 2λ and back at f=2. We tracked the charge by measuring the charge state after every shuttle-pulse, which moves the electron by one period λ (cf. shuttle tomography method in Ref. <cit.>), and calculated a transfer fidelity of 98.7 % at the same voltage amplitudes.
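The assignment of three- versus four-electron readouts from the two fitted Gaussians can be sketched as follows; the peak parameters and the sign convention (four-electron peak at higher I_SET) are placeholders here, since on real data both come from fitting the measured histogram:

```python
import numpy as np
from scipy.optimize import brentq

def gauss(x, A, mu, sig):
    return A * np.exp(-0.5*((x - mu)/sig)**2)

# Placeholder fit results for the two I_SET peaks (arbitrary units):
p3 = (1.0, 0.30, 0.05)   # three-electron peak: amplitude, mean, width
p4 = (1.4, 0.55, 0.03)   # four-electron peak (narrower, as noted in the text)

# Threshold = crossing point of the two fitted Gaussians between the means.
thr = brentq(lambda x: gauss(x, *p3) - gauss(x, *p4), p3[1], p4[1])

def shuttle_success(I_first, I_second):
    """Success: first readout classified as 3 electrons, second as 4."""
    return (I_first < thr) and (I_second > thr)

print("threshold =", round(thr, 3))
```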
§.§ Singlet-Triplet Oscillations
To demonstrate that the single-electron spin-qubit coherently shuttles, we use the preservation of the entanglement with the static electron spin, which we detect by the coherent oscillations between spin-singlet S and unpolarised spin-triplet T_0 state of this EPR pair
H = ([ -J(ε) , Δ g μ_B B + Δ E_hf/2 ; Δ g μ_B B + Δ E_hf/2 , 0 ])
in the (|S⟩,|T_0⟩)-basis.
Here, J(ε) represents the exchange interaction as a function of the detuning ε(=V_P1) between the left and right QD. Δ g is the g-factor difference between the two QDs <cit.>. Δ E_hf is the Overhauser-energy-difference between the two dots.
After loading four electrons as shown in Fig. <ref>a, we initialise the system to S(4,0) by waiting at stage I (Fig. <ref>a) for 2. Next, we step V_P1 by 20 which reduces J(ε) and turns on Δ g μ_B B by letting one electron adiabatically tunnel into the right QD.
As the two electrons are laterally separated, they are subject to different electron g-factors resulting in different Zeeman-energies as a result of the global B-field of 0.8. At stage S, we wait for τ_DQD time and pulse to the PSB in stage P where spin information is converted to charge information. The conversion takes approximately 500 after which a raise of the inter-dot barrier freezes the charge state for readout (stage F). Iterating over this pulse scheme, we record the singlet return probability P_S (Fig. <ref>b), which is fitted by
P_S(τ_DQD)=a · e^-(τ_DQD/T_2^*)^2cos(2πντ_DQD+φ) + c .
We obtain a spin dephasing time of the entangled spin state of T_2^*=(565±10) and a frequency of ν=(7.29±0.01). Fig. <ref>c summarises the pulse in a schematic way. For coherent shuttling experiments, instead of waiting at the separation stage, the sequence presented in Fig. <ref>d is inserted between the separation and PSB-freeze-RO pulse segments shown in Fig. <ref>c.
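A minimal sketch of this fit with scipy, using a synthetic trace in place of the measured P_S(τ_DQD); all numbers below are placeholders in arbitrary units:

```python
import numpy as np
from scipy.optimize import curve_fit

# Gaussian-damped cosine for the singlet return probability (equation above).
def p_singlet(tau, a, T2, nu, phi, c):
    return a * np.exp(-(tau/T2)**2) * np.cos(2*np.pi*nu*tau + phi) + c

# Synthetic example trace standing in for the measured P_S(tau_DQD).
rng = np.random.default_rng(0)
tau = np.linspace(0, 2.0, 400)                 # separation time, arbitrary units
data = p_singlet(tau, 0.4, 0.6, 7.3, 0.0, 0.5) + 0.02*rng.standard_normal(tau.size)

popt, pcov = curve_fit(p_singlet, tau, data, p0=[0.4, 0.5, 7.0, 0.0, 0.5])
perr = np.sqrt(np.diag(pcov))
print("T2* = %.3f +/- %.3f, nu = %.3f +/- %.3f" % (popt[1], perr[1], popt[2], perr[2]))
```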
§.§ Experimental Setup
All experiments are conducted in a dilution refrigerator with a base temperature of 40. All DC
lines to the device are filtered by pi-filters (f_c=5MHz) at room temperature and by 2nd order RC filters with f_c=10 kHz at base temperature. The clavier gates, B2, P1, P8 and B8 are connected to resistive bias-tees with a cutoff frequency of 5Hz.
Signals are applied to the AC and DC input terminal of the bias-tee, in order to allow inclusion of millisecond long pulse segments. A serial resistor is added to the low-frequency terminal, the value of which is tuned by flattening the sensor signal response.
The SETs are DC-biased by 100 μV and read out by a transimpedance amplifier and an analog-to-digital converter.
§ DATA AVAILABILITY
The data is available from the authors upon reasonable request.
§ ACKNOWLEDGEMENTS
This work has been funded by the German Research Foundation (DFG) under Germany's Excellence Strategy - Cluster of Excellence "Matter and Light for Quantum Computing" (ML4Q) EXC 2004/1 - 390534769 and by the Federal Ministry of Education and Research under Contract No. FKZ: 13N14778. Project Si-QuBus received funding from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. The device fabrication has been done at HNF - Helmholtz Nano Facility, Research Center Juelich GmbH <cit.>.
§ AUTHOR CONTRIBUTIONS
T.S., M.V. and L.V. conducted the experiments, T.S., M.V., T.O., L.V. and L.R.S. analysed the data. J.T. and R.X. fabricated the device. S.T. wrote the e-beam layers. Ł.C. derived motional narrowing effect of nuclear spins. L.R.S. designed and supervised the experiment. L.R.S and H.B. provided guidance to all authors. T.S., M.V., L.V., T.O. and L.R.S. wrote the manuscript, which was commented by all other authors.
§ COMPETING INTERESTS
L.R.S. and H.B. are co-inventors of patent applications that cover conveyor-mode shuttling and its applications. L.R.S. and H.B. are founders and shareholders of ARQUE Systems GmbH. The other authors declare no competing interest.
|
http://arxiv.org/abs/2307.04854v2 | 20230710185323 | Unconventional quantum oscillations and evidence of non-trivial electronic states in quasi-two-dimensional electron system at complex oxide interfaces | [
"Km Rubi",
"Manish Duman",
"Shengwei Zeng",
"Andrew Ammerlaan",
"Femke Bangma",
"Mun K. Chan",
"Michel Goiran",
"Ariando Ariando",
"Suvankar Chakraverty",
"Walter Escoffier",
"Uli Zeitler",
"Neil Harrison"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cond-mat.mes-hall"
] |
Corresponding author: [email protected]
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
Quantum Materials and Devices Unit, Institute of Nano Science and Technology, Mohali, Punjab 140306, India
Present address: Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), 2 Fusionopolis Way, Innovis #08-03, Singapore 138634, Republic of Singapore
Department of Physics, National University of Singapore, 117551 Singapore
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
Laboratoire National des Champs Magnétiques Intenses (LNCMI-EMFL), Université de Toulouse, CNRS, INSA, UPS, 143 Avenue de Rangueil, 31400 Toulouse, France
Department of Physics, National University of Singapore, 117551 Singapore
Quantum Materials and Devices Unit, Institute of Nano Science and Technology, Mohali, Punjab 140306, India
Laboratoire National des Champs Magnétiques Intenses (LNCMI-EMFL), Université de Toulouse, CNRS, INSA, UPS, 143 Avenue de Rangueil, 31400 Toulouse, France
High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, 6525 ED Nijmegen, The Netherlands
National High Magnetic Field Laboratory, Los Alamos National Laboratory, Los Alamos, New Mexico 87544 USA
The simultaneous occurrence of electric-field controlled superconductivity and spin-orbit interaction makes two-dimensional electron systems (2DES) constructed from perovskite transition metal oxides promising candidates for the next generation of spintronics and quantum computing. It is, however, essential to understand the electronic bands thoroughly and verify the predicted electronic states experimentally in these 2DES to advance technological applications. Here, we present novel insights into the electronic states of the 2DES at oxide interfaces through comprehensive investigations of Shubnikov-de Haas oscillations in two different systems: EuO/KTaO_3 (EuO/KTO) and LaAlO_3/SrTiO_3 (LAO/STO). To accurately resolve these oscillations, we conducted transport measurements in high magnetic fields up to 60 T and low temperatures down to 100 mK. For 2D confined electrons at both interfaces, we observed a progressive increase of oscillations frequency and cyclotron mass with the magnetic field. We interpret these intriguing findings by considering the existence of non-trivial electronic bands, for which the E-k dispersion incorporates both linear and parabolic dispersion relations. In addition to providing experimental evidence for topological-like electronic states in KTO-2DES and STO-2DES, the unconventional oscillations presented in this study establish a new paradigm for quantum oscillations in 2DES based on perovskite transition metal oxides, where the oscillations frequency exhibits quadratic dependence on the magnetic field.
Unconventional quantum oscillations and evidence of non-trivial electronic states in quasi-two-dimensional electron system at complex oxide interfaces
Neil Harrison
August 12, 2023
======================================================================================================================================================
§ INTRODUCTION
Two-dimensional electron systems (2DES) have been observed at the surface and interface of many perovskite transition metal oxides, so-called complex oxides. Particularly, widely studied 2DES based on SrTiO_3 (STO) and KTaO_3 (KTO) exhibit various intriguing phenomena, including a large magnetoresistance, Rashba spin-orbit interaction <cit.>, 2D superconductivity <cit.>, and magnetism <cit.>, which do not exist in their bulk counterparts.
The coexistence of these phenomena gives these systems a multi-functional character, with potential applications in spintronics <cit.> as well as in the field of topological quantum computing <cit.>. However, a comprehensive understanding of the electronic structure that gives rise to these interesting phenomena remains elusive.
STO-2DES and KTO-2DES exhibit several similarities in terms of their calculated band structures. For example, the electrons occupy crystal-field split t_2g orbital of d bands (3d for STO and 5d for KTO), and the combination of 2D confinement and spin-orbit interactions gives rise to multiple bands with mixed orbital characters of d_xy, d_xz, and d_yz due to the avoided crossing between light (d_xy) and heavy (d_xz/d_yz) subbands. Heeringen et al. <cit.> predicted strongly anisotropic nonparabolic subbands for 2DES at the LAO/STO interface.
Furthermore, topological states with linear dispersion are predicted for STO-2DES in the vicinity of avoided crossing points in the Γ-M direction of the first Brillouin zone <cit.>. While experiments based on the Shubnikov-de Haas (SdH) effect and angle-resolved photoemission spectroscopy (ARPES) have verified the existence of several subbands of different effective masses for both STO <cit.> and KTO-2DES <cit.>, signatures of nonparabolic subbands or topological states in these systems have not yet been observed experimentally. Interestingly, the STO-2DES exhibits peculiar SdH oscillations that are not periodic in the inverse magnetic field <cit.>. The aperiodicity of the oscillations perceived in high magnetic fields has been tentatively attributed to different mechanisms (e.g., Rashba spin-orbit interaction <cit.>, Zeeman splitting <cit.>, magnetic depopulation of magnetoelectric subbands <cit.>, and a magnetic-field-induced change in carrier density <cit.>) in different investigations, and no consensus on its physical origin has yet been reached. Furthermore, despite a comparable electronic band structure to the STO-2DES, the existence of aperiodic SdH oscillations in KTO-2DES remains unclear from previous studies <cit.>.
In our quest to unravel the origin of aperiodic quantum oscillations and uncover topological states in STO and KTO-related 2DES, we conducted a thorough experimental investigation of the SdH oscillations at the interfaces of EuO/KTO and LaAlO_3(LAO)/STO. In order to capture the oscillations with utmost precision, we measured electrical transport in high magnetic fields (utilizing both a 60 T pulsed field and a 35 T dc field) and ultra-low temperatures (as low as 0.1 K). By examining the tilt-angle dependence of the quantum oscillations, we reveal the presence of itinerant electrons that are confined in the 2D interface region, coexisting with the carriers that disperse deeper into the STO and KTO. Interestingly, we observed that both interfaces exhibit a progressive increase in the cyclotron mass, estimated from the SdH oscillations, as well as an apparent increase of the oscillations frequency with increasing magnetic field. Notably, we found that the increase in cyclotron mass follows an almost linear trend, while the change in frequencies exhibits a quadratic relationship with the magnetic field. We explain this behavior through the existence of non-trivial electronic subbands, where the energy dispersion in k-space combines both linear and quadratic terms. These findings provide valuable insights into the unique electronic properties and subband structure at the interfaces of these oxides.
§ METHODS
As depicted in Fig.1(a) and (b), the EuO/KTO sample consists of a 10 nm thin film of EuO on KTO (001) substrate, while LAO/STO is made of ∼ 3.2 nm (8 u.c.) thin film of LAO on STO (001) substrate. Both KTO and STO substrates are 0.5 mm thick. We used a pulsed laser deposition technique to grow EuO and LAO thin films. For the LAO/STO sample, a mask of amorphous AlN was deposited on STO before LAO growth to obtain a Hall-bar patterned sample. One can find the growth details for EuO/KTO in Ref. <cit.> and for LAO/STO in Ref. <cit.>.
We carried out longitudinal and Hall resistance measurements simultaneously on the EuO/KTO sample in high pulsed magnetic fields (B_max = 60 T and pulse time ∼ 80 ms) and down to the temperature of 0.5 K in the ^3He system. We measured LAO/STO in a high continuous magnetic field (B_max = 35 T) and at low temperatures down to 0.1 K in a dilution fridge. To achieve a high signal-to-noise ratio for the measurements in a pulsed field, we used an excitation current of amplitude 30 μA and frequency up to 256 kHz. We applied a quasi DC excitation of 0.1 μA for measurements on LAO/STO in continuous magnetic fields. The measurements at different tilt angles were performed using in-situ sample rotators devised explicitly for the dilution fridge and the ^3He fridge used in the extreme environment of high magnetic fields. To probe the interface using transport measurements, we made electrical contacts for both samples using a wire bonder. In particular, we measured an unpatterned EuO/KTO sample at the lowest temperature in both up and down field directions (for details see Fig. A1(a) in Appendix A1), and obtained the antisymmetrized R_yx and symmetrized R_xx using the formulas R_yx = (R_yx(B↑)-R_yx(B↓))/2 and R_xx = (R_xx(B↑)+R_xx(B↓))/2.
§ EXPERIMENTAL RESULTS
§.§ Electrical properties and quantum oscillations
Fig.1 (c) and (d) show the Hall resistance R_yx(B) for EuO/KTO and LAO/STO interfaces, respectively, measured at the lowest temperature possible for each case and in magnetic fields (B) oriented perpendicular to the interface. Except in the low field regime of 0 - 4 T, the R_yx(B) is linear for both interfaces. From the slope of the linear fit to the R_yx(B) for B > 5 T, we estimate the carrier density of 2.2 × 10^14 cm^-2 for EuO/KTO and 3.1 × 10^13 cm^-2 for LAO/STO.
However, despite having a lower effective mass <cit.>, the carriers at the EuO/KTO interface exhibit a lower Hall mobility ∼ 1500 cm^2V^-1s^-1 than that of LAO/STO ∼ 2350 cm^2V^-1s^-1. We attribute the lower carrier mobility in EuO/KTO to the spin-disorder scattering induced by the magnetic proximity effect of EuO on the conducting TaO_2 planes at the interface <cit.>. The left y-axes of Fig.1(e) and (f) display the magnetic field dependence of longitudinal resistance R_xx(B) for LAO/STO and EuO/KTO, respectively. For both interfaces, the quantum oscillations originating from the quantization of closed cyclotron orbits are superimposed on a positive magnetoresistance. We show the oscillating resistance Δ R_xx after subtracting a smooth background (dash lines) on the right y-axes of each panel. For both interfaces, the non-monotonic enhancement of the oscillations amplitude with increasing magnetic field indicates the presence of more than one frequency, as verified by multiple peaks in the Fast Fourier Transform (FFT), which will be discussed in detail later. A two-order of magnitude smaller amplitude of the quantum oscillations in EuO/KTO than LAO/STO is consistent with a lower mobility of carriers at the EuO/KTO interface.
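The density and mobility quoted above follow from the standard single-band Hall analysis; a minimal sketch, where the array names are placeholders for the measured traces and the zero-field sheet resistance is assumed to be available:

```python
import numpy as np
from scipy.constants import e

def hall_density_mobility(B, Ryx, Rxx_sheet, Bmin=5.0):
    """Sheet density and mobility from a linear Hall fit above Bmin tesla.

    Ryx: antisymmetrized Hall resistance (ohm); Rxx_sheet: zero-field
    sheet resistance (ohm per square). Both are placeholder inputs."""
    mask = B > Bmin
    slope = np.polyfit(B[mask], Ryx[mask], 1)[0]   # dR_yx/dB in ohm/T
    n2d = 1.0 / (e * abs(slope))                   # carriers per m^2
    mu = 1.0 / (e * n2d * Rxx_sheet)               # mobility in m^2/(V s)
    return n2d, mu
```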
§.§ Tilt-angle dependence of quantum oscillations
To examine the dimensionality of the electron systems at the EuO/KTO and LAO/STO interfaces, we measured both samples at different tilt angles ranging from 0^∘ to 90^∘. The tilt angle θ, as illustrated in Fig. 2(a), is defined as the angle between the magnetic field B and the normal to the interface. For all field orientations, B is perpendicular to the current. First, it is worth mentioning that both interfaces show a large negative magnetoresistance (MR) for the in-plane field orientation, θ = 90^∘, as shown in the main panels of Fig.2(b) and 2(c). The magnitude of the negative MR is larger for LAO/STO (see Appendix A2), even though this system does not contain any magnetic material that could induce a magnetic proximity effect on the interfacial conducting sheets, as reported for EuO/KTO <cit.>. The negative MR in complex oxide interfaces with the application of an in-plane magnetic field can be attributed to the combined effect of spin-orbit coupling and long-range impurity scattering <cit.>. Additionally, the positive MR for LAO/STO in the high magnetic fields (inset of Fig. 2(c)) can be explained by the domination of conventional orbital MR in this regime, as the higher carrier mobility in LAO/STO leads to the completion of more cyclotron orbits compared to EuO/KTO.
After subtracting a smooth background from R_xx(B) measured at different tilt angles, we show Δ R_xx as a function of the total magnetic field in Fig.2(d) and (e) for EuO/KTO and LAO/STO, respectively.
Both systems show a complex shift in oscillations' minima and maxima positions at least up to θ = 45^∘. We, however, do not perceive a noticeable change in oscillations for θ > 65^∘. For both interfaces, the fixed quantum oscillations pattern in the regime of θ = 75^∘ - 90^∘, as verified by FFT analysis in Appendix A3, provides evidence for the existence of three-dimensional conduction channels.
To identify the two-dimensional (2D) nature of the electron systems, we plot Δ R_xx of EuO/KTO and LAO/STO as a function of the perpendicular component of magnetic field, Bcos(θ), in Fig. 2(f) and (g), respectively. For EuO/KTO, the low-field oscillations follow a cos(θ) scaling for θ < 30^∘, indicating the 2D confinement of conduction electrons at the interface. However, on comparing Fig. 2(d) and (e), we find the high field oscillations (> 25 T) to follow a scaling of neither B_total nor Bcos(θ), indicating the superposition of oscillations originating from 2D and 3D Fermi surfaces. In contrast, the oscillations in LAO/STO exhibit a Bcos(θ) scaling (depicted by vertical dashed lines) up to the high fields (35 T), except a few minima that might be affected by the crossover of Landau levels of multiple electronic subbands. Overall, both samples reveal a 2D confinement of electrons at the interface, along with a fraction of electrons dispersed deep into KTO and STO.
§.§ Magnetic field dependence of cyclotron mass
Since the temperature dependence of quantum oscillations amplitude provides a means for determining the effective mass, we measure both systems at different temperatures. In particular, we measure EuO/KTO at various selected temperatures for two different field orientations θ = 0^∘ and 90^∘ and show the oscillating resistance in Fig. 3(a) and (b). It is to be noted that to improve the signal-to-noise ratio in pulsed magnetic fields, the measurements on EuO/KTO at different temperatures were performed using a higher excitation frequency (256 kHz). The higher-frequency excitation did not modify the frequency and amplitude of quantum oscillations, as compared in Fig.A1(b) of Appendix A1. As expected, the oscillations amplitude progressively decreases with increasing temperature for θ=0^∘. We, however, noticed a nonmonotonic temperature dependence of the oscillations amplitude for θ = 90^∘, most likely due to imperfect subtraction of the smooth background from the raw data. Overall, the oscillations amplitude and frequency at θ = 0^∘ are larger than those at θ = 90^∘, and therefore, we assume that the oscillations from the carriers confined at the interface dominate at θ = 0^∘.
We determine the cyclotron mass m_c by fitting the temperature dependence of oscillations amplitude to the temperature damping factor in the Lifshitz-Kosevich (L-K) equation <cit.> given below
R(T) = R_0 [2π^2k_Bm_cT/(ħ eB)]/sinh(2π^2k_Bm_cT/(ħ eB))
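A sketch of the mass extraction from Eq. (1), fitting synthetic amplitudes at a fixed field; the parameter values below are illustrative, not the measured ones:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import k as kB, hbar, e, m_e

def lk_damping(T, R0, mc_ratio, B=14.0):
    """Temperature factor of Eq. (1); mc_ratio = m_c/m_e, B in tesla."""
    x = 2*np.pi**2 * kB * (mc_ratio*m_e) * T / (hbar * e * B)
    return R0 * x / np.sinh(x)

# Synthetic amplitudes at B = 14 T for m_c = 0.56 m_e (illustrative value).
T = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
A = lk_damping(T, 1.0, 0.56)

# Lambda keeps B fixed so curve_fit only varies R0 and the mass ratio.
popt, _ = curve_fit(lambda T, R0, mc: lk_damping(T, R0, mc), T, A, p0=[1.0, 0.4])
print("fitted m_c = %.2f m_e" % popt[1])
```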
For θ = 90^∘, we fit the maxima-to-minima difference to minimize the error in m_c induced from the imperfect background subtraction. The m_c values normalized with free electron mass m_e are displayed in Fig. 3(e). At θ = 0^∘, m_c = 0.56 ± 0.04 m_e in moderate fields (10-14 T) is comparable to the effective mass for heavy subbands (0.50 m_e) predicted theoretically <cit.> and confirmed with ARPES experiments <cit.> and SdH oscillations measurements <cit.> on KTO-2DES.
Most interestingly, above 14 T, m_c for θ = 0^∘ increases almost linearly, as depicted by a dashed line, with increasing magnetic field strength. We, however, did not observe such a progressive field-dependent enhancement in m_c values at θ = 90^∘. The average m_c at θ = 90^∘ is 0.61± 0.07 m_e, which corresponds well with the effective mass of the heavy band of bulk KTO <cit.>.
Next, we analyze the oscillations for LAO/STO measured at different temperatures in the range of 0.1 - 3.0 K (Fig. 4(a)). To ensure that the analysis is not primarily influenced by the superimposition of oscillations of multiple frequencies, we estimate m_c for this system by fitting not only the temperature dependence of the oscillations amplitude (Fig. 4(b)) but also the FFT amplitude (Appendix A4) to Eq. (1). We attribute the large discrepancy between the low-field m_c values determined from the oscillations amplitude and the FFT amplitude to the superimposition effect and poor resolution of oscillations in low fields (B < 14 T). Very similar to EuO/KTO, the lowest value of m_c (1.6 ± 0.1 m_e) corresponds to the heavy subband of STO-2DES <cit.>. Interestingly, m_c for LAO/STO also increases with the magnetic field, bearing a resemblance with the data reported by Y. Xie et al.<cit.> in the moderate field range (4 - 15 T).
In conclusion, both interfaces exhibit a progressive enhancement of the cyclotron mass for θ = 0^∘ as the magnetic field intensifies. Since both the cyclotron mass m_c (= (ħ^2/2π) ∂ A_k/∂ E) and the frequency of the quantum oscillations F (= (ħ/2π e) A_k) are related to the k-space area enclosed by the cyclotron orbit A_k, we next examine any eventual variation of the oscillations periodicity with magnetic field.
§.§ Aperiodicity in quantum oscillations
From the semiclassical theory of Landau quantization developed by Onsager and Lifshitz <cit.>, the oscillations in magnetoresistivity are periodic in 1/B. To examine the periodicity of oscillations in the 2DES at EuO/KTO and LAO/STO interfaces, we plot Δ R_xx at θ = 0^∘ as a function of the inverse magnetic field in Fig. 5(a) and (b), respectively. As a quality check of the oscillations in LAO/STO in a low-field regime (< 14 T), we also measure the same sample in a superconducting magnet at a temperature of 0.3 K and display this data in the inset as well as in the main panel of Fig. 5(b). While for B > 6 T the minima and maxima of the oscillations from these measurements perfectly overlap with the high-field measurement data, we acquired better-resolved oscillations in low fields (B < 6 T). As one can see, for both interfaces, the oscillations period decreases as the magnetic field increases. We perform the FFT analysis for both systems in a few selected field ranges to evaluate the magnetic field dependence of the oscillations frequencies. The FFT spectra of EuO/KTO (Fig. 5(c)) reveal one or two peaks in each field window, and the dominant peak position moves to higher frequency with decreasing average inverse field of the selected windows. The shoulder peak (on the left of the dominant peak) noticed in two field windows (8.3 - 14.9 T and 27.8 - 59.3 T) is most likely from the 3D oscillations, as the FFT of oscillations at θ = 90^∘ produces peaks at the same frequencies (see Appendix A3). Unlike EuO/KTO, the LAO/STO interface exhibits at least two prominent peaks for each field window, and these peaks shift to higher frequency with increasing field. In Fig. 5(e) and 5(f), we plot the estimated frequencies as a function of the effective field B_eff for EuO/KTO and LAO/STO, respectively. B_eff, defined as 1/B_eff=(1/B_min+1/B_max)/2, depends on the size of the field range used in the FFT analysis. It is worth mentioning that the FFT analysis of the data at θ = 0^∘ in the full field range gives 7-8 peaks for both interfaces (Appendix A5, Fig A5) because of the progressive increase in oscillations frequency with field. Contrary to the observation at θ = 0^∘, the FFT analysis of the oscillations at θ = 90^∘ reveals only two frequencies (Appendix A3, Fig A3).
In conclusion, the FFT analyses for the data at θ = 0^∘ reveal that the 2DESs at the studied interfaces exhibit a continuous increase in quantum oscillations frequency as the magnetic field strength rises, in line with the previous observation on LAO/STO interface <cit.>.
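The windowed FFT analysis described above amounts to interpolating ΔR_xx onto a uniform 1/B grid, tapering, and locating the dominant peak; a minimal sketch, where the input arrays are placeholders for the measured trace:

```python
import numpy as np

def window_fft(B, dRxx, Bmin, Bmax, npts=2048):
    """Dominant SdH frequency (tesla) in the field window [Bmin, Bmax].

    B, dRxx: measured field (T) and background-subtracted resistance
    (placeholder arrays for the real trace)."""
    mask = (B >= Bmin) & (B <= Bmax)
    invB, y = 1.0 / B[mask], dRxx[mask]
    order = np.argsort(invB)                      # interpolation needs ascending x
    grid = np.linspace(invB.min(), invB.max(), npts)
    sig = np.interp(grid, invB[order], y[order])
    sig = (sig - sig.mean()) * np.hanning(npts)   # remove offset, taper the edges
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(npts, d=grid[1] - grid[0])  # conjugate to 1/B -> tesla
    F_peak = freqs[1:][np.argmax(spec[1:])]             # skip the DC bin
    B_eff = 2.0 / (1.0 / Bmin + 1.0 / Bmax)
    return F_peak, B_eff
```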
§ INTERPRETATION OF UNCONVENTIONAL FINDINGS FROM QUANTUM OSCILLATIONS
The shared unconventional findings from the analysis of the quantum oscillations in 2DES at the EuO/KTO and LAO/STO interfaces are as follows: (1) The cyclotron mass estimated at low field values is comparable to the effective mass for heavy subbands. (2) Both the cyclotron mass and the frequency of the oscillations increase with the magnetic field.
The quantum oscillations resolved only from the heavy subbands can be attributed to the low carrier mobility in the light subbands. Despite a lighter effective mass, electrons in the light subbands, mainly composed of d_xy orbitals, exhibit reduced mobility due to their existence in the interface-adjacent planes (TiO_2 planes for LAO/STO and TaO_2 planes for EuO/KTO), which typically experience significant disorder (e.g., intermixed ions and dislocation) induced during the growth of the top oxide layers.
Of particular interest are the mass enhancement and the large cyclotron mass observed at high magnetic fields. In the case of EuO/KTO, the cyclotron mass reaches approximately 1.8 m_e, while in LAO/STO, it reaches around 3.0 m_e. These values cannot be explained solely based on the predicted mass of electronic subbands <cit.> or magnetic breakdown <cit.>. While the magnetic-field-induced change in density or chemical potential can reasonably explain the increase in oscillations frequency, the mass enhancement contradicts this scenario if the electronic bands follow a parabolic dispersion relation, for which ∂ A_k/∂ E is constant. Therefore, the magnetic-field-induced simultaneous change in frequency and cyclotron mass (i.e. change in A_k and ∂ A_k/∂ E) implies a correction to the parabolic dispersion of the electronic bands. Since for STO-2DES, a linear E-k dispersion is predicted at the avoided crossings of the light and heavy subbands along Γ M direction<cit.>, we consider a combination of linear and parabolic dispersion relation to interpret the B dependence of A_k and ∂ A_k/∂ E.
Combining the parabolic and linear dispersion terms, the Hamiltonian for a 2DES in a magnetic field perpendicular to its plane will be <cit.>
H = Π^2/(2m) + v_F(Π_xσ_y + Π_yσ_x) - (1/2)gμ_B Bσ_z
where Π_i = ħ k_i+ eA_i, m is the density of states (DOS) mass, v_F is the Fermi velocity, σ_i are the Pauli matrices, g is the Landé g-factor, and μ_B is the Bohr magneton.
The associated Landau levels for the Hamiltonian in Eq. (2) will be <cit.>
E_N = ħω_S N ± √((ħω_D)^2 N + (ħω_S/2 - gμ_B B/2)^2)
where ω_S = eB/m, ω_D = √(2ev_F^2B/ħ), and N is the Landau level index.
Taking E_N = E_F and recasting Eq. (3) as a quadratic equation in N, we get
(ħω_S)^2N^2 - [2ħω_S E_F + (ħω_D)^2]N + E_F^2 - (1/4)(ħω_S - gμ_B B)^2 = 0
and by solving Eq. (4) for N, we have
N = [m^2v_F^2/(eħ) + mE_F/(eħ)]/B - √((m^2v_F^2/(eħ))^2 + 2mE_F(mv_F/(eħ))^2 + (1/4)(1 - mgμ_B/(eħ))^2B^2)/B .
Considering that the first two terms in the square root of Eq. (5) are larger than the last one owing to the heavy DOS mass in EuO/KTO and LAO/STO, we perform Taylor's expansion of the square root, and get an approximate expression for the Landau level index N:
N ≈ F_0/B + C × B + ⋯
where
F_0 = (m^2v_F^2/(eħ))·(1 + E_F/(mv_F^2) - √(1 + 2E_F/(mv_F^2)))
and
C = -(eħ/(8m^2v_F^2))·(1 - mgμ_B/(eħ))^2/√(1 + 2E_F/(mv_F^2))
To be noted, Eq. (6) is the well-known Onsager’s relation <cit.> with an additional term C × B that brings a deviation of the oscillations periodicity from 1/B and leads to a non-linear Landau plot, i. e., a plot of Landau level index as a function of the inverse magnetic field. Constructing the Landau plot from the oscillations for two or more different frequencies (EuO/KTO data for B > 30 T in Fig. 5(a) and LAO/STO data in full-field range in Fig. 5(b)) is not feasible. We, therefore, display the Landau plot for EuO/KTO with reasonably good fit to Eq. (6) only for B < 30 T in Fig. 6(a) and list fitting parameters in Table 1.
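The quality of the truncation in Eq. (6) is easy to check numerically against the exact index of Eq. (5); a sketch with illustrative, order-of-magnitude parameters rather than the fitted values of Table 1:

```python
import numpy as np
from scipy.constants import hbar, e, m_e, physical_constants

muB = physical_constants['Bohr magneton'][0]

# Illustrative parameters: heavy mass, small Dirac velocity, g = 0 for simplicity.
m, vF, EF, g = 0.56*m_e, 2e4, 1.6e-21, 0.0   # kg, m/s, J (~10 meV), dimensionless

def N_exact(B):
    """Landau index of Eq. (5) at the Fermi level for field B (tesla)."""
    a = m**2 * vF**2 / (e*hbar)
    s = np.sqrt(a**2 + 2*m*EF*(m*vF/(e*hbar))**2
                + 0.25*(1 - m*g*muB/(e*hbar))**2 * B**2)
    return (a + m*EF/(e*hbar) - s) / B

# F_0 and C of Eqs. (7) and (8):
F0 = (m**2*vF**2/(e*hbar)) * (1 + EF/(m*vF**2) - np.sqrt(1 + 2*EF/(m*vF**2)))
C = -(e*hbar/(8*m**2*vF**2)) * (1 - m*g*muB/(e*hbar))**2 / np.sqrt(1 + 2*EF/(m*vF**2))

for B in (10.0, 30.0, 60.0):
    print(B, N_exact(B), F0/B + C*B)   # exact vs truncated Eq. (6)
```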
In order to determine a relationship between the oscillations frequency and magnetic field, we take the first derivative of N with respect to 1/B, i.e.,
F = ∂ N/∂ (1/B) = F_0 - C × B^2
Next, we apply this phenomenological model to the frequency extracted from the FFT analysis and show the best fit of the experimental data to Eq. (9) in the Fig. 5(e) and (f). Interestingly, the fitting parameters F_0 and C for EuO/KTO extracted from two different methods of analyzing quantum oscillations, the Landau plot and the FFT, are comparable (Table 1). Further, as displayed in Table 1, the carrier density calculated from the oscillations frequencies is smaller than the Hall carrier density for both interfaces, in line with previous reports <cit.>.
Next, to examine the field dependence of the cyclotron mass, we estimate the energy difference between two consecutive Landau levels, as given below
E_N+1-E_N = ħω_c^*.
where ω_c^* = eB/m_c^* is the cyclotron frequency and m_c^* is the cyclotron mass, including linear and parabolic dispersion as given in Hamiltonian in Eq. (2). It is important to note that the L-K formula in Eq. (1) is based on the effective mass theory with a parabolic dispersion. In the case of nonparabolic dispersion, the cyclotron mass extracted from the L-K analysis or cyclotron resonance naturally exhibits a dependence on energy and magnetic fields <cit.>.
By substituting E_N and E_N+1 in Eq. (10) and treating ħω_D as a correction to ħω_S, we get an approximate expression for ω_c^*:
ω_c^* = ω_S + ħω_D^2/[2(ħω_S - gμ_B B)]
Using ω_c^* = eB/m_c^*, ω_S = eB/m, and ω_D = √(2ev_F^2B/ħ), we obtain
1/m_c^* = 1/m + [mv_F^2/(ħ e - gμ_B m)]·(1/B)
This expression for the cyclotron mass for g = 0 is the same as derived directly using the cyclotron mass definition m_c=ħ^2/2π∂ A_k/∂ E, where A_k = π k^2 and E = ħ^2k^2/2m+ħ v_Fk (see Appendix A6). By fitting the experimental m_c(B) data for EuO/KTO to Eq. (12) in Fig. 6(b), we get the Fermi velocity v_F ∼ 2 × 10^4 m/s, one order of magnitude smaller than that for Dirac fermions in topological materials <cit.>.
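Since Eq. (12) makes 1/m_c linear in 1/B, the fit of Fig. 6(b) reduces to a straight-line regression; a sketch with g = 0 and a synthetic m_c(B) generated from illustrative parameters (not the fitted EuO/KTO values):

```python
import numpy as np
from scipy.constants import hbar, e, m_e

# Eq. (12) with g = 0: 1/m_c = 1/m + (m*vF**2/(hbar*e)) * (1/B).
def inv_mc(B, m, vF):
    return 1.0/m + m*vF**2/(hbar*e) / B

# Synthetic m_c(B) for illustrative m = 2.0 m_e and vF = 1e4 m/s.
B = np.linspace(14, 55, 12)
mc = 1.0 / inv_mc(B, 2.0*m_e, 1e4)

# Regression of 1/m_c on 1/B: intercept = 1/m, slope = m*vF^2/(hbar*e).
slope, intercept = np.polyfit(1.0/B, 1.0/mc, 1)
m_fit = 1.0/intercept
vF_fit = np.sqrt(slope*hbar*e/m_fit)
print("m = %.2f m_e, vF = %.1e m/s" % (m_fit/m_e, vF_fit))
```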
§ DISCUSSION AND CONCLUSION
After establishing that the peculiar findings in this study, namely the enhancement in cyclotron mass and oscillations frequency with the magnetic field, can be reasonably explained by considering a summation of linear and parabolic dispersion as described in Eq. (2), we now explore the potential origin of this non-ideal E-k dispersion close to the Fermi level.
The spin-orbit interaction is one of the key elements that can modify the parabolic bands, even in more conventional semiconductor heterointerfaces. For instance, in GaAs/Al_0.3Ga_0.7As heterostructures, the quasi-2D hole system experiences nonparabolic valence bands (with higher-order corrections in k) due to the anticrossings of light and heavy hole subbands <cit.>. In the case of STO(001)-2DES, the spin-orbit interaction leads to partial avoidance of band crossings between the light (d_xy) and heavy (d_xz and d_yz) bands along ΓM, resulting in an orbital dispersion reminiscent of Dirac cones <cit.>. Additionally, density functional theory predicts the existence of non-trivial topological states at the avoided crossings of subbands in the EuO/KTO(001) interface <cit.>. Given that there are four such points in the Brillouin zone of the STO and KTO 2DES where a Dirac-like dispersion occurs, it is plausible that electrons orbiting within the electronic states reconstructed from the combination of d_xy and d_xz/d_yz will encounter an unusual summation of linear and parabolic dispersion. Furthermore, the observed similarity in the oscillations aperiodicity in the studied systems and the surface states of topological insulators <cit.> hints at the existence of unique electronic states in the oxides-based 2DES, possibly related to topological effects. We note that the Hamiltonian stated in Eq. (2) bears a striking resemblance to the Hamiltonian of a 2D electron gas with Rashba spin-orbit coupling and a Zeeman effect <cit.>. Therefore, the obtained results from our analysis can be fairly interpreted using the Rashba model as well.
In summary, to gain a deeper understanding of the electronic band structure of the 2DES based on STO and KTO, we conducted a thorough investigation of quantum oscillations in magnetoresistance of high-mobility LAO/STO and EuO/KTO interfaces. By analyzing the observed oscillations at various tilt angles, we identified that both interfaces exhibit electron confinement in the two-dimensional plane at the interface, while a portion of carriers extends deep into the STO and KTO. Remarkably, for both interfaces, the oscillations originating from the 2D confined electrons displayed an increased frequency and cyclotron mass with increasing magnetic field strength. To explain these findings, we propose a scenario involving a combination of linear and parabolic dispersion relations. The presence of both types of dispersion relations reasonably explain the experimental observations and indicates the existence of non-trivial electronic states, possibly related to the topological effects. Furthermore, these interesting results shed light on the topological states predicted recently <cit.> and their experimental realization through anomalous effects in transport measurements <cit.> on the 2DES based on related perovskite oxides.
§ ACKNOWLEDGEMENTS
We acknowledge support from the National High Magnetic Field Laboratory, supported by the National Science Foundation through NSF/DMR-1644779 and the state of Florida. K.R., M.K.C. and N. H. and pulsed field measurements were supported by the US Department of Energy "Science of 100 Tesla" BES program. We acknowledge the support of HFML-RU/FOM, member of the European Magnetic Field Laboratory. M.K.C. acknowledges support from NSF IR/D program while serving at the National Science Foundation. Any opinion, findings, and conclusions or recommendations expressed in these materials are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
§ APPENDICES
§.§ Magnetotransport details for EuO/KTO
In order to check the data symmetry in up and down fields for the unpatterned EuO/KTO sample, we measured transport on this sample in both field directions at the lowest possible temperature T = 0.7 K and in the field perpendicular to the interface. The R_xx and R_yx (Fig. A1(a)) both show asymmetry in the field. Despite the different magnitude of R_xx and R_yx, we did not see a noticeable shift in the position or amplitude of oscillations. We used the symmetrized/antisymmetrized data (Fig. 1(b)), as described in the main text, to determine the carrier density and the mobility. Furthermore, since the high frequency of excitation improves the signal-to-noise ratio of the data measured in the pulsed magnetic field, we check its influence on the amplitude and frequency of oscillations. As displayed in Fig. A1(b), we do not observe any noticeable change in the oscillations pattern except that the oscillations quality improves by increasing the frequency of excitation.
§.§ In-plane magnetoresistance
§.§ FFT analysis of quantum oscillations at angles close to θ = 90^∘
To further verify that the position of SdH oscillations does not move by varying the angles in the vicinity of θ = 90^∘, we performed FFT analysis of the Δ R_xx(1/B) at a few angles. As shown in Fig A3, the position of the prominent peaks for both interfaces does not move with the angle.
§.§ L-K fit to FFT amplitude for LAO/STO
In order to evaluate the cyclotron mass m_c from the FFT spectra in A4(a), we fit the temperature dependence of the FFT amplitude with the L-K equation given below:
X(T) = X_0 [2π^2k_Bm_cT/(ħ eB_eff)]/sinh(2π^2k_Bm_cT/(ħ eB_eff))
where 1/B_eff=(1/B_min+1/B_max)/2. The calculated m_c values are displayed in Fig. A4(b) and (c) for frequencies F_1 and F_2, respectively.
§.§ FFT analysis of quantum oscillations in full-field range
§.§ Cyclotron mass in the case of combined linear and parabolic dispersion
Combining the linear and parabolic dispersion terms, we have E = ħ^2k^2/(2m) + ħ v_F k. The area of the Fermi surface in k-space can be given as A_k = π k^2. Substituting the E and A_k values into the cyclotron mass formula, we get
1/m_c = (2π/ħ^2)/(∂ A_k/∂ E) = 1/m + v_F/(ħ k)
As we know, ω_c = v/r = eB/m. Converting r into reciprocal space, we have k ≈ eB/(mv_F). Substituting this k into Eq. (A2), we get
1/m_c ≈ 1/m + (mv_F^2/(eħ))·(1/B)
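The algebra of Eqs. (A2)-(A3) can be verified symbolically; a consistency-check sketch:

```python
import sympy as sp

hbar, m, vF, k, e, B = sp.symbols('hbar m v_F k e B', positive=True)

E = hbar**2*k**2/(2*m) + hbar*vF*k   # combined parabolic + linear dispersion
A_k = sp.pi*k**2                     # Fermi-surface area in k-space

# m_c = (hbar^2 / 2 pi) dA_k/dE, evaluated via (dA_k/dk)/(dE/dk).
mc = hbar**2/(2*sp.pi) * sp.diff(A_k, k) / sp.diff(E, k)
print(sp.simplify(1/mc - (1/m + vF/(hbar*k))))   # -> 0, i.e. Eq. (A2)

# With the semiclassical estimate k ~ eB/(m vF) used in the text:
print(sp.simplify((1/mc).subs(k, e*B/(m*vF)) - (1/m + m*vF**2/(e*hbar*B))))  # -> 0
```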
§ REFERENCES

[1] A. D. Caviglia, M. Gabay, S. Gariglio, N. Reyren, C. Cancellieri, and J.-M. Triscone, Phys. Rev. Lett. 104, 126803 (2010).
[2] P. D. C. King, R. H. He, T. Eknapakul, P. Buaphet, S.-K. Mo, Y. Kaneko, S. Harashima, Y. Hikita, M. S. Bahramy, C. Bell, Z. Hussain, Y. Tokura, Z.-X. Shen, H. Y. Hwang, F. Baumberger, and W. Meevasana, Phys. Rev. Lett. 108, 117602 (2012).
[3] N. Wadehra, R. Tomar, R. M. Varma, R. Gopal, Y. Singh, S. Dattagupta, and S. Chakraverty, Nature Communications 11, 1 (2020).
[4] L. Li, C. Richter, J. Mannhart, and R. Ashoori, Nature Physics 7, 762 (2011).
[5] J. A. Bert, B. Kalisky, C. Bell, M. Kim, Y. Hikita, H. Y. Hwang, and K. A. Moler, Nature Physics 7, 767 (2011).
[6] K. Ueno, S. Nakamura, H. Shimotani, H. Yuan, N. Kimura, T. Nojima, H. Aoki, Y. Iwasa, and M. Kawasaki, Nature Nanotechnology 6, 408 (2011).
[7] Z. Chen, Z. Liu, Y. Sun, X. Chen, Y. Liu, H. Zhang, H. Li, M. Zhang, S. Hong, T. Ren, et al., Phys. Rev. Lett. 126, 026802 (2021).
[8] C. Liu, X. Yan, D. Jin, Y. Ma, H.-W. Hsiao, Y. Lin, T. M. Bretz-Sullivan, X. Zhou, J. Pearson, B. Fisher, et al., Science 371, 716 (2021).
[9] H. Zhang, Y. Yun, X. Zhang, H. Zhang, Y. Ma, X. Yan, F. Wang, G. Li, R. Li, T. Khan, et al., Phys. Rev. Lett. 121, 116803 (2018).
[10] P. Noël, F. Trier, L. M. V. Arche, J. Bréhin, D. C. Vaz, V. Garcia, S. Fusil, A. Barthélémy, L. Vila, M. Bibes, et al., Nature 580, 483 (2020).
[11] L. M. Vicente-Arche, J. Bréhin, S. Varotto, M. Cosset-Cheneau, S. Mallik, R. Salazar, P. Noël, D. C. Vaz, F. Trier, S. Bhattacharya, et al., Advanced Materials 33, 2102102 (2021).
[12] S. B. Chung, C. Chan, and H. Yao, Scientific Reports 6, 1 (2016).
[13] A. Barthelemy, N. Bergeal, M. Bibes, A. Caviglia, R. Citro, M. Cuoco, A. Kalaboukhov, B. Kalisky, C. Perroni, J. Santamaria, et al., Europhysics Letters 133, 17001 (2021).
[14] L. W. van Heeringen, G. A. de Wijs, A. McCollam, J. C. Maan, and A. Fasolino, Phys. Rev. B 88, 205140 (2013).
[15] M. Vivek, M. O. Goerbig, and M. Gabay, Phys. Rev. B 95, 165117 (2017).
[16] A. McCollam, S. Wenderich, M. Kruize, V. Guduru, H. Molegraaf, M. Huijben, G. Koster, D. H. Blank, G. Rijnders, A. Brinkman, et al., APL Materials 2, 022102 (2014).
[17] W. Meevasana, P. King, R. He, S. Mo, M. Hashimoto, A. Tamai, P. Songsiriritthigul, F. Baumberger, and Z. Shen, Nature Materials 10, 114 (2011).
[18] T. C. Rödel, F. Fortuna, S. Sengupta, E. Frantzeskakis, P. Le Fèvre, F. Bertran, B. Mercey, S. Matzen, G. Agnus, T. Maroutian, et al., Advanced Materials 28, 1976 (2016).
[19] K. Rubi, S. Zeng, F. Bangma, M. Goiran, A. Ariando, W. Escoffier, and U. Zeitler, Phys. Rev. Research 3, 033234 (2021).
[20] A. Santander-Syro, C. Bareille, F. Fortuna, O. Copie, M. Gabay, F. Bertran, A. Taleb-Ibrahimi, P. Le Fèvre, G. Herranz, N. Reyren, et al., Phys. Rev. B 86, 121107 (2012).
[21] A. Fête, S. Gariglio, C. Berthod, D. Li, D. Stornaiuolo, M. Gabay, and J.-M. Triscone, New Journal of Physics 16, 112002 (2014).
[22] M. Yang, K. Han, O. Torresin, M. Pierre, S. Zeng, Z. Huang, T. V. Venkatesan, M. Goiran, J. M. D. Coey, Ariando, and W. Escoffier, Applied Physics Letters 109, 122106 (2016).
[23] F. Trier, G. E. Prawiroatmodjo, Z. Zhong, D. V. Christensen, M. von Soosten, A. Bhowmik, J. M. G. Lastra, Y. Chen, T. S. Jespersen, and N. Pryds, Phys. Rev. Lett. 117, 096804 (2016).
[24] G. Cheng, A. Annadi, S. Lu, H. Lee, J.-W. Lee, M. Huang, C.-B. Eom, P. Irvin, and J. Levy, Phys. Rev. Lett. 120, 076801 (2018).
[25] K. Rubi, J. Gosteau, R. Serra, K. Han, S. Zeng, Z. Huang, B. Warot-Fonrose, R. Arras, E. Snoeck, M. Goiran, and W. Escoffier, npj Quantum Materials 5, 1 (2020).
[26] S. Harashima, C. Bell, M. Kim, T. Yajima, Y. Hikita, and H. Hwang, Phys. Rev. B 88, 085102 (2013).
[27] N. Kumar, N. Wadehra, R. Tomar, S. Kumar, Y. Singh, S. Dattagupta, and S. Chakraverty, Advanced Quantum Technologies 4, 2000081 (2021).
[28] H. Yan, S. Zeng, K. Rubi, G. J. Omar, Z. Zhang, M. Goiran, W. Escoffier, and A. Ariando, Advanced Materials Interfaces, 2201633 (2022).
[29] M. Diez, A. M. R. V. L. Monteiro, G. Mattoni, E. Cobanera, T. Hyart, E. Mulazimoglu, N. Bovenzi, C. W. J. Beenakker, and A. D. Caviglia, Phys. Rev. Lett. 115, 016803 (2015).
[30] D. Shoenberg, Magnetic Oscillations in Metals (Cambridge University Press, 2009).
[31] A. F. Santander-Syro, C. Bareille, F. Fortuna, O. Copie, M. Gabay, F. Bertran, A. Taleb-Ibrahimi, P. Le Fèvre, G. Herranz, N. Reyren, M. Bibes, A. Barthélémy, P. Lecoeur, J. Guevara, and M. J. Rozenberg, Phys. Rev. B 86, 121107 (2012).
[32] P. Delugas, A. Filippetti, V. Fiorentini, D. I. Bilc, D. Fontaine, and P. Ghosez, Phys. Rev. Lett. 106, 166807 (2011).
[33] Y. Xie, C. Bell, M. Kim, H. Inoue, Y. Hikita, and H. Y. Hwang, Solid State Communications 197, 25 (2014).
[34] L. Onsager, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 43, 1006 (1952).
[35] P. Delugas, A. Filippetti, V. Fiorentini, D. I. Bilc, D. Fontaine, and P. Ghosez, Phys. Rev. Lett. 106, 166807 (2011).
[36] A. Taskin and Y. Ando, Phys. Rev. B 84, 035301 (2011).
[37] E. Tisserond, J. Fuchs, M. Goerbig, P. Auban-Senzier, C. Mézière, P. Batail, Y. Kawasugi, M. Suda, H. Yamamoto, R. Kato, et al., Europhysics Letters 119, 67001 (2017).
[38] E. Palik and R. Wallis, Phys. Rev. 123, 131 (1961).
[39] B. Rössner, H. von Känel, D. Chrastina, G. Isella, and B. Batlogg, Semiconductor Science and Technology 22, S191 (2006).
[40] J. G. Analytis, R. D. McDonald, S. C. Riggs, J.-H. Chu, G. Boebinger, and I. R. Fisher, Nature Physics 6, 960 (2010).
[41] Z. Liu, B. Zhou, Y. Zhang, Z. Wang, H. Weng, D. Prabhakaran, S.-K. Mo, Z. Shen, Z. Fang, X. Dai, et al., Science 343, 864 (2014).
[42] R. Winkler, Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems, Vol. 191.
[43] S. Kakkar and C. Bera, Advanced Physics Research 2, 2200026 (2023).
[44] A. R. Wright and R. H. McKenzie, Phys. Rev. B 87, 085411 (2013).
[45] Y. Gao and Q. Niu, Proceedings of the National Academy of Sciences 114, 7295 (2017).
[46] J. N. Fuchs, F. Piéchon, and G. Montambaux, SciPost Phys. 4, 24 (2018).
[47] Z. Liu, H. Liu, J. Ma, X. Wang, G. Li, and H. Chen, npj Computational Materials 8, 208 (2022).
[48] Y. Zou, H. Shin, H. Wei, Y. Fan, B. A. Davidson, E.-J. Guo, Q. Chen, K. Zou, and Z. G. Cheng, npj Quantum Materials 7, 122 (2022).
|
http://arxiv.org/abs/2307.04044v1 | 20230708204524 | When greediness and self-confidence meet in a social dilemma | [
"Chaoqian Wang",
"Wenqiang Zhu",
"Attila Szolnoki"
] | physics.soc-ph | [
"physics.soc-ph",
"cond-mat.stat-mech",
"cs.GT",
"nlin.CG"
] |
When greediness and self-confidence meet in a social dilemma
Chaoqian Wang^1 ([email protected]): Conceptualization; Methodology; Writing
Wenqiang Zhu^2: Methodology; Validation
Attila Szolnoki^3 (corresponding author, [email protected]): Conceptualization; Validation; Writing
[1] Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA
[2] Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
[3] Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
A greedy personality is usually accompanied by arrogance and confidence. This work investigates the cooperation success condition in the context of biased payoff allocation and self-confidence. The first component allows the organizer in a spatial public goods game to receive a different proportion of goods than other participants. The second aspect influences the micro-level dynamics of strategy updates, wherein players can maintain their strategy with a certain weight. Analytical results are obtained on square lattices under the weak selection limit. If the organizer attempts to monopolize the public goods, cooperation becomes more attainable. If the confidence increases, cooperation is inhibited. Consequently, these elements have conflicting effects on cooperation, and their simultaneous presence can result in a heterogeneous change of the critical synergy factor. Our theoretical findings underscore the subtle implications of a mutual trait that may manifest as greediness or self-confidence under different circumstances, which are validated through Monte Carlo simulations.
* Examining biased allocation and self-confidence in spatial public goods game
* Calculating cooperation success conditions in weak selection limit
* Conflicting effects yield a non-monotonic critical synergy factor
* Analytical results validated via Monte Carlo simulations
Keywords: Public goods game; Weak selection; Biased allocation; Self-confidence; Evolutionary game theory
§ INTRODUCTION
The dynamism of various facets of reciprocity—be they direct, indirect, or network reciprocity—have been unequivocally demonstrated to wield significant influence over system behaviors, particularly when there is a need to sustain costly cooperation among self-interested, or more crudely put, selfish agents <cit.>. These mechanisms, chiefly concerned with pairwise interactions among players, have been observed to incorporate higher-order interactions <cit.>. The public goods game (PGG) is an illustrative example of such complex interactions, involving simultaneous decision-making processes through multi-body or group interactions <cit.>. Players may opt to contribute or abstain from contributing to a common pool, reaping the benefits of the overall contributions regardless of their individual decisions. In a spatial population, where players engage in limited yet enduring interactions with others, reciprocity manifests on an additional level <cit.>. Here, the intricate web of relations among agents means a player is not limited to a single game, but finds themselves immersed in several others. A pragmatic approach for a player would be to partake in the group where they serve as the central agent, encircled by proximate neighbors. Concurrently, said player also engages in games instigated by their neighbors. Consequently, a player positioned on a node with a k degree finds themselves partaking in G=k+1 PGGs. This setup could potentially underpin a reciprocal mutual aid system which promotes a degree of cooperation.
Assuming the most rudimentary scenario where players consistently maintain their strategies across all the games they participate in and disregard strategy diversity <cit.>, there still exists considerable flexibility in the implementation of a realistic model. To elaborate, not all group members need be treated equally: a player may be more incentivized to invest effort in a venture they have personally initiated. Such dedication could be recognized and appreciated by the others. This could be simply expressed by allocating enhanced contributions in a biased manner. Specifically, a 0≤ w_L ≤ 1 fraction of the total income is allotted to the central player while the remaining 1-w_L is distributed among the participating neighbors. The w_L=1/G scenario represents the traditional PGG model, where the income is equally distributed among all participants. The w_L=0 limit corresponds to the situation where the central player allocates all income to the neighbors. While this may initially seem irrational, there have been empirical studies indicating the existence of similar practices in certain tribes where partners generally offer a larger share to an associate in an ultimatum game, signaling their honest intentions <cit.>. The other extreme case, w_L=1, denotes that the central player retains all the benefits. Interestingly, even this seemingly greedy scenario can reflect a cooperative intent and represent a form of mutual aid <cit.>. One can contemplate a barn constructed by an entire Amish community, yet later solely utilized by a single farmer. This study aims to explore the potential ramifications when players exhibit a specific w_L value.
The unequal distribution of collective benefits has previously been the subject of extensive investigation <cit.>. For instance, how income is allocated remains a central issue in the ultimatum game <cit.>. For the current study, however, the diverse allocation within a group comprising several participants is of greater relevance. In certain scenarios, the individual portion accrued by a participant can be strongly contingent on their investment capability <cit.>. Additionally, the heterogeneous interaction topology is a critical aspect where income allocation is proportional to an agent's weight (degree) in the graph <cit.>. In more sophisticated model configurations, players possess an extra skill and keep track of their previous round earnings <cit.>. Yet, our current model is straightforward, emphasizing the fundamental element of biased allocation. For example, it can be applied to regular graphs where players have equal-sized neighborhoods, thus participating in an equal number of joint groups. Moreover, we presuppose homogeneous players who behave similarly and apply a pre-established allocation policy in each case. This characteristic could prove to be crucial, as it has been widely observed that a heterogeneous population, wherein players are unequal, could serve as a mechanism that encourages cooperation <cit.>.
Players may differ in their views about their groups, and their approach to strategies can also be distinct. For example, they may show reluctance to alter their existing strategies, a phenomenon explained from various perspectives. This could be a result of a specific cost related to change <cit.>, or it could be interpreted as a form of self-confidence <cit.>. This strategy change inertia or updating passivity has been identified as a separate mechanism that significantly influences the evolutionary process <cit.>. To quantitatively track this effect, we introduce a 0≤ w_R ≤ 1 weight parameter, which determines the likelihood of retaining the original strategy during the elementary dynamical process. At w_R=0, this effect is completely absent, and we revert to the traditional death–birth rule <cit.>. In the opposite extreme, when w_R=1, there is no proper evaluation because all agents adamantly stick to their original strategy, despite the theoretical cooperation success condition equating to the birth-death rule as w_R→ 1 <cit.>. In between these extremes, at w_R=1/G where G denotes the group size, the strategy of the central player and the strategies of the neighbors carry equal weight and we revert to the imitation rule <cit.>.
This work simultaneously considers the aforementioned effects within the framework of PGG, with players situated on a square lattice. It is important to note that the biased allocation, which can also be interpreted as autocratic behavior, and the indifference towards alternative players representing diverse strategies, may stem from a shared trait. If an individual exhibits higher levels of autocracy and retains more public goods when they organize a group, they may also display traits of arrogance, meaning they have a high self-regard and are not prone to learning from others' strategies. Therefore, the weight factors representing these traits can be similar in size. Moreover, all the mentioned details of the proposed model are strategy-neutral, making it unclear whether they support cooperation or not. Specifically, we assume the analytically feasible weak selection limit, where payoff values merely slightly alter the reproductive fitness of competing strategies.
Our main goal is to determine the critical synergy factor for the success of cooperation based on the control parameters and to uncover the consequences of their simultaneous presence. In the next section, we will define our model, and our primary findings will be presented in Section <ref>. Monte Carlo simulations were also conducted to validate and confirm our theoretical results. The comparisons will be presented in Section <ref>. Our primary conclusions are summarized in Section <ref>, where potential implications will also be discussed.
§ MODEL
In the study of spatial population dynamics, the model utilizes an L× L square lattice with periodic boundary conditions. Hence, the total population N=L^2. Each individual, referred to as an agent, inhabits a vertex on the lattice and forms a group of G=k+1 members, comprising of itself and k of its neighbors. Consequently, each agent partakes in 1+k groups, either organized by itself or by its neighbors. The group formed by agent i is represented by Ω_i. Consequently, the collection of agent i's neighbors can be expressed as Ω_i∖{i}. The common choice of group size is G=5 (k=4, von Neumann neighborhood) or G=9 (k=8, Moore neighborhood).
During each elementary Monte Carlo step, a random agent i is selected to update its strategy s_i based on the payoff acquired from participating in the public goods games. Specifically, agent i organizes a public goods game within its group Ω_i. Each participant j∈Ω_i contributes a cost c>0 to the group if cooperating (s_j=1) or contributes nothing if defecting (s_j=0). The combined investment of all participants, ∑_{j∈Ω_i} s_j c, is amplified by a synergy factor r>1 to generate the public goods, which are then distributed among group members.
Distinct from the conventional public goods game where the goods are evenly distributed, this study extends this notion by allowing the potential for uneven distribution between the organizer and other players. Specifically, the organizer is allotted a portion w_L (0≤ w_L≤ 1), while the remaining players are evenly allocated the remaining proportion 1-w_L; that is, each of the other players receives (1-w_L)/k. Hence, as the organizer, agent i receives a payoff of w_L r∑_{j∈Ω_i} s_j c - s_i c from group Ω_i. Correspondingly, agent i also participates in groups organized by its neighbors g∈Ω_i∖{i}, receiving a payoff in those groups as a standard player. The payoff of agent i is the average over the k+1 groups, calculated by:

π_i = 1/(k+1) · { ( w_L r ∑_{j∈Ω_i} s_j c - s_i c ) + ∑_{g∈Ω_i∖{i}} ( [(1-w_L)/k] · r ∑_{j∈Ω_g} s_j c - s_i c ) }.
As underscored, Eq. (<ref>) broadens the traditional public goods game by incorporating the self-allocation parameter w_L. At w_L=0, all public goods are allocated to the other players, while at w_L=1, all public goods are allocated to the organizer. At w_L=1/G, the public goods are distributed equally, reducing Eq. (<ref>) to the traditional public goods game scenario.
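To make the payoff rule concrete, the following minimal Python sketch (our illustration under the stated model, not the authors' code) evaluates the payoff expression above for one agent on a periodic L×L lattice with the von Neumann neighborhood (k=4); the array s holds the strategies (1 for cooperation, 0 for defection).

import numpy as np

def neighbors(i, j, L):
    """Von Neumann neighbors of site (i, j) on a periodic lattice."""
    return [((i - 1) % L, j), ((i + 1) % L, j),
            (i, (j - 1) % L), (i, (j + 1) % L)]

def payoff(s, i, j, r, c, w_L, k=4):
    """Average payoff of agent (i, j) over its k + 1 groups."""
    L = s.shape[0]
    # Group organized by the agent itself: it keeps the share w_L of the goods.
    pool = s[i, j] + sum(s[a, b] for a, b in neighbors(i, j, L))
    pi = w_L * r * pool * c - s[i, j] * c
    # Groups organized by each neighbor g: the agent receives (1 - w_L)/k.
    for g in neighbors(i, j, L):
        g_pool = s[g] + sum(s[a, b] for a, b in neighbors(*g, L))
        pi += (1 - w_L) / k * r * g_pool * c - s[i, j] * c
    return pi / (k + 1)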
In alignment with previous studies <cit.>, the payoff π_i is transformed to fitness F_i=exp(δπ_i), where δ→ 0^+ is a weak selection strength limit. Therefore, a strategy with a higher fitness has a marginal advantage to reproduce more frequently. To calculate the strategy updating probability, we also compute the payoff of agent i's neighbors and convert them to fitness in a similar manner. Consequently, the strategy of agent i is replaced by the strategy of an agent j∈Ω_i with probability W(s_i s_j), which is defined by the generalized death–birth rule <cit.>,
W(s_i ← s_j) = [ (1-w_R)/k · F_j ] / ( w_R F_i + [(1-w_R)/k] · ∑_{ℓ∈Ω_i∖{i}} F_ℓ )  for j ∈ Ω_i∖{i},
W(s_i ← s_i) = [ w_R F_i ] / ( w_R F_i + [(1-w_R)/k] · ∑_{ℓ∈Ω_i∖{i}} F_ℓ ).
In Eq. (<ref>), the probabilities are normalized, ∑_{j∈Ω_i} W(s_i ← s_j) = 1. Eq. (<ref>) extends the traditional death–birth rule <cit.> by introducing a self-learning weight w_R, following a similar logic to self-allocation. The agent i learns the strategy of agent j proportional to the fitness in the group Ω_i, taking self-learning into consideration. The case of j=i implies that agent i does not learn the strategy from others. At w_R=0, Eq. (<ref>) reduces to the traditional death–birth rule, where the fitness of agent i is disregarded. At w_R=1/G, Eq. (<ref>) simplifies to the imitation rule, where the fitness of agent i is compared equally with all neighbors. An elementary Monte Carlo step concludes once the randomly selected agent i in the system updates its strategy. A full Monte Carlo step encompasses N elementary steps, ensuring that the strategy of each agent is updated on average once.
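The corresponding update step can be sketched as follows (again our illustration, with hypothetical variable names): given the focal agent's payoff and its neighbors' payoffs, the generalized death–birth rule above assigns one weight to keeping the current strategy and one to imitating each neighbor.

import math

def update_probabilities(pi_focal, pi_neighbors, w_R, delta):
    """Probabilities of the generalized death-birth rule above.

    Returns (p_keep, p_adopt), where p_adopt[j] is the probability of
    copying neighbor j; p_keep + sum(p_adopt) == 1 by construction.
    """
    k = len(pi_neighbors)
    F_focal = math.exp(delta * pi_focal)
    F_nb = [math.exp(delta * p) for p in pi_neighbors]
    denom = w_R * F_focal + (1 - w_R) / k * sum(F_nb)
    p_keep = w_R * F_focal / denom
    p_adopt = [(1 - w_R) / k * F / denom for F in F_nb]
    return p_keep, p_adopt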
Our model's key parameters are the weight factors, w_L and w_R, which dictate the bias in allocation and the rate of self-learning, respectively. In Fig. <ref>, we unveil the comprehensive parameter plane, highlighting the important weight values. These values have particular implications. When w_L=1, the total earnings from the communal pool are allocated solely to the focal player. Conversely, when w_L=0, every participant benefits from the pool while the focal player gains nothing. The midway scenario of w_L=1/G recaptures the traditional public goods game (PGG) where all group members equally share the proceeds from the common pool. Shifting our attention to the other weight factor, w_R=0 signifies the classic death–birth dynamics, where the new strategy of the focal player is exclusively drawn from the strategies of the neighbors. When w_R=1/G, all strategies present in the group are potential candidates in equal measure, which aligns with the well-established imitation rule. Finally, in the limit where w_R → 1, players tenaciously cling to their current strategies, thereby causing the evolution to stagnate. On the parameter plane, we also demarcate with a dotted line the trajectory where both weight factors are simultaneously altered. This trajectory represents the typical system behavior when both the effects of biased allocation and self-confidence are operative in the extended model with equal weights.
In the ensuing section, we explore and analyze how the critical synergy factor for cooperation success evolves in the presence of these skewed allocations and self-confidence biases.
§ THEORETICAL ANALYSIS
We assume that the evolutionary process begins from a state with the presence of N_C cooperative players. In essence, the initial proportion of cooperation is N_C/N. When the selection strength, denoted as δ, equals zero, the system defaults to the dynamics of the voter model <cit.>. In this state, cooperation will ultimately dominate the entire population with a probability of ρ_C=N_C/N <cit.>. Consequently, under a minimal selection strength of δ→ 0^+, if ρ_C>N_C/N, selection leans towards cooperation, which implies that evolution promotes the success of cooperative behavior. Here, ρ_C can be gauged by the average final proportion of cooperation obtained from independent runs.
Our objective in Section <ref> is to pinpoint the condition that enables the success of cooperation, while Section <ref> focuses on exploring the inherent features of this condition.
§.§ The condition for cooperation success
To discern the requisite condition for cooperation success, we utilize the identity-by-descent (IBD) method <cit.>. Initially, we introduce n-step random walks. Fundamentally, this refers to moving to a random neighbor during each 1-step random walk. The quantity after completing n-step walks is represented as x^(n), where x could be π, F, and s. The x^(n) quantity is indistinguishable among various agents since the square lattice is a vertex-transitive graph, where an agent cannot identify its location by examining the network structure.
Based on the random walks' definition, we can rewrite the payoff calculation in Eq. (<ref>) to obtain an agent's expected payoff from n steps away, as described in Eq. (<ref>),
π^(n) = 1/(k+1) · { ( w_L r (k s^(n+1) + s^(n)) c - s^(n) c ) + k ( [(1-w_L)/k] · r (k s^(n+2) + s^(n+1)) c - s^(n) c ) }
      = ( w_L r/(k+1) - 1 ) s^(n) c + [(1+(k-1)w_L)/(k+1)] · r s^(n+1) c + [k(1-w_L)/(k+1)] · r s^(n+2) c,
which will later be useful for calculation.
To simplify, we assume a single initial cooperative player 1 in our analysis, implying that N_C=1 and evolution favors cooperation if ρ_C>1/N. In this scenario, the condition for cooperation success under weak selection can be rewritten as per the equivalent form <cit.> as shown in Eq. (<ref>),
⟨∂/∂δ (ℬ_1 - 𝒟_1)⟩_{δ=0, s_1=1} > 0,
where ⟨·⟩_{δ=0, s_1=1} represents the expected value under neutral drift (δ=0) and a single cooperator (s_1=1). ℬ_1 is the probability of agent 1 passing on its strategy to a neighbor. This occurs when a neighbor i∈Ω_1∖{1} of agent 1 is randomly selected with a 1/N probability to update the strategy and learns agent 1's strategy with a W(s_i ← s_1) probability. In the same vein, 𝒟_1 is the probability of agent 1's strategy being supplanted by a neighbor. This transpires when agent 1 is randomly selected with a 1/N probability to update its strategy and learns the strategy of a neighbor j∈Ω_1∖{1} with a W(s_1 ← s_j) probability. By applying Eq. (<ref>) and F_i=exp(δπ_i), we arrive at the equations summarized as follows:
ℬ_1 = ∑_{i∈Ω_1∖{1}} (1/N) W(s_i ← s_1) = ∑_{i∈Ω_1∖{1}} (1/N) · [ (1-w_R)/k · exp(δπ_1) ] / ( w_R exp(δπ_i) + [(1-w_R)/k] · ∑_{ℓ∈Ω_i∖{i}} exp(δπ_ℓ) ),
𝒟_1 = (1/N) ∑_{j∈Ω_1∖{1}} W(s_1 ← s_j) = (1/N) ∑_{j∈Ω_1∖{1}} [ (1-w_R)/k · exp(δπ_j) ] / ( w_R exp(δπ_1) + [(1-w_R)/k] · ∑_{ℓ∈Ω_1∖{1}} exp(δπ_ℓ) ).
In the further steps, we substitute Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) and compute it, as shown in Eq. (<ref>).
⟨∂/∂δ (ℬ_1 - 𝒟_1)⟩_{δ=0, s_1=1} > 0

⇔ (1-w_R)/(Nk) · ( k⟨π_1⟩ - w_R ⟨∑_{i∈Ω_1∖{1}} π_i⟩ - [(1-w_R)/k] · ⟨∑_{i∈Ω_1∖{1}} ∑_{ℓ∈Ω_i∖{i}} π_ℓ⟩ )
  - (1-w_R)/(Nk) · ( -k w_R ⟨π_1⟩ + ⟨∑_{j∈Ω_1∖{1}} π_j⟩ - (1-w_R) ⟨∑_{ℓ∈Ω_1∖{1}} π_ℓ⟩ ) > 0

⇔ ⟨π_1⟩ - 2w_R/[k(1+w_R)] · ⟨∑_{j∈Ω_1∖{1}} π_j⟩ - (1-w_R)/[k^2(1+w_R)] · ⟨∑_{i∈Ω_1∖{1}} ∑_{ℓ∈Ω_i∖{i}} π_ℓ⟩ > 0

⇔ π^(0) - [2w_R/(1+w_R)] · π^(1) - [(1-w_R)/(1+w_R)] · π^(2) > 0,

where every expectation ⟨·⟩ above is taken at δ=0 with s_1=1.
Following the definition of random walks starting from agent 1, we used Eq. (<ref>) in the last step of Eq. (<ref>).
π^(0) = ⟨π_1⟩_{δ=0, s_1=1}, π^(1) = (1/k) · ⟨∑_{j∈Ω_1∖{1}} π_j⟩_{δ=0, s_1=1}, π^(2) = (1/k^2) · ⟨∑_{i∈Ω_1∖{1}} ∑_{ℓ∈Ω_i∖{i}} π_ℓ⟩_{δ=0, s_1=1}.
To transform the strategy quantity s^(n) into walk quantity p^(n), the probability that one returns to the starting vertex after n-step random walks, we use the substitution in Eq. (<ref>), as suggested by Allen and Nowak <cit.>:
s^(n)-s^(n+1)=μ/2(Np^(n)-1)+𝒪(μ^2),
where μ→ 0^+ is an auxiliary parameter, which will be eliminated later, and 𝒪(μ^2)=0. Based on Eq. (<ref>), we can then further develop Eq. (<ref>):
s^(n) - [2w_R/(1+w_R)] · s^(n+1) - [(1-w_R)/(1+w_R)] · s^(n+2)
 = (s^(n) - s^(n+1)) + [(1-w_R)/(1+w_R)] · (s^(n+1) - s^(n+2))
 = (μ/2) · ( N p^(n) + [(1-w_R)/(1+w_R)] · N p^(n+1) - 2/(1+w_R) ) + 𝒪(μ^2).
Utilizing this, we can further calculate the condition for cooperation success as given by Eq. (<ref>). First, we use Eq. (<ref>) to replace the payoff quantity π^(n) with strategy quantity s^(n). Second, we use Eq. (<ref>) to replace the strategy quantity s^(n) with walk quantity p^(n). This logic leads us to Eq. (<ref>):
π^(0) - [2w_R/(1+w_R)] · π^(1) - [(1-w_R)/(1+w_R)] · π^(2) > 0

⇔ ( w_L r/(k+1) - 1 ) s^(0) c + [(1+(k-1)w_L)/(k+1)] r s^(1) c + [k(1-w_L)/(k+1)] r s^(2) c
  - [2w_R/(1+w_R)] · { ( w_L r/(k+1) - 1 ) s^(1) c + [(1+(k-1)w_L)/(k+1)] r s^(2) c + [k(1-w_L)/(k+1)] r s^(3) c }
  - [(1-w_R)/(1+w_R)] · { ( w_L r/(k+1) - 1 ) s^(2) c + [(1+(k-1)w_L)/(k+1)] r s^(3) c + [k(1-w_L)/(k+1)] r s^(4) c } > 0

⇔ ( w_L r/(k+1) - 1 ) · ( s^(0) - [2w_R/(1+w_R)] s^(1) - [(1-w_R)/(1+w_R)] s^(2) )
  + [(1+(k-1)w_L)/(k+1)] · r · ( s^(1) - [2w_R/(1+w_R)] s^(2) - [(1-w_R)/(1+w_R)] s^(3) )
  + [k(1-w_L)/(k+1)] · r · ( s^(2) - [2w_R/(1+w_R)] s^(3) - [(1-w_R)/(1+w_R)] s^(4) ) > 0

⇔ ( w_L r/(k+1) - 1 ) · ( N p^(0) + [(1-w_R)/(1+w_R)] N p^(1) - 2/(1+w_R) )
  + [(1+(k-1)w_L)/(k+1)] · r · ( N p^(1) + [(1-w_R)/(1+w_R)] N p^(2) - 2/(1+w_R) )
  + [k(1-w_L)/(k+1)] · r · ( N p^(2) + [(1-w_R)/(1+w_R)] N p^(3) - 2/(1+w_R) ) > 0.
The walk quantity p^(n) can be directly perceived by analyzing the topology of the network structure. One remains in the starting vertex if not walking, so p^(0)=1. A single step cannot encompass leaving and returning to the starting vertex, hence p^(1)=0. On a square lattice, the probability that one returns to the starting vertex after two steps is p^(2)=1/k. Finally, the value of p^(3) varies from case to case. In short, p^(3)=0 for von Neumann neighborhood and p^(3)=3/64 for Moore neighborhood (for more details, refer to Ref. <cit.>).
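These return probabilities are easy to verify numerically. The short sketch below (our own check) estimates p^(n) by sampling short random walks; for walks of two or three steps the periodic boundary is irrelevant on any lattice with L ≥ 4, so unconstrained walks suffice. It reproduces p^(2)=1/k for both neighborhoods, p^(3)=0 for the von Neumann case, and p^(3)=3/64 ≈ 0.047 for the Moore case.

import random

def p_return(n, offsets, trials=10**6, seed=0):
    """Estimate p^(n): probability an n-step random walk returns home."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(n):
            dx, dy = rng.choice(offsets)
            x += dx
            y += dy
        hits += (x == 0 and y == 0)
    return hits / trials

von_neumann = [(1, 0), (-1, 0), (0, 1), (0, -1)]
moore = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

print(p_return(2, von_neumann), p_return(3, von_neumann))  # ~0.25, 0.0
print(p_return(2, moore), p_return(3, moore))              # ~0.125, ~0.0469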
By applying the previously mentioned values of p^(0)=1, p^(1)=0, and p^(2)=1/k, but retaining p^(3), we can further calculate Eq. (<ref>) to reach the final result as shown in Eq. (<ref>):
π^(0) - [2w_R/(1+w_R)] · π^(1) - [(1-w_R)/(1+w_R)] · π^(2) > 0

⇔ ( w_L r/(k+1) - 1 ) · ( N - 2/(1+w_R) )
  + [(1+(k-1)w_L)/(k+1)] · r · ( [(1-w_R)/(1+w_R)] · (N/k) - 2/(1+w_R) )
  + [k(1-w_L)/(k+1)] · r · ( N/k + [(1-w_R)/(1+w_R)] · N p^(3) - 2/(1+w_R) ) > 0

⇔ r > (N-2+N w_R)(G-1)G / [ N(G-1)^2 (1-w_L)(1-w_R) p^(3) + N(G-2)(w_L - w_L w_R + w_R) + (N+2-2G)G ] ≡ r^⋆.
This provides the condition r>r^⋆ for cooperation success. Notably, the critical synergy factor r^⋆ is only a function of the population N, group size G, higher-order network structure p^(3), self-allocation w_L, and updating inertia w_R.
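Since r^⋆ is an explicit function of these quantities, it can be transcribed directly. The following sketch (our transcription, not the authors' released code) reproduces two of the von Neumann values quoted later in the numerical section.

def r_star(N, G, p3, w_L, w_R):
    """Critical synergy factor r* from the closed form above."""
    num = (N - 2 + N * w_R) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - w_L) * (1 - w_R) * p3
           + N * (G - 2) * (w_L - w_L * w_R + w_R)
           + (N + 2 - 2 * G) * G)
    return num / den

# Von Neumann neighborhood (G = 5, p3 = 0), diagonal weights w_L = w_R = w:
print(round(r_star(25, 5, 0.0, 0.3, 0.3), 4))      # 4.9493
print(round(r_star(10000, 5, 0.0, 0.6, 0.6), 4))   # 4.2571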
Table <ref> summarizes the primary outcomes related to the critical synergy factor, r^⋆, along with their corresponding large population limits (N→ +∞), derived from taking specific parameters in Eq. (<ref>). Following the convention in much of the prior literature, we consider the death–birth rule (w_R=0) as the benchmark scenario. In this context, we present the reduced r^⋆ values corresponding to three distinct scenarios: equal allocation (w_L=1/G), allocation to other players (w_L=0), and allocation to the organizer (w_L=1). In addition, we explore a situation where the self-allocation and updating inertia are congruent (w_L=w_R≡ w), leading to consistency in the self-loops of allocation and updating. The trajectories of this case in the w_R-w_L parameter plane are visually represented in Fig. <ref> for an intuitive understanding.
Table <ref> offers additional insights into the main outcomes associated with the critical synergy factor, r^⋆, in relation to specific neighborhood types. We concentrate on two commonly used cases: von Neumann neighborhood and Moore neighborhood. The former, von Neumann neighborhood, lacks triangle motifs, resulting in p^(3)=0. Conversely, the latter, Moore neighborhood, is a rudimentary structure on a two-dimensional lattice that incorporates overlapping neighbors, yielding p^(3)=3/64 <cit.>.
§.§ The conflict between self-allocation and self-confidence
Utilizing the analytical expression of the critical synergy factor r^⋆, we can examine the combined impact of self-allocation w_L and self-confidence w_R on cooperation. From an intuitive perspective, a decrease in the r^⋆ value needed for cooperation success (i.e., r>r^⋆) fosters cooperation.
By referring to Eq. (<ref>), we can confirm that ∂ r^⋆/∂ w_L<0 holds for the specified neighborhood types. This indicates that an increase in self-allocation diminishes r^⋆ and thereby enhances cooperation. Fig. <ref>(a) portrays the critical synergy factor r^⋆ as a function of self-allocation w_L for von Neumann neighborhood under the condition of death–birth updating (w_R=0). Regardless of the population size, directing the public goods towards the organizer invariably stimulates cooperation.
Similarly, we find ∂ r^⋆/∂ w_R>0 for the designated neighborhood types. This suggests that an increase in self-confidence, or alternatively, an increase in updating inertia, acts to obstruct cooperation. This effect aligns with observations made in simpler models by prior studies <cit.>. With the von Neumann neighborhood and w_L=1/G, the critical synergy factor r^⋆ as a function of updating inertia is depicted in Fig. <ref>(b). Across varying population sizes, an increase in updating inertia consistently hampers cooperation.
The aforementioned observations create a fascinating dynamic when both effects coexist. Specifically, the divergent outcomes of biased allocation and self-confidence pose a question: how does the system respond when we enhance the weights of these factors simultaneously? Does it stimulate or inhibit cooperation? To explore this, we set w_L=w_R≡ w and illustrate the critical synergy factor r^⋆ as a function of w in Fig. <ref>(c). The figure reveals that an initial increase in the self-loop of allocation and strategy updating fosters cooperation, but once the weight surpasses a certain level, this effect reverses, ultimately discouraging cooperation. There exists an optimal self-loop weight w_0, which minimizes the r^⋆ value and is thus most beneficial for cooperation. We can derive the analytical expression for this optimal self-loop value by solving ∂ r^⋆/∂ w=0. The solution is given as:
w_0 = (1/N) · ( -(N-2) + √2 · √( 2(N-1)^2 + N(N-G)(G-1) / [ (G-1)^2 p^(3) - G + 2 ] ) ),
which is a function of population size N, group size G, and the higher-order network structure p^(3). This weight level provides the most favorable condition for the evolution of cooperation.
By setting N→ +∞ in Eq. (<ref>), we obtain the large population limit of w_0 as:
w_0 = -1 + √2 · √( 2 + (G-1) / [ (G-1)^2 p^(3) - G + 2 ] ).
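A quick consistency check (our sketch, reusing the r_star function defined in the earlier sketch) confirms that this w_0 indeed minimizes r^⋆ along the diagonal w_L = w_R = w.

import math

def w_opt(N, G, p3):
    """Optimal diagonal weight w_0 from the closed form above."""
    D = (G - 1) ** 2 * p3 - G + 2
    root = math.sqrt(2) * math.sqrt(2 * (N - 1) ** 2 + N * (N - G) * (G - 1) / D)
    return (-(N - 2) + root) / N

N, G, p3 = 10000, 5, 0.0
w0 = w_opt(N, G, p3)
# Brute-force minimum of r_star on a fine grid along the diagonal:
w_best = min((i / 10000 for i in range(10001)),
             key=lambda w: r_star(N, G, p3, w, w))
print(round(w0, 4), w_best)  # the two values agree to grid precision (~0.155)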
To provide a broader perspective on the simultaneous influences of these factors, we introduce a heat map of the critical synergy factor r^⋆ across the complete w_R-w_L parameter plane in Fig. <ref>. The diagonal dotted line within the figure represents the trajectory discussed in Fig. <ref>(c). This plot reveals certain general characteristics regarding the collective impact of self-loop effects. Specifically, the immediate effect of biased payoff allocation on the critical synergy factor is more pronounced when w_R is small, whereas the w_R dependency of r^⋆ is moderate for large w_R values. The inverse is true when considering the w_R dependency of r^⋆, as it changes more dramatically when w_L is low, while the w_R dependency remains moderate for small w_L values.
When maintaining the aforementioned diagonal trajectory, we can identify some general trends regarding the w-dependence. Specifically, we can confirm that the r^⋆ value at w=0 is consistently lower than the one at w=1, that is, r^⋆|_{w=0} < r^⋆|_{w=1}. Applying w=1 and w=0 in Eq. (<ref>), we find r^⋆|_{w=1} = (N-1)G/(N-G) and r^⋆|_{w=0} = (N-2)(G-1)G/[N(G-1)^2 p^(3)+(N+2-2G)G], respectively. Given that N(G-1)^2 p^(3) ≥ 0 always stands, we deduce r^⋆|_{w=0} ≤ (N-2)(G-1)G/[(N+2-2G)G] = (N-2)(G-1)/(N+2-2G). A direct cross-multiplication gives (N-1)G(N+2-2G) - (N-2)(G-1)(N-G) = N(N-(G-1)^2-1), which is positive whenever N > (G-1)^2+1; this covers every von Neumann lattice and the larger Moore lattices studied here, and the ordering is confirmed numerically for the remaining small Moore case as well. Therefore, r^⋆|_{w=0} < r^⋆|_{w=1} holds throughout. This indicates that, on a larger scale, when both self-loop effects are significant, the outcome is dominated by the impact of self-confidence, which hinders cooperation. This effect is more pronounced in a topology containing triangle motifs, such as the Moore neighborhood where each player forms a G=9-member group with overlapping neighbors. This case is discussed in more detail in Appendix <ref>.
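This ordering can also be spot-checked numerically with the r_star sketch from above:

# Spot-check r*|_{w=0} < r*|_{w=1} for all lattice settings used in this work.
for N in (25, 400, 10000):
    for G, p3 in ((5, 0.0), (9, 3 / 64)):
        assert r_star(N, G, p3, 0, 0) < r_star(N, G, p3, 1, 1), (N, G)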
§ NUMERICAL SIMULATION
To validate our theoretical analysis, we performed Monte Carlo simulations. Initially, each agent is randomly assigned either cooperation or defection, such that N_C≈ N/2. Consequently, as outlined at the beginning of Section <ref>, evolution favors cooperation if ρ_C>1/2. To compute the expected cooperation level ρ_C, we permit up to 40,000 full Monte Carlo steps per run (if all agents become either cooperators or defectors, that specific run may be terminated earlier), and record the cooperation proportion at the last step as the result of each run. The expected cooperation level ρ_C is then the average across multiple independent runs. Based on our empirical exploration, for N=25, ρ_C is the average over 1,000,000 runs; for N=400, ρ_C is the average over 10,000 runs; for N=10000, ρ_C is obtained from a single run.
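For reference, the following self-contained Python sketch mirrors the simulation procedure just described (it is our reconstruction under the stated model, not the authors' code; lattice size and step counts are illustrative).

import math, random

def simulate(L=5, r=5.0, c=1.0, w_L=0.3, w_R=0.3, delta=0.01,
             max_steps=40000, seed=0):
    """One run: returns the final cooperation fraction on an L x L lattice."""
    rng = random.Random(seed)
    N, k = L * L, 4
    s = [rng.randint(0, 1) for _ in range(N)]        # random initial strategies
    nb = [[(i - L) % N, (i + L) % N,                 # von Neumann neighbors
           i - i % L + (i - 1) % L, i - i % L + (i + 1) % L] for i in range(N)]

    def payoff(i):
        pi = w_L * r * c * (s[i] + sum(s[j] for j in nb[i])) - s[i] * c
        for g in nb[i]:
            pi += (1 - w_L) / k * r * c * (s[g] + sum(s[j] for j in nb[g])) \
                  - s[i] * c
        return pi / (k + 1)

    for _ in range(max_steps):
        for _ in range(N):                           # one full Monte Carlo step
            i = rng.randrange(N)
            cand = [i] + nb[i]
            fit = [math.exp(delta * payoff(j)) for j in cand]
            wts = [w_R * fit[0]] + [(1 - w_R) / k * f for f in fit[1:]]
            s[i] = s[rng.choices(cand, weights=wts)[0]]
        if sum(s) in (0, N):                         # absorbed: all D or all C
            break
    return sum(s) / N

# rho_C is then the average of simulate(seed=run) over many independent runs.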
Using the von Neumann neighborhood, Fig. <ref> illustrates the expected cooperation level ρ_C as a function of the synergy factor r at w=0, w=0.3, and w=0.6. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) gives r^⋆=5.4118, 4.9493, 5.1351 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we get r^⋆=4.0612, 4.0280, 4.2992. In Fig. <ref>(c), where N=10000, we obtain r^⋆=4.0024, 3.9835, 4.2571. As can be observed, the cooperation level ρ_C rises with an increase in the synergy factor r, and ρ_C>0.5 when r>r^⋆, thus affirming the theoretical analysis.
§ CONCLUSION
Collaborating on a project does not necessarily equate to equal benefits from the resulting income. For instance, an individual acting as the organizer of a group may allocate a different proportion of public goods to themselves than to other participants. If everyone follows the same protocol, allocating more public goods to the organizer boosts the gains in the game managed by oneself, but simultaneously leads to fewer gains in games organized by neighbors. Consequently, the impact of biased allocation on the level of cooperation is far from a simple question. Prior studies have demonstrated that this seemingly strategy-neutral mechanism actually promotes cooperation by preventing the diffusion of public goods <cit.>.
On the other hand, if an individual allocates more public goods to themselves as an organizer, this attitude might also imply that the individual is more authoritative and confident, and less inclined to change their current strategy. Past observations have revealed that this inertia in strategy updating inhibits cooperation by slowing the aggregation of cooperators <cit.>. Thus, it can be concluded that biased allocation and strategy updating inertia play opposing roles in the evolution of cooperation.
Assuming that the measure of biased allocation and updating inertia are interconnected, this study focuses on their simultaneous presence and explores how they jointly influence cooperation. We derive a theoretical solution on a two-dimensional square lattice and identify the critical synergy factor r^⋆ required for cooperation success. Consequently, cooperators are more likely to dominate when r>r^⋆. Our primary interest lies in how r^⋆ fluctuates on the plane of weight factors, which determine biased allocation and the extent of strategy updating inertia. Upon introducing the self-loop w of allocation and updating, it initially promotes and later, for larger w values, inhibits cooperation. In this scenario, we can identify an optimal self-loop value w_0 that is most conducive to cooperation. In other cases, where the network topology contains triangle motifs, the impact of strategy inertia is more potent, thus increasing the self-loop w tends to hamper cooperation.
Moreover, we theoretically demonstrate that the cooperation threshold at w=0 is always smaller than at w=1. This suggests that the inhibitory effect of self-confidence on cooperation generally outweighs the facilitative effect of self-allocation on cooperation when the allocation and updating self-loop w takes extreme values. These observations suggest that although biased allocation may appear as an unfair protocol, its impact on cooperation is decidedly not detrimental. However, the self-confidence-driven strategy updating inertia is always harmful, and cannot be offset by the effect of allocation.
§ ACKNOWLEDGEMENT
A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948.
§ MOORE NEIGHBORHOOD
Our primary results are summarized in Eq. (<ref>). It shows that topology influences the critical synergy factor r^⋆ only slightly through the parameter G. However, a more complex consequence is embodied in the value of p^(3). This factor creates a stark distinction between the von Neumann and Moore neighborhoods, regardless of using the same vertex-transitive square lattice. For the von Neumann neighborhood, the three-step quantity p^(3)=0, as there is no triangle motif. To explore the consequences of a non-zero p^(3), we examine the Moore neighborhood, the simplest two-dimensional lattice that contains higher-order structure where p^(3)=3/64 <cit.>.
The first two panels of Fig. <ref> confirm that the separate impacts of biased allocation and strategy updating inertia are similar to those observed for the von Neumann neighborhood. However, their combined influence on r^⋆ diverges from the previous observation, as the self-confidence-based inertia is significantly stronger in this context, making the increase of the mutual weight factor w detrimental to the success of cooperation.
This effect is generally valid and becomes evident when we compare the color-coded heat map of the critical synergy factor r^⋆ on the w_R-w_L parameter plane. The main difference between the last panels of Fig. <ref> and Fig. <ref> is the minimal change in the value of r^⋆ as we move horizontally on the parameter plane of Fig. <ref>(c). This suggests that changes in w_L have only a minimal impact on cooperation, because the value of w_R is the determining factor here.
Our final Fig. <ref> presents a comparison of the results from our analytical and numerical calculations. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) yields r^⋆=10.6154, 10.6087, 11.4000 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we obtain r^⋆=6.1546, 6.8158 for w=0, 0.3. In Fig. <ref>(c), where N=10000, we calculate r^⋆=6.0060, 6.6725, 7.5061 for w=0, 0.3, and 0.6. As before, the simulations confirm our theoretical predictions well.
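The Moore-neighborhood numbers above follow directly from the closed form; reusing the r_star sketch from the main text gives:

# Moore neighborhood: G = 9, p^(3) = 3/64; diagonal weights w_L = w_R = w.
for N in (25, 10000):
    print(N, [round(r_star(N, 9, 3 / 64, w, w), 4) for w in (0.0, 0.3, 0.6)])
# 25    -> [10.6154, 10.6087, 11.4]
# 10000 -> [6.006, 6.6725, 7.5061]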
§ REFERENCES

[nowak_s06] M. A. Nowak, Five rules for the evolution of cooperation, Science 314 (2006) 1560–1563.
[perc_jrsi13] M. Perc, J. Gómez-Gardeñes, A. Szolnoki, L. M. Floría, Y. Moreno, Evolutionary dynamics of group interactions on structured populations: a review, J. R. Soc. Interface 10 (2013) 20120997.
[sigmund_10] K. Sigmund, The Calculus of Selfishness, Princeton University Press, Princeton, NJ, 2010.
[wang_jw_pla22] J. Wang, W. Dai, J. He, F. Yu, X. Shen, Persistent imitation paves the way for cooperation in public goods game, Phys. Lett. A 447 (2022) 128302.
[xiao_sl_epjb22] S. Xiao, L. Zhang, H. Li, Q. Dai, J. Yang, Environment-driven migration enhances cooperation in evolutionary public goods games, Eur. Phys. J. B 95 (2022) 67.
[wang2022reversed] C. Wang, A. Szolnoki, A reversed form of public goods game: equivalence and difference, New J. Phys. 24 (2022) 123030.
[hua_sj_csf3] S. Hua, L. Liu, Facilitating the evolution of cooperation through altruistic punishment with adaptive feedback, Chaos, Solit. and Fract. 173 (2023) 113669.
[szolnoki_pre09c] A. Szolnoki, M. Perc, G. Szabó, Topology-independent impact of noise on cooperation in spatial public goods games, Phys. Rev. E 80 (2009) 056109.
[yu_fy_csf22] F. Yu, J. Wang, J. He, Inequal dependence on members stabilizes cooperation in spatial public goods game, Chaos, Solit. and Fract. 165 (2022) 112755.
[wang2021public] C. Wang, Q. Pan, X. Ju, M. He, Public goods game with the interdependence of different cooperative strategies, Chaos, Solit. and Fract. 146 (2021) 110871.
[wang2022between] C. Wang, C. Huang, Between local and global strategy updating in public goods game, Physica A 606 (2022) 128097.
[wang2023public] C. Wang, C. Sun, Public goods game across multilayer populations with different densities, Chaos, Solit. and Fract. 168 (2023) 113154.
[wang_cq_c23] C. Wang, C. Sun, Zealous cooperation does not always promote cooperation in public goods games, Chaos 33 (2023) 063111.
[xie_k_csf23] K. Xie, X. Liu, H. Wang, Y. Jiang, Multi-heterogeneity public goods evolutionary game on lattice, Chaos, Solit. and Fract. 172 (2023) 113562.
[ding_r_csf23] R. Ding, X. Wang, J. Zhao, C. Gu, T. Wang, The evolution of cooperation in spatial public goods games under a risk-transfer mechanism, Chaos, Solit. and Fract. 169 (2023) 113236.
[zhang_cy_epl10] C. Zhang, J. Zhang, G. Xie, L. Wang, Diversity of game strategies promotes the evolution of cooperation in public goods games, EPL 90 (2010) 68005.
[henrich_aer01] J. Henrich, R. Boyd, S. Bowles, C. Camerer, E. Fehr, H. Gintis, R. McElreath, In search of homo economicus: behavioral experiments in 15 small-scale societies, Am. Econ. Rev. 91 (2001) 73–78.
[nowak_sa95] M. A. Nowak, R. M. May, K. Sigmund, Arithmetics of mutual help, Scientific American 272 (1995) 76–81.
[allen2013spatial] B. Allen, J. Gore, M. A. Nowak, Spatial dilemmas of diffusible public goods, eLife 2 (2013) e01169.
[su2018understanding] Q. Su, L. Wang, H. E. Stanley, Understanding spatial public goods games on three-layer networks, New J. Phys. 20 (2018) 103030.
[zhang_hf_pa12] H. Zhang, D. Shi, R. Liu, B. Wang, Dynamic allocation of investments promotes cooperation in spatial public goods game, Physica A 391 (2012) 2617–2622.
[cong_r_epl16] R. Cong, K. Li, L. Wang, Q. Zhao, Cooperation induced by wise incentive allocation in spontaneous institution, EPL 115 (2016) 38002.
[szolnoki_amc20] A. Szolnoki, X. Chen, Blocking defector invasion by focusing on the most successful partner, Appl. Math. Comput. 385 (2020) 125430.
[wang_q_amc18] Q. Wang, N. He, X. Chen, Replicator dynamics for public goods game with resource allocation in large populations, Appl. Math. Comput. 328 (2018) 162–170.
[bin_l_amc23] L. Bin, W. Yue, Co-evolution of reputation-based preference selection and resource allocation with multigame on interdependent networks, Appl. Math. Comput. 456 (2023) 128128.
[guth_jebo82] W. Güth, R. Schmittberger, B. Schwarze, An experimental analysis of ultimatum bargaining, J. Econ. Behav. Org. 3 (1982) 367–388.
[sigmund_sa02] K. Sigmund, E. Fehr, M. A. Nowak, The economics of fair play, Sci. Am. 286 (2002) 82–87.
[szolnoki_prl12] A. Szolnoki, M. Perc, G. Szabó, Defense mechanisms of empathetic players in the spatial ultimatum game, Phys. Rev. Lett. 109 (2012) 078701.
[wang_xf_srep14] X. Wang, X. Chen, L. Wang, Random allocation of pies promotes the evolution of fairness in the ultimatum game, Sci. Rep. 4 (2014) 4534.
[chen_w_epl15] W. Chen, T. Wu, Z. Li, N. Wu, L. Wang, Heterogenous allocation of chips promotes fairness in the ultimatum game, EPL 109 (2015) 68006.
[szolnoki_epl12] A. Szolnoki, M. Perc, G. Szabó, Accuracy in strategy imitations promotes the evolution of fairness in the spatial ultimatum game, EPL 100 (2012) 28005.
[fan_rg_pa17] R. Fan, Y. Zhang, M. Luo, H. Zhang, Promotion of cooperation induced by heterogeneity of both investment and payoff allocation in spatial public goods game, Physica A 465 (2017) 454–463.
[peng_d_epjb10] D. Peng, H.-X. Yang, W.-X. Wang, G. R. Chen, B.-H. Wang, Promotion of cooperation induced by nonuniform payoff allocation in spatial public goods game, Eur. Phys. J. B 73 (2010) 455–459.
[meloni_rsos17] S. Meloni, C.-Y. Xia, Y. Moreno, Heterogeneous resource allocation can change social hierarchy in public goods games, R. Soc. Open Sci. 4 (2017) 170092.
[perc_pre08] M. Perc, A. Szolnoki, Social diversity and promotion of cooperation in the spatial prisoner's dilemma game, Phys. Rev. E 77 (2008) 011904.
[santos_n08] F. C. Santos, M. D. Santos, J. M. Pacheco, Social diversity promotes the emergence of cooperation in public goods games, Nature 454 (2008) 213–216.
[szabo_prl02] G. Szabó, C. Hauert, Phase transitions and volunteering in spatial public goods games, Phys. Rev. Lett. 89 (2002) 118101.
[li_k_srep16] K. Li, A. Szolnoki, R. Cong, L. Wang, The coevolution of overconfidence and bluffing in the resource competition game, Sci. Rep. 6 (2016) 21104.
[szolnoki_pre18] A. Szolnoki, X. Chen, Reciprocity-based cooperative phalanx maintained by overconfident players, Phys. Rev. E 98 (2018) 022309.
[wang2023evolution] C. Wang, A. Szolnoki, Evolution of cooperation under a generalized death-birth process, Phys. Rev. E 107 (2023) 024303.
[szolnoki_pre09] A. Szolnoki, M. Perc, G. Szabó, H.-U. Stark, Impact of aging on the evolution of cooperation in the spatial prisoner's dilemma game, Phys. Rev. E 80 (2009) 021901.
[liu_rr_epl10] R.-R. Liu, Z. Rong, C.-X. Jia, B.-H. Wang, Effects of diverse inertia on scale-free-networked prisoner's dilemma games, EPL 91 (2010) 20002.
[zhang_yl_pre11] Y. Zhang, F. Fu, T. Wu, G. Xie, L. Wang, Inertia in strategy switching transforms the strategy evolution, Phys. Rev. E 84 (2011) 066103.
[wang2023inertia] C. Wang, A. Szolnoki, Inertia in spatial public goods games under weak selection, Appl. Math. Comput. 449 (2023) 127941.
[wang2023conflict] C. Wang, W. Zhu, A. Szolnoki, The conflict between self-interaction and updating passivity in the evolution of cooperation, Chaos, Solit. and Fract. 173 (2023) 113667.
[ohtsuki_jtb06] H. Ohtsuki, M. A. Nowak, The replicator equation on graphs, J. Theor. Biol. 243 (2006) 86–97.
[nowak_n04b] M. A. Nowak, A. Sasaki, C. Taylor, D. Fudenberg, Emergence of cooperation and evolutionary stability in finite populations, Nature 428 (2004) 646–650.
[mcavoy2020social] A. McAvoy, B. Allen, M. A. Nowak, Social goods dilemmas in heterogeneous societies, Nat. Human Behav. 4 (2020) 819–831.
[clifford1973model] P. Clifford, A. Sudbury, A model for spatial conflict, Biometrika 60 (1973) 581–588.
[cox1983occupation] J. T. Cox, D. Griffeath, Occupation time limit theorems for the voter model, Annals Prob. (1983) 876–893.
[cox1986diffusive] J. T. Cox, D. Griffeath, Diffusive clustering in the two dimensional voter model, Annals Prob. (1986) 347–370.
[allen2014games] B. Allen, M. A. Nowak, Games on graphs, EMS Surv. Math. Sci. 1 (2014) 113–151.
[nowak2010evolution] M. A. Nowak, C. E. Tarnita, E. O. Wilson, The evolution of eusociality, Nature 466 (2010) 1057–1062.
|
http://arxiv.org/abs/2307.04227v1 | 20230709163906 | Relaxed Equilibria for Time-Inconsistent Markov Decision Processes | [
"Erhan Bayraktar",
"Yu-Jui Huang",
"Zhenhua Wang",
"Zhou Zhou"
] | math.OC | [
"math.OC",
"60J10, 60J27, 91A11"
] |
|
http://arxiv.org/abs/2307.04520v1 | 20230710124155 | Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor | [
"San Jiang",
"Yichen Ma",
"Qingquan Li",
"Wanshou Jiang",
"Bingxuan Guo",
"Lelin Li",
"Lizhe Wang"
] | cs.CV | [
"cs.CV"
] |
Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor
San Jiang,
Yichen Ma,
Qingquan Li,
Wanshou Jiang,
Bingxuan Guo,
Lelin Li,
and Lizhe Wang
S. Jiang, Y. Ma, and L. Wang are with the School of Computer Science, China University of Geosciences, Wuhan 430074, China; S. Jiang is also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China, and with the Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430078, China. E-mail: [email protected], [email protected], [email protected]. (Corresponding author: Lizhe Wang)
Q. Li is with the College of Civil and Transportation Engineering, Shenzhen University, Shenzhen 518060, China, and also with the Guangdong Laboratory of Artificial Intelligence and Digital Economy (Shenzhen), Shenzhen 518060, China. E-mail: [email protected].
W. Jiang and B. Guo are with the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430072, China. E-mail: [email protected], [email protected].
L. Li is with the Provincial Key Laboratory of Geo-information Engineering in Surveying, Mapping and Remote Sensing, Hunan University of Science
and Technology, Xiangtan 411201, China. E-mail: [email protected].
August 12, 2023
SfM (Structure from Motion) has been extensively used for UAV (Unmanned Aerial Vehicle) image orientation. Its efficiency is directly influenced by feature matching. Although image retrieval has been extensively used for match pair selection, high computational costs are consumed due to a large number of local features and the large size of the used codebook. Thus, this paper proposes an efficient match pair retrieval method and implements an integrated workflow for parallel SfM reconstruction. First, an individual codebook is trained online by considering the redundancy of UAV images and local features, which avoids the ambiguity of training codebooks from other datasets. Second, local features of each image are aggregated into a single high-dimension global descriptor through the VLAD (Vector of Locally Aggregated Descriptors) aggregation by using the trained codebook, which remarkably reduces the number of features and the burden of nearest neighbor searching in image indexing. Third, the global descriptors are indexed via the HNSW (Hierarchical Navigable Small World) based graph structure for the nearest neighbor searching. Match pairs are then retrieved by using an adaptive threshold selection strategy and utilized to create a view graph for divide-and-conquer based parallel SfM reconstruction. Finally, the performance of the proposed solution has been verified using three large-scale UAV datasets. The test results demonstrate that the proposed solution accelerates match pair retrieval with a speedup ratio ranging from 36 to 108 and improves the efficiency of SfM reconstruction with competitive accuracy in both relative and absolute orientation.
Index Terms: structure from motion, 3D reconstruction, match pair selection, unmanned aerial vehicle, feature matching
§ INTRODUCTION
UAV (Unmanned aerial vehicle) images have become one of the primary data sources for surveying and mapping in photogrammetry and remote sensing (RS). Compared with satellite and aerial-based RS platforms, UAVs have the characteristics of high flexibility, high timeliness, and high resolution <cit.>. UAV images have been widely exploited in various applications, e.g., urban 3D modeling <cit.>, transmission line inspection <cit.>, and precision agriculture management <cit.>. With the increasing endurance of UAV platforms and the explosive usage of multi-camera instruments, efficient image orientation for large-scale UAV images has become one of the most critical modules for photogrammetric systems <cit.>.
SfM (Structure from Motion) has become a well-known technology for recovering camera poses and 3D points without the requirement of their good initial values <cit.>. SfM has been extensively adopted in 3D reconstruction <cit.> for both ordered and unordered UAV images. In the workflow of SfM, a view graph is a basic structure to guide feature matching and parameter solving, which is defined as an undirected weighted graph with the vertices and edges indicating images and their overlap relationships <cit.>. Retrieving match pairs is pre-required in view graph construction. The purpose of match pair retrieval is to find overlapped image pairs to guide subsequent feature matching, which increases the reliability and efficiency of SfM reconstruction. Thus, retrieving appropriate match pairs efficiently and accurately becomes one of the core issues in SfM for large-scale UAV images.
In the literature, existing methods for retrieving match pairs can be divided into two categories, i.e., prior knowledge-based and visual similarity-based methods. The former depends on prior information, such as the sequential constraint in data acquisition <cit.>, or on prior data from onboard POS (Positioning and Orientation System) sensors <cit.> to calculate image ground footprints. Although these methods are very efficient, their usage is limited to special configurations of data acquisition or depends on the precision of the prior data from the RS platforms used. Without relying on other auxiliary data, visual similarity-based methods merely use images to calculate similarity scores between two images and determine overlapped match pairs by selecting images with the highest similarity scores. The most commonly used solution is CBIR (Content-Based Image Retrieval). The core idea of CBIR is to encode detected local features, e.g., SIFT (Scale Invariant Feature Transform) <cit.>, into high-dimension vectors, and the problem of retrieving match pairs is then cast as calculating the similarity score between two of these high-dimension vectors <cit.>. In the fields of photogrammetry and computer vision, vocabulary tree <cit.> based image retrieval has become the most classic method that converts local features into high-dimension BoW (Bag-of-Words) vectors <cit.>.
In vocabulary tree-based image retrieval, the similarity calculation uses an inverted index that establishes the relationship between visual words and corresponding local features <cit.>. However, building the inverted index is time-consuming for high-resolution and large-size UAV images. On the one hand, high-resolution UAV images lead to tens of thousands of local features from an individual image, which causes high computational costs in searching the nearest visual word via ANN (Approximate Nearest Neighbor) searching; on the other hand, large-volume UAV images requires an extremely large codebook to increase the discriminative ability of aggregated BoW vectors, which causes the millions of vector dimensions and further increases computational costs in ANN searching. In addition, the codebook is usually created offline from public datasets due to the high time costs of generating a large codebook. Thus, this study proposes an efficient and accurate solution for match pair retrieval. The core idea is to adopt a global descriptor for image representation and explore graph indexing for efficient ANN searching of high-dimension vectors. Our main contributions are summarized: (1) An individual codebook is trained online using random selection and scale restriction strategies to reduce image and feature redundancies. (2) Local features of each image are aggregated into a high-dimension global descriptor through a VLAD (Vector of Locally Aggregated Descriptors) aggregation that extremely reduces the number of features and the burden of nearest neighbor searching in image indexing. (3) VLAD descriptors are indexed into an HNSW (Hierarchical Navigable Small World) based graph structure for the ANN (Approximate Nearest Neighbor) searching, and match pairs are retrieved using an adaptive threshold selection strategy, which is used to create a view graph to for divide-and-conquer based parallel SfM reconstruction. (4) The performance of the proposed solution is verified by using large-scale UAV images and compared with other well-known software packages.
The structure of this study is organized as follows. Section <ref> gives a literature review of match pair retrieval and nearest neighbor searching. Section <ref> presents detailed procedures of the proposed match pair retrieval algorithm and the workflow of the parallel SfM solution. Section <ref> conducts a comprehensive evaluation and comparison using UAV datasets. Finally, Section <ref> gives the conclusion of this study and improvements for future research.
§ RELATED WORK
This study focuses on match pair retrieval to improve the efficiency of SfM reconstruction. Thus, this section reviews match pair selection and nearest neighbor searching.
§.§ Prior knowledge-based methods
For photogrammetric data acquisition, there are usually two categories of prior knowledge, i.e., the configuration for data acquisition and the auxiliary data from onboard sensors. For the former, image match pairs are usually obtained according to the timestamp or data acquisition sequence <cit.>. According to this principle, Cheng et al. <cit.> proposed a strategy to connect sequential images for image localization and stereo-pair dense matching, which uses the optical images sequentially acquired by UAV to achieve the real-time 3D reconstruction of disaster areas. For the latter, image match pairs are usually obtained according to camera mounting angles or onboard POS (position and orientation system) data. Using the projection center of images, Rupnik et al. <cit.> searched the neighboring images close to the target image within the specified distance threshold. After acquiring the orientation data provided by the POS data of onboard navigation systems, image footprints on a specified elevation plane can be calculated, and image match pairs can be obtained through the pairwise intersection test between the image footprints <cit.>. In the work of <cit.>, ground coverages of images are calculated by using POS data, and image match pairs are determined by judging the intersection of ground coverages. Although these methods have high efficiency, their accuracy depends on the used prior knowledge.
§.§ Visual similarity-based methods
Compared with prior knowledge-based methods, these methods select match pairs using the images' content instead of prior knowledge. These methods can be grouped into two categories: the first is based on the number of matched correspondences, while the second uses the similarity score computed from image descriptors. For the former, two images are labeled as a valid match pair when the number of matches surpasses a threshold, such as the multi-scale strategy <cit.> and the preemptive matching strategy <cit.>. For the latter, images are quantified as descriptors, and the similarity score between two images is calculated as the distance between two descriptors. One of the most classic methods is vocabulary tree-based image retrieval <cit.>. Using a trained vocabulary tree, this method quantizes extracted local features into word frequency vectors, i.e., BoW (Bags-of-Words) vectors. The distance between the vectors represents the similarity score between the images <cit.>. These methods can quickly obtain correct match pairs on small datasets but become inefficient on large-scale datasets. In addition to the above-mentioned methods, neural network-based methods have been proposed recently. Yan et al. <cit.> proposed a match pair selection method based on the GCN (Graph Convolutional Network) and used it to judge whether overlapping areas exist between images. This method performed remarkably well on challenging datasets from ambiguous and duplicated scenes. However, its efficiency is very low for high-resolution UAV images.
§.§ Nearest neighbor searching
NN searching aims to find the vectors closest to a query vector among a large set of database vectors. In the context of match pair selection, the NN searching in vocabulary tree-based image retrieval is solved as an ANN searching problem, which determines the efficiency of image retrieval. In the literature, existing ANN searching methods can be divided into three categories, i.e., tree-based methods, hashing-based methods, and graph-based methods. Tree-based methods use a tree structure to partition the searching space; the KD-Tree is one of the most well-known such structures <cit.> and has been used extensively in image retrieval algorithms <cit.> and software packages, e.g., COLMAP <cit.> and AliceVision <cit.>, because of the relatively low dimension of the feature descriptors used, such as the 128-dimensional SIFT (Scale Invariant Feature Transform) descriptor. However, the efficiency of tree-based methods decreases dramatically for high-dimensional vectors, sometimes performing no better than brute-force searching. To increase ANN searching efficiency, hashing-based methods convert continuous real-valued vectors to discrete binary codes using hashing functions. In this category, LSH (Locality-Sensitive Hashing) attempts to hash similar vectors into the same cell with high probability <cit.>. Consequently, ANN searching can be executed within the cell that the query vector also falls in. Compared with tree-based methods, the hash operation reduces high-dimensional input vectors to low-dimensional terms by using a set of hash functions whose number is much smaller than the dimension of the input vectors. This helps avoid the curse of dimensionality that afflicts tree-based methods. Due to their high efficiency, LSH-based methods have been used for large-scale image retrieval, such as for web community and remote sensing images <cit.>. These methods, however, have lower precision caused by the binary hashing encoding, as well as high memory consumption for storing the hashing functions. In contrast to splitting the searching space, graph-based methods create a graph data structure to organize database vectors and achieve efficient ANN searching based on graph indexing. NSW (Navigable Small World) <cit.> and HNSW (Hierarchical NSW) <cit.> are two typical graph-based methods. NSW adopts an approximation of the Delaunay graph, with the same operation for vertex insertion and query. NSW achieves efficient and accurate searching thanks to long-distance edges created at the beginning, which form a navigable small world and reduce the number of hops. HNSW is an improved version of NSW that builds a multi-layer structure to speed up ANN searching. In the work of <cit.>, HNSW has been used to replace the KD-Tree in image retrieval, and a good acceleration of match pair selection has been achieved. However, unacceptable time consumption is still required for processing large-scale UAV images due to the large number of local features.
§ METHODOLOGY
This study proposes an efficient and accurate match pair retrieval method for large-scale UAV images and implements a parallel SfM solution guided by the view graph constructed from the retrieved match pairs. The core idea is to use global descriptors for image representation and to exploit a graph indexing structure for the ANN searching of high-dimension vectors. The workflow of the complete SfM reconstruction is shown in Figure <ref>, in which the inputs are UAV images without other auxiliary data. First, a codebook is trained online by selecting a subset of UAV images and scale-restricted features. Second, with the aid of the codebook, each image's local features are aggregated into a single high-dimension vector according to VLAD. Third, the VLAD vectors are indexed into an HNSW-based graph structure to achieve highly efficient ANN searching, and match pairs are retrieved based on the HNSW index and refined by using an adaptive selection strategy. Finally, after feature matching guided by the retrieved match pairs, a weighted view graph is constructed, which is used for the scene partition and parallel SfM reconstruction of large-scale UAV images.
§.§ Vocabulary tree-based image retrieval
Vocabulary tree-based image retrieval mimics text retrieval, which encodes a document as a feature vector by using trained words and casts document searching as a distance calculation between feature vectors <cit.>. The most important techniques are the inverted file for the word-image indexing and the TF-IDF (Term Frequency and Inverse Document Frequency) weighting of similarity scores <cit.>.
The workflow of vocabulary tree-based image retrieval consists of four major steps. First, local features with descriptors, e.g., SIFT, are extracted from training images; second, a vocabulary tree is hierarchically built from the extracted descriptors by using a clustering algorithm, e.g., K-means, whose leaf nodes indicate the generated visual words; third, all images are indexed by searching the nearest visual word for all extracted feature descriptors, and an inverted file is simultaneously built for each visual word, which establishes the indexing relationship between visual words and image features; finally, the same indexing operation is executed for an input query image, and the similarity score between the query and database images can be calculated by using their corresponding BoW vectors. Suppose there is a vocabulary with V words; each image can then be represented by a BoW vector v_d=(t_1,...,t_i,...,t_V). The component t_i is calculated according to Equation <ref>
t_i=n_id/n_dlogN/N_i
where n_id and n_d indicate the number of occurrences of the word i in the image d and the total number of words in image d, respectively; N_i is the number of images that contain word i, and N is the total number of images in the database. The component t_i includes two parts, i.e., the term frequency (TF) n_id/n_d and the inverse document frequency (IDF) log(N/N_i), which indicate the occurrence frequency of the word i in the image d and the importance of word i among database images, respectively. After generating the BoW vectors, the similarity score of any two images can be quantified as the dot product of their corresponding BoW vectors.
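To make the weighting concrete, a minimal Python sketch of the TF-IDF-weighted BoW vector and the dot-product score is given below. It is illustrative only (the paper's implementation is in C++), and the inputs `doc_freq` and `n_images` are assumed to come from the inverted file.

```python
import math
from collections import Counter

def bow_vector(word_ids, doc_freq, n_images):
    """TF-IDF-weighted BoW vector of one image (Equation above), as a
    sparse dict {word id: t_i}.

    word_ids : visual-word ids assigned to the image's local features
    doc_freq : dict word id -> number of database images containing it (N_i)
    n_images : total number of database images (N)
    """
    n_d = len(word_ids)                # total number of words in image d
    counts = Counter(word_ids)         # n_id for each word i
    return {i: (n_id / n_d) * math.log(n_images / doc_freq[i])
            for i, n_id in counts.items()}

def similarity(v1, v2):
    """Similarity score as the dot product of two sparse BoW vectors."""
    return sum(t * v2[i] for i, t in v1.items() if i in v2)
```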
As the number of database images increases, the efficiency of vocabulary tree-based image retrieval decreases dramatically, mainly because of the cost of building the inverted index. On the one hand, the high resolution of UAV images leads to a large number of extracted features that cause high computational costs in the ANN searching used to build the inverted file; on the other hand, with more database images, a larger codebook with more visual words must be used to increase the discriminative power of the BoW vectors, which further increases the burden of the ANN searching and the subsequent similarity calculation. Considering these issues, this study proposes an efficient image retrieval solution that combines the VLAD descriptor and HNSW indexing. The former aggregates local feature descriptors into a high-dimensional global vector using a very small codebook, which avoids the high computational costs of image indexing; the latter is utilized to accelerate the ANN searching for high-dimensional VLAD vectors. This study integrates the proposed solution into a parallel SfM workflow for large-scale image orientation. The details are described in the following sections.
§.§ Codebook generation considering image and feature redundancy
Local features are first detected from UAV images as training data. In recent years, UAVs have been capable of recording building facades and observing ground targets from multiple viewing directions. Due to the large differences in viewing directions and the obvious changes in illumination and scale, feature matching becomes non-trivial for oblique UAV images <cit.>. Considering these issues, the SIFT algorithm is used to extract local features. In this study, to balance the accuracy and efficiency of the subsequent match pair selection, 8,129 local features with the highest scales are extracted from each image, and the feature descriptors are represented as vectors of dimension 128.
Using the extracted local features, a codebook can be generated for the aggregation of local features into the VLAD descriptor. In general, there are two ways to generate a codebook, i.e., online generation for each dataset and offline generation for all datasets. While the second way accelerates online processing by not training an individual codebook, it cannot represent the characteristics of a specific dataset and provides inferior retrieval performance. Therefore, the optimal way is to generate a codebook for each individual UAV dataset <cit.>. However, generating a codebook can be very time-consuming because the large data volume and high spatial resolution of UAV images produce a very large number of descriptors. For UAV images, there are two kinds of redundancy. The first is image redundancy, due to the high overlap degree required to ensure the success of the subsequent image orientation; the second is feature redundancy, caused by the high spatial resolution of UAV images. These two kinds of redundancy can be exploited to reduce the number of descriptors used in codebook training. On the one hand, the number of visual words needed for VLAD aggregation is far smaller than that for BoW indexing <cit.>: only a very coarse quantization of the descriptor space is required. On the other hand, the characteristics of an image can be represented by a subset of its features with large scales. Thus, this study adopts a random sampling strategy to select a subset p of training images and a scale restriction strategy to select a subset h of descriptors with large scales. Based on the work <cit.>, the parameters p and h are set to 20% and 1,500, respectively.
After selecting the training descriptors, the codebook with k clusters is generated by using the K-means clustering algorithm <cit.>: 1) pick k cluster centers randomly; 2) assign each descriptor to its nearest cluster center; 3) calculate the mean vector of each cluster and use it as the new cluster center; 4) repeat steps 2) and 3) until a maximum number of iterations is reached or the algorithm converges. The resulting k cluster centers constitute the codebook C={c_1,c_2,c_3,...,c_k}. The number of cluster centers k is closely related to the performance of the match pair retrieval algorithm. On the one hand, the accuracy of match pair retrieval is reduced when k is too small; on the other hand, when k is too large, the codebook consumes more memory, and the efficiency of the subsequent feature aggregation and image retrieval decreases. Thus, a proper choice of k is important for match pair retrieval.
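As a concrete illustration of steps 1)-4), a minimal NumPy sketch of Lloyd's algorithm is given below. It is not the paper's C++ implementation, and the brute-force distance computation is only suitable for the sampled training set.

```python
import numpy as np

def train_codebook(descriptors, k, n_iter=50, seed=0):
    """Lloyd's K-means over the sampled SIFT descriptors (steps 1-4).

    descriptors : (N, 128) array; returns the (k, 128) codebook C.
    """
    rng = np.random.default_rng(seed)
    desc = descriptors.astype(np.float64)
    centers = desc[rng.choice(len(desc), size=k, replace=False)]    # step 1)
    for _ in range(n_iter):                                         # step 4)
        # step 2): assign each descriptor to its nearest cluster center
        d2 = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # step 3): recompute each center as the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = desc[labels == j].mean(axis=0)
    return centers
```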
§.§ Adaptive match pair retrieval via global descriptor and graph indexing
§.§.§ Global descriptor from the aggregation of local features
Several solutions have been designed for aggregating local features into global vectors, e.g., BoW, which counts the term frequency of words. However, the number of words in the trained codebook must grow with the number of involved images, which causes high time costs for large-scale image indexing. Instead of counting term frequencies, VLAD accumulates the residuals between local feature descriptors and their corresponding cluster centers and achieves high discriminative power using a very small codebook. Based on this observation, this study uses VLAD to aggregate local features into global descriptors <cit.>.
For the N extracted local features of an image, the VLAD descriptor is obtained by iterating over the feature descriptors assigned to each cluster center and calculating the sum of the residuals between these feature descriptors and the cluster center. The final VLAD descriptor is a concatenation of the residual vectors generated from all cluster centers. Supposing that there are k cluster centers in the trained codebook C, the VLAD descriptor v consists of k vectors with the same dimension d=128 as the SIFT descriptor. The calculation of an element v_k,j of the VLAD descriptor v is presented by Equation <ref>
v_k,j=∑_i=1^Na_k(d_i)(d_i(j)-c_k(j))
where j is the dimension index of the feature descriptors, i.e., j=1,2,...,d; a_k(d_i) is an indicator function: when the feature descriptor d_i belongs to the visual word c_k, a_k(d_i)=1; otherwise, a_k(d_i)=0. Based on this formulation, an image is represented as a k× d VLAD descriptor. Compared with the BoW vector, the VLAD descriptor uses residual vectors to encode the input image. To generate a feature vector of the same dimension, far fewer visual words are required in the trained codebook, by a factor equal to the dimension d=128 of the descriptors used. Besides, component-wise and global L2-normalization are sequentially applied to the generated VLAD descriptors. Noticeably, the VLAD aggregation can be executed in parallel because it is independent for each cluster center.
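The aggregation and the two normalization steps can be sketched as follows in NumPy (an illustration of the equation above, not the authors' C++ code):

```python
import numpy as np

def vlad(descriptors, centers):
    """Aggregate the (N, d) local descriptors of one image into a VLAD
    vector of length k*d, with component-wise and global L2-normalization."""
    k, d = centers.shape
    labels = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
    v = np.zeros((k, d))
    for j in range(k):              # independent per center -> parallelizable
        if np.any(labels == j):
            # sum of residuals between the descriptors of word j and c_j
            v[j] = (descriptors[labels == j] - centers[j]).sum(axis=0)
    v /= np.maximum(np.linalg.norm(v, axis=1, keepdims=True), 1e-12)  # component-wise
    v = v.ravel()
    return v / max(np.linalg.norm(v), 1e-12)                          # global
```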
§.§.§ Match pair retrieval based on Graph-indexed global descriptors
Match pairs can be selected by nearest neighbor searching between VLAD descriptors. Recently, graph-based solutions have attracted considerable attention because of their high precision and promising efficiency when dealing with high-dimension descriptors. HNSW <cit.> is one of the best-known graph-based search algorithms; it is built upon the NSW (Navigable Small World) search method <cit.>. HNSW uses a hierarchical structure to build a vector index graph and increase retrieval efficiency, mimicking a coarse-to-fine searching strategy. The bottom layer includes all vertices, and the number of vertices decreases gradually from the bottom to the top layers. In the retrieval stage, after the entry of the query vector, the HNSW index is searched from top to bottom, which restricts the search for the next nearest neighbor to the child nodes in the next layer. The nearest neighbors found in the bottom layer are the retrieval results. Thus, HNSW is used in this study for high-dimensional VLAD vector indexing and match pair retrieval. The VLAD descriptors are first organized into a graph structure G={V, E}, in which V and E respectively represent the vertex set composed of the VLAD descriptors and the edge set composed of their connection relationships. To achieve efficient indexing and retrieval, the maximum number of connections of each vertex is restricted to M, termed the friend number. This parameter M influences the efficiency and precision of image retrieval.
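Since the implementation adopts the HNSW index of the FAISS package (see the algorithm implementation section), the indexing step can be sketched as follows; this is a minimal illustration rather than the actual code, and `M=32` below simply reflects the value selected in the parameter study.

```python
import numpy as np
import faiss

def build_hnsw_index(vlad_matrix, M=32):
    """Index the (n, D) VLAD matrix in an HNSW graph with friend
    number M; retrieval then uses index.search()."""
    index = faiss.IndexHNSWFlat(vlad_matrix.shape[1], M)
    index.add(np.ascontiguousarray(vlad_matrix, dtype=np.float32))
    return index

# distances, ids = index.search(queries, n_retrieved)
# 'distances' are (squared) Euclidean distances, to be converted to
# similarity scores by the adaptive selection described below.
```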
In match pair retrieval, the number of returned items must be specified carefully. The optimal value should adapt to the data acquisition configuration, and it is mainly affected by the image overlap degree: it varies between data acquisitions and between individual UAV images. However, it is usually set to a fixed number or ratio in the classical image retrieval pipeline. In this study, an adaptive strategy is adopted to select the number of retrieved images <cit.>. The core idea originates from the fact that images with larger overlap areas have higher similarity scores, and the similarity scores decrease dramatically as the overlap area shrinks; image pairs without overlap areas have very small similarity scores that show no obvious variation, as illustrated in Figure <ref>. Thus, the distribution of similarity scores is fitted well by a power function with coefficients a and b, as presented by Equation <ref>
y=ax^b
where x and y indicate the image ids and similarity scores, respectively. Using the mean μ and standard deviation δ of the similarity scores between one query and the database images, a horizontal separation line y=μ+kδ can be defined, and the database images with similarity scores above the separation line are labeled as the retrieval results. Noticeably, HNSW-based image retrieval returns Euclidean distances rather than similarity scores. In this study, an inverse linear normalization is used to calculate similarity scores. Suppose that m items are retrieved with distances D={d_1,d_2,d_3,...,d_m}; the similarity score is then calculated based on Equation <ref>
s_i=d_max-d_i/d_max-d_min
where d_min and d_max indicate the minimal and maximal values in D, respectively. This equation thus converts the Euclidean distance into a similarity score that ranges from 0 to 1. Besides, the separation line y=μ+kδ is mainly influenced by the mean μ and the standard deviation δ. As the number of samples used to fit the power function increases, the separation line y goes down and more retrieved results are retained. Thus, based on practical experience, the number of used samples is set to 300 in this study.
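A minimal sketch of the adaptive selection is given below; the constant `k` of the separation line is an assumed tuning parameter, as its value is not stated explicitly in the text.

```python
import numpy as np

def adaptive_select(distances, k=1.0):
    """Adaptive selection of retrieved items from HNSW distances.

    Distances are converted to similarity scores by the inverse linear
    normalization of the equation above; the items whose score lies above
    the separation line y = mu + k*delta are kept.
    """
    d = np.asarray(distances, dtype=float)
    s = (d.max() - d) / max(d.max() - d.min(), 1e-12)  # similarity in [0, 1]
    mu, delta = s.mean(), s.std()
    return np.flatnonzero(s > mu + k * delta)
```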
§.§ View graph construction from retrieved match pairs
False match pairs inevitably exist because of repetitive image patterns and non-optimal parameters in image retrieval. In this study, local feature matching and geometric verification are conducted to filter out false matches. Guided by the initial match pairs, local feature matching is performed by finding the nearest neighbors between two sets of features based on the Euclidean distance of their descriptors, together with cross-checking and the ratio test. To further refine the initial matches, the epipolar constraint based on the fundamental matrix, which can be robustly estimated in the RANSAC (Random Sampling Consensus) framework <cit.>, is utilized to remove false matches. Finally, the match pairs whose number of refined matches is greater than 15 are retained.
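For illustration, the geometric verification of one putative pair could look as follows with OpenCV's RANSAC-based fundamental-matrix estimator (a sketch, not the paper's implementation; the 1.0-pixel threshold follows the setting reported in the experiments):

```python
import cv2
import numpy as np

def verify_pair(pts1, pts2, min_matches=15):
    """Epipolar verification of the putative matches of one pair.

    pts1, pts2 : (n, 2) float32 arrays of matched keypoint coordinates.
    Returns the inlier matches if the pair is retained, otherwise None.
    """
    if len(pts1) < 8:                 # at least 8 correspondences for F
        return None
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                     ransacReprojThreshold=1.0)
    if F is None or mask is None:
        return None
    inliers = mask.ravel().astype(bool)
    if inliers.sum() <= min_matches:  # keep pairs with more than 15 matches
        return None
    return pts1[inliers], pts2[inliers]
```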
A view graph can be created using the retained match pairs and their feature matches. In this study, the view graph is represented as an undirected weighted graph G={V, E}, in which V and E indicate the vertex set and the edge set, respectively <cit.>. Suppose that I={i_i} and P={p_ij} are respectively the n images and the m match pairs. The graph G is constructed as follows: a vertex v_i is added for each image i_i, and all vertices form the vertex set V={v_i}; an edge e_ij connecting vertex v_i and vertex v_j is added for each matched pair p_ij, and all edges form the edge set E={e_ij}. To quantify the importance of match pairs, an edge weight w_ij is assigned to each edge e_ij. In the context of SfM-based image orientation, the number of feature matches and their distribution over the image planes directly influence the overall performance. Thus, w_ij is calculated by Equation <ref>
w_ij=R_ew× w_inlier+(1-R_ew)× w_overlap
where R_ew is the weight ratio between w_inlier and w_overlap, which is set as 0.5 similar to the work in <cit.>. w_inlier is the weight item related to the number of feature matches; w_overlap is the weight item related to the distribution of feature matches. These two items are calculated respectively according to Equations <ref> and <ref>
w_inlier = log(N_inlier)/log(N_max_inlier)
w_overlap=CH_i+CH_j/A_i+A_j
where N_inlier and N_max_inlier indicate the number of matched correspondences of the match pair and the maximum number of matched correspondences among all match pairs, respectively; CH_i and CH_j represent the convex hull areas of the feature matches over the two images; A_i and A_j represent the areas of the two image planes. In our study, the Graham-Andrew algorithm <cit.> is used to compute the convex hulls of the feature matches.
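The edge weighting of the equations above can be sketched as follows; note that SciPy's Qhull-based `ConvexHull` is used here in place of the Graham-Andrew algorithm of the actual implementation, and that `ConvexHull.volume` returns the enclosed area for 2-D point sets.

```python
import numpy as np
from scipy.spatial import ConvexHull

def edge_weight(pts_i, pts_j, n_max_inlier, area_i, area_j, r_ew=0.5):
    """Weight w_ij of the view-graph edge e_ij (equations above).

    pts_i, pts_j : (n, 2) inlier match coordinates on the two image planes
    (n >= 3 non-collinear points are needed for a convex hull);
    n_max_inlier : maximum match count over all pairs;
    area_i, area_j : image-plane areas in pixels.
    """
    w_inlier = np.log(len(pts_i)) / np.log(n_max_inlier)
    # ConvexHull.volume is the enclosed area for 2-D point sets
    w_overlap = (ConvexHull(pts_i).volume + ConvexHull(pts_j).volume) \
                / (area_i + area_j)
    return r_ew * w_inlier + (1.0 - r_ew) * w_overlap
```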
§.§ Parallel SfM reconstruction guided by view graph
In this study, an incremental SfM is used to estimate camera poses and scene structures. Incremental SfM, however, suffers from low efficiency due to the sequential registration of images and the iterative local and global bundle adjustment. For large-scale scenes, this issue becomes very pronounced and limits the applications of SfM in recent photogrammetric systems. To overcome this problem, this study adopts a divide-and-conquer strategy to split the large-scale reconstruction into small sub-reconstructions. The sub-reconstructions can then be handled well, and parallel techniques can also be utilized to improve efficiency. Figure <ref> illustrates the basic principle of the designed parallel SfM solution <cit.>, which includes four major steps, described as follows (a schematic sketch is given after the list):
* First, after creating the view graph G, the scene is divided into small-size clusters {G_i} with strong inner connections. The scene clustering is implemented through the NC (Normalized Cut) algorithm <cit.>, which removes the edges with smaller weights and ensures the good connection of vertices in each cluster.
* Second, an incremental SfM engine is then executed parallelly for each cluster G_i, which generates an individual model for each cluster. In this study, the well-known incremental SfM engine, COLMAP <cit.>, has been utilized to implement the parallel reconstruction of each cluster.
* Third, cluster merging is performed by iteratively merging two sub-models, which converts the individual models into a single model in a common global coordinate system. In this step, the merging order is critical as it affects the robustness and precision of cluster merging. In this study, the number of common 3D points between models is used to sort the merging order, which can be calculated efficiently through a correspondence graph established between two clusters <cit.>.
* Finally, a final global bundle adjustment is executed for the merged global model. Since the number of optimization parameters would be very large, a tie-point selection strategy is adopted to decrease the number of 3D points in BA optimization. As documented in <cit.>, tie-point selection is achieved based on four metrics, i.e., re-projection error, overlap degree, image coverage, and number limitation.
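The following schematic sketch summarizes the four steps; every helper function in it (`normalized_cut`, `run_incremental_sfm`, `pair_with_most_common_points`, `merge_models`, `select_tie_points`, `global_bundle_adjustment`) is a hypothetical placeholder for the corresponding component described above, not a real API.

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_sfm(view_graph, max_cluster_size=500):
    """Schematic divide-and-conquer SfM driver following steps 1-4."""
    clusters = normalized_cut(view_graph, max_cluster_size)        # step 1
    with ProcessPoolExecutor() as pool:                            # step 2
        models = list(pool.map(run_incremental_sfm, clusters))     # e.g. COLMAP
    while len(models) > 1:                                         # step 3
        # merge the two sub-models sharing the most common 3D points first
        a, b = pair_with_most_common_points(models)
        models.remove(a); models.remove(b)
        models.append(merge_models(a, b))
    select_tie_points(models[0])                                   # step 4
    return global_bundle_adjustment(models[0])
```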
§.§ Algorithm implementation
This study implements the solution of match pair retrieval and parallel SfM reconstruction in the C++ programming language, as presented in Algorithm <ref>. In detail, for feature extraction, the SIFTGPU <cit.> library is used with the default parameter setting; for the generation of the codebook, Lloyd's K-means clustering algorithm <cit.> is used; in addition, we have implemented an algorithm for the aggregation of SIFT features into VLAD descriptors and adopted the HNSW algorithm in the FAISS package <cit.> for graph indexing; based on our previous work <cit.>, we have embedded the match pair retrieval and view graph construction method into the parallel SfM workflow, in which the software package COLMAP <cit.> is selected as the incremental SfM engine.
§ EXPERIMENTS AND RESULTS
In the experiments, three UAV datasets have been collected to evaluate the performance of the proposed solution. First, based on the efficiency and precision of match pair selection, we analyze the influence of the key parameters, i.e., the number of cluster centers k for the codebook generation and the maximum number of neighboring vertices M in HNSW. Second, we conduct the match pair selection and SfM-based 3D reconstruction of the three UAV datasets using the selected parameter setting. Third, we compare the proposed SfM solution with four well-known software packages, i.e., two open-source packages, COLMAP <cit.> and DBoW2 <cit.>, and two commercial packages, Agisoft Metashape and Pix4Dmapper, to evaluate the performance of match pair selection and SfM reconstruction. In this study, all experiments are executed on a Windows desktop computer with 64 GB memory, four Intel 2.40 GHz Xeon E5-2680 CPUs, and one 10 GB NVIDIA GeForce RTX 3080 graphics card.
§.§ Test sites and datasets
Three UAV datasets with different sizes are used for the performance evaluation. Figure <ref> shows the sample images in each dataset, and the detailed information is listed in Table <ref>. The description of each dataset is presented as follows:
* The first dataset consists of 3,743 images taken from a university campus covered by dense, low-rise buildings. The dataset is captured by a DJI Phantom 4 RTK UAV equipped with one DJI FC6310R camera. The images, with 5,472 by 3,648 pixels, are collected at a flight height of 80 m, and the GSD (Ground Sample Distance) is approximately 2.6 cm.
* The second dataset includes 4,030 images taken from a complex university building. It is captured using a DJI M300 RTK UAV equipped with one DJI Zenmuse P1 camera with a dimension of 8,192 by 5,460 pixels. It is worth mentioning that this dataset has been collected based on optimized views photogrammetry <cit.>, which adjusts camera viewpoints and directions according to the geometry of the ground objects. The GSD is approximately 1.2 cm. For absolute orientation, 26 GCPs (Ground Control Points) were collected using a total station, whose nominal accuracy is about 0.8 and 1.5 cm in the horizontal and vertical directions, respectively.
* The third dataset is recorded by a penta-view oblique photogrammetric instrument equipped with five SONY ILCE 7R cameras with 6,000 by 4,000 pixels. This test site is mainly covered by low-rise buildings and dense vegetation, and a river crosses the test site. At a flight height of 87.1 m, a total of 21,654 images has been collected with a GSD of 1.21 cm.
§.§ The influence of parameters K and M
For the proposed match pair retrieval solution, two critical parameters directly influence the efficiency and precision of image indexing and retrieval, i.e., the visual word number k in the generation of the trained codebook and the friend number M in the graph-based indexing. The former determines the dimension of the VLAD vectors; the latter determines the maximum number of connections of each vertex to others in the HNSW graph. Thus, this section analyzes their influence on retrieval efficiency and precision.
For the evaluation, dataset 1 has been selected, and two metrics are used for the performance evaluation: retrieval efficiency and precision. The retrieval efficiency is the total time consumed in match pair selection; the retrieval precision is calculated as the ratio between the number of correct match pairs and the number of all match pairs. In this test, the retrieval time includes the time costs of the VLAD-based feature aggregation, the HNSW-based graph construction, and the image retrieval. To avoid the influence of the adaptive selection, the retrieval number is fixed at 30, and match pairs with at least 15 true matches are defined as positive results.
For the analysis of the parameter k, the values of 32, 64, 128, 256, 512, and 1024 are tested. Figure <ref> presents the statistical results of efficiency and precision in the match pair selection, in which Figure <ref> and Figure <ref> respectively show the efficiency and the precision. It is clearly shown that with the increase of k, the time costs grow rapidly, from 45.7 seconds to 175.5 seconds as the value ranges from 32 to 1024. The main reason is that a larger k leads to higher time costs in the nearest cluster center searching for the VLAD feature aggregation and increases the dimension of the generated VLAD descriptors, which further burdens the HNSW graph indexing and retrieval. On the contrary, we can observe that the retrieval precision increases steadily with the parameter k, from 0.81 to 0.94 within the specified span. To balance efficiency and precision, the parameter k is set to 256 in the following tests.
For the analysis of the parameter M, the values of 6, 8, 10, 12, 16, 32, and 64 are used, and the statistical results are presented in Figure <ref>. We can see that: (1) the trend of the retrieval efficiency in Figure <ref> can be divided into two parts: in the first part, the retrieval efficiency is almost constant as M increases from 6 to 16; in the second part, the retrieval efficiency decreases dramatically as M increases from 16 to 64; (2) the trend of the retrieval precision in Figure <ref> can be separated into three stages: in the first stage, the retrieval precision increases noticeably as M increases from 6 to 8; in the second stage, the retrieval precision remains constant within the value range from 8 to 16; in the third stage, the retrieval precision decreases gradually within the value range from 16 to 64. It is worth mentioning that k has a greater impact on retrieval efficiency than M because most of the time costs are spent in the VLAD aggregation. Besides, M affects the number of valid NN neighbors that can be retrieved. Considering that at least 300 valid NN neighbors should be retrieved for the adaptive selection, the parameter M is set to 32 in the following tests.
§.§ Match pairs selection and 3D reconstruction
§.§.§ Match pairs selection by the proposed retrieval method
Using the selected parameters k and M, the performance of match pair selection is first evaluated. As before, retrieval efficiency and precision are used as the metrics for the performance evaluation. Table <ref> lists the statistical results of the match pair selection. It is clearly shown that high retrieval precision has been achieved, namely 90.1%, 89.9%, and 94.4% for the three datasets, respectively. This ensures that a very large proportion of the selected match pairs are overlapping images. Figure <ref> shows the results of our method when retrieving similar images for two sample images from datasets 1 and 3. It can be seen that all the retrieved images are true positive results. In addition, the time costs of match pair selection are 2.5 mins, 2.6 mins, and 12.4 mins for the three datasets, respectively, which corresponds to average per-image time costs of approximately 0.040 secs, 0.039 secs, and 0.034 secs. Thus, we can conclude that the proposed solution can achieve linear time complexity in image indexing and retrieval and can process large-scale UAV datasets for efficient match pair selection.
§.§.§ Parallel 3D reconstruction guided by the weighted view graph
The selected match pairs are then used to guide feature matching. In this study, feature matching is achieved by searching approximate nearest neighbors, refined by the widely used ratio test and cross-checking. The initial matches are then verified by the epipolar constraint, implemented through the estimation of the fundamental matrix within the RANSAC framework. In this study, the ratio-test threshold is set to 0.8, the default value in the SIFTGPU library, and the maximum distance threshold is configured as 1.0 pixel to ensure a high inlier ratio of feature matching. Using the feature matching results, a view graph represented as an undirected weighted graph can be constructed for each dataset, whose vertices and edges represent the images and their connection relationships, respectively. As presented in Figure <ref>, three view graphs are created for the three UAV datasets, in which vertices and edges are rendered as red dots and gray lines, respectively. There are 59,014, 65,743, and 353,005 match pairs selected from the three datasets, respectively. The dense edges between vertices indicate a strong connection between images, which ensures the success of the SfM-based image orientation.
To achieve the parallel SfM reconstruction, the entire view graph is then divided into small sub-clusters with strong inner-edge connections. In the proposed parallel SfM workflow, the normalized cut algorithm is utilized for scene clustering, and the largest size of each sub-cluster is set to 500. The scene partition results are illustrated in Figure <ref>, Figure <ref>, and Figure <ref>. We can see that 8, 9, and 44 sub-clusters are generated for the three datasets. Each cluster is rendered in an identical color, which verifies the compact connections within each cluster. Based on the sub-clusters, parallel SfM is executed to create the sub-reconstructions that are finally merged into the entire reconstruction. Table <ref> shows the statistical results of the 3D reconstruction, in which the metrics precision and completeness refer to the re-projection error of the BA optimization and to the numbers of oriented images and reconstructed 3D points, respectively. We can see that the precision for the three datasets is 0.542 pixels, 0.668 pixels, and 0.752 pixels, respectively, and almost all images are oriented successfully, their numbers being 3,724, 4,029, and 21,568, respectively. For visualization, Figure <ref>, Figure <ref>, and Figure <ref> show the reconstructed 3D points of the three datasets. The reconstructed 3D points cover the whole test sites. Thus, the proposed solution can create stable view graphs to achieve parallel SfM.
§.§ Performance comparison with the other software packages
§.§.§ Match pair selection
The proposed solution is compared with the BoW retrieval method in COLMAP and the DBoW2 retrieval method to evaluate the performance of match pair selection. The statistical results are presented in Figure <ref>. It is clearly shown that, compared with BoW and DBoW2, the proposed solution achieves the highest efficiency, with time costs of 2.5 min, 2.6 min, and 12.4 min for the three datasets. Especially for dataset 3, the time costs of BoW and DBoW2 reach 1335.5 mins and 2848.3 mins, respectively, which is unacceptable in practice. By observing the results presented in Figure <ref>, we can see that BoW achieves the highest precision, namely 90.3%, 92.1%, and 97.6% for the three datasets, respectively. The proposed solution ranks second with a precision of 90.1%, 89.9%, and 94.4% for the three datasets, which is higher than DBoW2. In conclusion, compared with BoW, the proposed solution achieves comparable precision with speedup ratios ranging from 36 to 108 for the three UAV datasets.
§.§.§ SfM-based reconstruction
To evaluate the performance of the complete SfM-based reconstruction workflow, the proposed solution is further compared with two commercial software packages, Agisoft Metashape and Pix4Dmapper. Agisoft Metashape uses multi-scale matching and GNSS data for match pair selection; Pix4Dmapper provides a vocabulary tree-based image retrieval. In this test, the camera intrinsic parameters are calibrated and fixed in SfM, and the match pairs selected by BoW and DBoW2 are fed into the proposed parallel SfM for reconstruction. Besides, the 26 GCPs of the second dataset are used to evaluate the geo-referencing accuracy. In the following tests, the metric efficiency indicates the time costs of the SfM reconstruction excluding feature matching.
Table <ref> presents the statistical results of the SfM reconstruction without GCPs. It is shown that BoW, DBoW2, and the proposed solution have almost the same efficiency because they use the same SfM engine. Although Metashape and Pix4Dmapper can achieve the reconstruction of datasets 1 and 2, their efficiency is lower, which further verifies the advantage of the parallel SfM workflow. Noticeably, Metashape and Pix4Dmapper fail to reconstruct dataset 3 since the large data volume causes out-of-memory errors during reconstruction. Considering the metric precision, Pix4Dmapper achieves the highest performance, followed by BoW, DBoW2, and the proposed solution. For the metric completeness, comparable performance can be observed for all evaluated software packages except Pix4Dmapper, which is mainly caused by the relatively low precision of its image retrieval.
Absolute bundle adjustment with GCPs is further executed to evaluate the geo-referencing accuracy of the reconstructed models. In this test, three GCPs that are evenly distributed over test site 2 are utilized for the geo-referencing of the SfM-reconstructed models, and the others are used as check points (CPs). For the performance evaluation, two metrics, i.e., the mean and std.dev. of the CP residuals, are used. In addition, Pix4Dmapper has been selected as the baseline among the commercial software packages.
Table <ref> presents the statistical results of the absolute BA. It is shown that among all evaluated software packages, Pix4Dmapper achieves the highest accuracy with std.dev. values of 0.013 cm, 0.016 cm, and 0.019 cm in the X, Y, and Z directions, respectively. Although BoW ranks second in the vertical direction with a std.dev. of 0.036 cm, its horizontal accuracy is lower than that of the proposed solution, whose std.dev. is 0.029 cm and 0.026 cm in the X and Y directions, respectively, which can also be verified by the residual plots presented in Figure <ref> and Figure <ref>. Due to the low precision of its match pair selection, the geo-referencing accuracy of DBoW2 is the lowest in the X and Z directions, as shown in Figure <ref> and Figure <ref>. Thus, we can conclude that the proposed solution provides the necessary and accurate match pairs to achieve reliable SfM reconstruction with markedly higher efficiency.
§ CONCLUSIONS
In this paper, we proposed a workflow that integrates match pair retrieval and parallel SfM reconstruction to achieve the efficient and accurate 3D reconstruction of large-scale UAV images. The core idea of the match pair selection is to aggregate many local features into high-dimensional global vectors that can then be indexed through a graph-based structure for efficient ANN searching. Guided by the selected match pairs, a weighted view graph is created to achieve parallel SfM through graph clustering and sub-model merging. The tests demonstrate that the proposed workflow can significantly accelerate the match pair selection, with speedup ratios of tens to hundreds, and increase the efficiency of the SfM-based reconstruction while producing comparable results.
Some observations and possible limitations have also been identified in this study. First, the precision of the match pair selection is strongly influenced by the number of words in the codebook generated through K-means clustering, as shown in Section <ref>. At the same time, a large k also decreases the image retrieval efficiency. Thus, it is non-trivial to trade off precision against efficiency, especially for large-scale datasets. Second, hand-crafted local features, i.e., SIFT, are adopted for image retrieval because of their high tolerance to scale and viewpoint changes. However, deep learning-based feature detectors have attracted considerable attention in the fields of image retrieval <cit.> and feature matching <cit.> due to their excellent representation learning ability. Therefore, it is rational to use learned descriptors to enhance the image retrieval and feature matching algorithms in the proposed workflow. Third, only the CPU is used in the implemented algorithm, which could be further accelerated using GPU parallel computing techniques. In future research, we will conduct more tests on selecting high-quality match pairs with high efficiency by exploiting learned feature descriptors and GPU acceleration techniques.
§ ACKNOWLEDGMENTS
This research was funded by the National Natural Science Foundation of China (Grant No. 42001413), the Open Research Fund from the Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ) (Grant No. GML-KF-22-08), the Open Research Project of The Hubei Key Laboratory of Intelligent Geo-Information Processing (Grant No. KLIGIP-2021B11), and the Provincial Natural Science Foundation of Hunan (Grant No. 2023JJ30232).
|
http://arxiv.org/abs/2307.04665v1 | 20230710160305 | Peculiarities of beta functions in sigma models | [
"Oleksandr Gamayun",
"Andrei Losev",
"Mikhail Shifman"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech"
] | |
http://arxiv.org/abs/2307.05983v1 | 20230712075935 | The Horton-Strahler number of Galton-Watson trees with possibly infinite variance | [
"Robin Khanfir"
] | math.PR | [
"math.PR"
] |
The Horton-Strahler number, also known as the register function, provides a tool for quantifying the branching complexity of a rooted tree. We consider the Horton-Strahler number of critical Galton-Watson trees conditioned to have size n and whose offspring distribution is in the domain of attraction of an α-stable law with α∈ [1,2]. We give tail estimates and when α≠ 1, we prove that it grows as 1/αlog_α/(α-1)n in probability. This extends the result in Brandenberger, Devroye & Reddad <cit.> dealing with the finite variance case for which α = 2. We also characterize the cases where α = 1, namely the spectrally positive Cauchy regime, which exhibits more complex behaviors.
Keywords Horton-Strahler number · Register function · Galton-Watson trees · tail estimates · stable laws
Mathematics Subject Classification 60C05 · 60J80 · 60F05 · 05C05 · 60E07
§ INTRODUCTION
The Horton-Strahler number of a finite rooted tree is an integer that quantifies its branching complexity. One possible formal definition is given recursively as follows.
Let t be a finite rooted tree. Its Horton-Strahler number 𝒮(t) is defined as follows.
(a) If t reduces to a single node, then 𝒮(t)=0.
(b) Otherwise, 𝒮(t) is the maximum of the Horton-Strahler numbers of the subtrees t_1,...,t_k that are attached to the root, plus one if that maximum is not uniquely achieved. Namely,
𝒮(t)=max_1≤ i≤ k𝒮(t_i)+1_{#argmax_1≤ i≤ k𝒮(t_i)≥ 2} .
□
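The recursion of this definition translates directly into code; as an illustration (not part of the original text), here is a minimal Python sketch in which a tree is encoded as the nested list of its root's subtrees:

```python
def strahler(t):
    """Horton-Strahler number of a finite rooted tree t, encoded as a
    nested list: a leaf is [] and a node is the list of its subtrees."""
    if not t:                        # single node: S(t) = 0
        return 0
    s = [strahler(child) for child in t]
    m = max(s)
    # add one when the maximum is achieved by at least two subtrees
    return m + (s.count(m) >= 2)

# e.g. strahler([[], []]) == 1, and the perfect binary tree of height 2,
# strahler([[[], []], [[], []]]) == 2, has Strahler number equal to its height.
```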
Alternatively, 𝒮(t) is also the height of the largest perfect binary tree that can be embedded into t (see Section <ref> for more details). In this article, we provide estimates on the Horton-Strahler number of critical Galton-Watson trees conditioned to be large. Before discussing our results precisely, let us provide a brief history with general references on related topics.
Background. The Horton-Strahler number was introduced independently by the two hydrogeologists Horton <cit.> and Strahler <cit.> to obtain quantitative empirical laws about river systems, that are represented by trees whose leaves are springs and whose root corresponds to the outlet of the basin. Many key physical characteristics of stream networks have been since modeled with the help of this number: see e.g. Peckham <cit.>, Fac-Beneda <cit.>, Chavan & Srinivas <cit.> and Bamufleh et al. <cit.>. The Horton-Strahler number appears independently in other scientific fields (anatomy, botany, molecular biology, physics, social network analysis, etc). In computer science, it is used
to optimize the amount of memory or time needed to manipulate data structures. It is sometimes called the register function in this context because the minimum number of registers needed to evaluate an arithmetic expression A is equal to the Horton-Strahler number of the syntax tree of A. We refer to Viennot <cit.> for a survey on those various applications.
The Horton-Strahler number is encountered in many areas of mathematics: see for instance Esparza, Luttenberger & Schlund <cit.> for connections with mathematical logic, formal language theory, algebra, combinatorics, topology, approximation theory, and more. In the probability area, let us mention Kovchegov & Zaliapin <cit.>, which
considers the Horton-Strahler number through the prism of pruning operations on trees.
Here, we rather focus on probabilistic works that discuss the Horton-Strahler number of uniform samples of standard families of combinatorial trees. Flajolet, Raoult & Vuillemin <cit.> and Kemp <cit.>
consider the Horton-Strahler number of a uniform random ordered rooted binary tree T_n with n leaves (a uniform n-Catalan tree) and they prove that
𝔼[𝒮(T_n)]=log_4 n+D(log_4 n)+o(1)
as n→∞, where log_b x=ln x/ln b stands for the logarithm of x to the base b, and D is a 1-periodic continuous function. Here, all the random variables that we consider are defined on the same probability space (Ω, ℱ, ℙ) whose expectation is denoted by 𝔼. In particular, 𝒮(T_n) is subject to deterministic oscillations. Moreover, Devroye & Kruszewski <cit.> proved that 𝒮(T_n) is highly concentrated around its expected value via exponential tail estimates. These results were extended to k-ary trees by Drmota & Prodinger <cit.>.
More recently, Brandenberger, Devroye & Reddad <cit.> showed that the Horton-Strahler number of a critical Galton-Watson tree with finite variance offspring distribution conditioned to have n vertices always grows as log_4 n in probability, which extends all the results that have been previously obtained on first-order behavior. In a companion paper <cit.>, we go further and we study, among other things, the fluctuations and deterministic oscillations of the Horton-Strahler number of large Catalan trees.
Framework and main results.
Let us give a precise overview of the present article which provides tail estimates and characterizes the first-order behavior of the Horton-Strahler number of critical Galton-Watson trees conditioned to have size n and whose offspring distribution is in the domain of attraction of an α-stable law with α∈[1,2]. This framework extends the finite variance case,
where α= 2, and it includes the so-called spectrally positive Cauchy (or 1-stable) laws.
To that end, let us introduce our basic notations and assumptions.
Throughout this work, μ=(μ(k))_k∈ℕ stands for a probability measure on the set ℕ of nonnegative integers. We shall always assume μ to be non-trivial and critical, namely
μ(0)>0 and ∑_k∈ℕkμ(k)=1.
We view it as the critical offspring distribution of a (rooted and ordered) Galton-Watson tree denoted by τ, which is then almost surely finite. See Section <ref> for a formal definition. Several results are expressed in terms of the following functions
∀ s∈[0,1], φ(s)=∑_k∈ℕ s^kμ(k) and ψ(s)=φ(1-s)-(1-s),
that are strictly convex thanks to (<ref>).
Our first contribution consists in four propositions that connect 𝒮(τ) to other simple characteristics of τ and that hold under the sole assumption that μ is non-trivial and critical. More precisely,
- Proposition <ref> provides an upper bound for the size #τ of τ when 𝒮(τ) is small,
- Proposition <ref> provides a lower bound for the height |τ| of τ when 𝒮(τ) is large,
- Proposition <ref> provides a lower bound for 𝒮(τ) when the height |τ| of τ is large,
- Proposition <ref> provides a lower bound for 𝒮(τ) when the maximal out-degree of τ is large.
We will use these propositions to derive the order of magnitude of 𝒮(τ) under ℙ( · | #τ = n). These estimates are fairly general and they are sufficiently accurate to be interesting in their own right.
Our second goal is to estimate the tail of the Horton-Strahler number of Galton-Watson trees whose offspring distribution has possibly infinite variance. We work under the assumption that
μ belongs to the domain of attraction of a stable law.
We denote the scaling index of (the type of) the limiting law by α: since μ has a finite mean, we get α∈ [1, 2] and since μ is supported on [0, ∞), we only deal with spectrally positive stable laws: namely, their skewness parameter β is equal to 1 and the support of their Lévy measure is included in (0, ∞). By standard results, see e.g. <cit.>, Assumption (<ref>) is equivalent to the existence of a function L : [0, ∞) ↦ (0, ∞) that is slowly varying and such that
μ([n,∞))∼ n^-α L(n) if α∈ [1, 2) and ∑_k=0^n k^2μ(k) -1∼ 2 L(n) if α =2.
Note that ∑_k=0^n k^2μ(k) -1 is nondecreasing and ultimately positive since μ satisfies (<ref>) (we refer to Proposition <ref> for more details).
We first discuss our results in the cases where α∈ (1, 2], and then we consider the cases where α= 1 that feature more complicated behaviors.
The cases where α∈ (1, 2]. In these cases, our results are simply expressed in terms of the index α only. First of all, we prove in Proposition <ref> that if μ is in the domain of attraction of an α-stable law with α∈ (1, 2], then the tail of 𝒮(τ) follows the universal exponential decay
-log_α/(α-1)ℙ(𝒮(τ)≥ n)∼ n,
where log_b x= ln x / ln b stands for the logarithm of x to the base b.
We next discuss the behaviour of 𝒮(τ) under ℙ( · | #τ =n) and to that end, we assume that
μ is aperiodic
(namely that μ is not supported by a proper additive subgroup of ℤ) because this implies that ℙ(#τ=n)>0 for all n that are large enough. Then, we prove the following.
Assume that μ is critical and aperiodic and that it belongs to the domain of attraction of a stable law of index α∈(1,2]. Then, the following convergence holds in probability.
α𝒮(τ)/log_α/(α-1) n under ℙ( · | #τ = n)⟶ 1 .
This extends the work of Brandenberger, Devroye & Reddad <cit.> who proved the α = 2 case under the assumption that μ has a finite variance.
Our proof of Theorem <ref> relies on results in Duquesne <cit.> on the height of α-stable trees and
in Kortchemski <cit.> on the asymptotic behavior of the positive excursion of the random walk (W_n)_n∈ℕ. These results are recalled precisely in Section <ref>, Proposition <ref>.
Let us mention that (<ref>) holds true when μ is not aperiodic by restricting to the integers n such that ℙ(#τ=n)>0.
The 1-stable cases. In these more complex cases, we need to specify a converging sequence of rescaled centered sums of μ-distributed independent random variables.
More precisely, we denote by (W_n)_n∈ℕ a (left-continuous) random walk starting at W_0= 0, whose jump law is given by
ℙ(W_1= k)= μ(k+1), k∈{-1}∪ℕ.
Note that 𝔼[W_1] = 0. Then μ belongs to the domain of attraction of a 1-stable law if and only if there exists a (0, ∞)-valued sequence (a_n)_n∈ℕ tending to ∞ such that
(W_n+b_n)/a_n ⟶ X in distribution, where b_n = n𝔼[W_1 1_{|W_1| >a_n}] and 𝔼[e^-λ X] = e^λlnλ
for all n∈ℕ and λ∈(0,∞). Let us mention here that a_n ∼ n L(a_n) and that b_n/a_n →∞, necessarily.
The law of X is a spectrally positive Cauchy (or 1-stable) law.
Its Fourier transform is given by 𝔼[exp(iuX)] = exp(-(π/2)|u| - iu ln|u|), u∈ℝ. We refer to Proposition <ref> for details.
The asymptotic behavior of 𝒮(τ) in the 1-stable case is expressed in terms of the sequence (b_n)_n∈ℕ and the following function Υ that is derived from ψ in (<ref>) as follows.
∀ s∈ (0, 1), Υ(s)=∫_s^1 dr/(r lnΛ(r)), where Λ(s)=sψ'(s)/(sψ'(s)-ψ(s)).
We refer to Section <ref> for more details on the definition of Υ.
Proposition <ref> asserts that if μ is in the domain of attraction of a 1-stable law, then the tail of 𝒮(τ) satisfies
Υ(ℙ(𝒮(τ)≥ n))∼ n.
In contrast to the cases where α∈ (1, 2] for which we can show that Υ(s) ∼_0^+ log_α/(α-1)(1/s), the asymptotic behavior of Υ when α= 1 depends on the slowly varying function L appearing in (<ref>): see Proposition <ref> for a precise statement. As discussed in Examples <ref>, the following holds.
(𝐚) If L(n) ∼ (ln n)^-1-κ with κ∈ (0, ∞), then
Υ(s) ∼_0^+ ln(1/s)/lnln(1/s) and -lnℙ(𝒮(τ)≥ n)∼ n ln n.
(𝐛) If L(n) ∼exp(-(ln n)^κ) with κ∈ (0, 1), then
Υ(s) ∼_0^+ (1/(1-κ)) ln(1/s)/lnln(1/s) and -lnℙ(𝒮(τ)≥ n)∼ (1-κ) n ln n.
(𝐜) If L(n)∼exp(-ln n/lnln n), then Υ(s)∼_0^+ ln(1/s)/lnlnln(1/s) and
-lnℙ(𝒮(τ)≥ n)∼ n lnln n.
When the Galton-Watson tree τ is conditioned to be large, its Horton-Strahler number is of order Υ(1/b_n), as proved by the following theorem that first handles the case where τ is conditioned to have at least n vertices.
Assume that μ is critical and that it belongs to the domain of attraction of a 1-stable law.
Then, the following convergence holds in probability.
𝒮(τ)/Υ(1/b_n) under ℙ( · | #τ≥ n)⟶ 1,
where Υ is given by (<ref>) and (b_n)_n∈ℕ by (<ref>).
Our proofs of Theorem <ref> and of Theorem <ref> below rely on several results of Kortchemski & Richier <cit.> and of Berger <cit.> that specify the asymptotic behavior of the positive excursion of the random walk (W_n)_n∈ℕ. They are recalled precisely in Section <ref>, Lemma <ref> and Propositions <ref> and <ref>.
As discussed by Kortchemski & Richier <cit.> and Berger <cit.>, it is not clear how one could control τ under ℙ( · | #τ = n) by assuming only that μ([n,∞))∼ L(n)/n as in (<ref>).
Here, we work under the stronger assumption that μ(n)∼ L(n)/n^2, which implies the previous one and also implies that μ is aperiodic.
Assume that μ is critical and that there is a slowly varying function L such that
μ(n)∼L(n)/n^2.
Then, the following convergence holds in probability.
𝒮(τ)/Υ(1/b_n) under ℙ( · | #τ = n)⟶ 1,
where Υ is given by (<ref>) and (b_n)_n∈ℕ by (<ref>).
As already mentioned, when α = 1, the rescaling sequence Υ(1/b_n) depends on the slowly varying function L appearing in (<ref>): as discussed in Examples <ref>, the following holds true.
(𝐚) If L(n) ∼ (ln n)^-1-κ with κ∈ (0, ∞), then
Υ(1/b_n) ∼ln n/lnln n.
(𝐛) If L(n) ∼exp (-(ln n)^κ) with κ∈ (0, 1), then
Υ(1/b_n) ∼1/1-κln n/lnln n.
(𝐜) If L(n)∼exp(-ln n/lnln n), then
Υ(1/b_n) ∼ln n/lnlnln n.
*Organisation of paper. In Section <ref>, we properly set our framework and we recall from previous works the tools that we use later on in the paper: Section <ref> is devoted to Galton-Watson trees and Section <ref> to known limit theorems for random walks and Galton-Watson trees.
In Section <ref>, we study the distribution of the Horton-Strahler number of Galton-Watson trees. In Section <ref>, we first establish new technical results on Horton-Strahler numbers (especially in Lemmas <ref> and <ref>). Section <ref> focuses on proving Propositions <ref>, <ref>, <ref>, and <ref>, which link the Horton-Strahler number to the size, height, and maximal out-degree of Galton-Watson trees. Section <ref> is devoted to the tail asymptotics for Galton-Watson trees whose offspring distribution belongs to the domain of attraction of a stable law: in particular, we prove Proposition <ref>, which is one of the main results of the paper, and we discuss Examples <ref> (𝐚), (𝐛), and (𝐜). Section <ref> is devoted to the proof of Theorems <ref>, <ref> and <ref>, and we discuss Examples <ref> (𝐚), (𝐛), and (𝐜) at the end of Section <ref>.
§ FRAMEWORK AND TOOLS
In this section, we recall a set of well-known results that are used in the rest of the article and in the proofs of
Theorems <ref>, <ref> and <ref>. With the exception of Lemma <ref>, this section contains no new result.
§.§ Galton-Watson trees
Rooted ordered trees. We recall Ulam's formalism on rooted ordered trees. Let ℕ^*={1,2,3,...} be the set of positive integers and let 𝕌 be the following set of finite words
𝕌=⋃_n∈ℕ(ℕ^*)^n
with the convention (ℕ^*)^0={∅}. The set of words 𝕌 is totally ordered by the lexicographic order denoted by ≤. Let u=(u_1,...,u_n)∈𝕌 and v=(v_1,...,v_m)∈𝕌; we write u*v=(u_1,...,u_n,v_1,...,v_m)∈𝕌 for the concatenation of u and v. We denote by |u|=n the height of u, and if n≥ 1 then we denote by ←u=(u_1,...,u_n-1) the parent of u. We also say that u is a child of v when ←u=v. The genealogical order ≼ is a partial order on 𝕌 defined by u≼ v⟺∃ u'∈𝕌, v=u*u'. When u≼ v, we will say that u is an ancestor of v. When u≼ v but u≠ v, we write u≺ v.
Finally, we write u∧ v∈𝕌 for the most recent common ancestor of u and v, that is their common ancestor with maximal height.
We say that a subset t of 𝕌 is a tree when the following is satisfied:
(a) ∅∈ t,
(b) for all u∈ t, if u≠∅ then ←u∈ t,
(c) For all u∈ t, there exists an integer k_u(t)∈ℕ such that u * (i) ∈ t ⟺ 1≤ i ≤ k_u(t).
We denote by 𝕋 the space of all trees. □
Let t∈𝕋. The size of t is simply the (possibly infinite) number # t of its vertices and we say that t is finite if # t< ∞. As a graph, the edges of t are given by the unordered pairs {u, ←u} for all u ∈ t∖{∅}. Therefore the degree of u∈ t is k_u(t)+1 if u≠∅ and k_∅(t) otherwise. Namely, k_u(t) is the out-degree of u (alternatively, if one views t as a family tree whose ancestor is ∅, then k_u(t) stands for the number of children of u). We use the following notations for the height of t and its maximal out-degree.
|t|=max_u∈ t|u| and Δ(t)=max_u∈ tk_u(t).
We also denote the subtree stemming from u∈ t and the tree pruned at u respectively by
θ_u t={v∈𝕌 : u*v∈ t} and Cut_u t= t∖{v∈ t : u≺ v}.
Observe that θ_u t and Cut_u t both belong to 𝕋.
Galton-Watson trees and the Many-To-One Principle.
Let us equip the set of trees with the sigma-field 𝒢(𝕋) generated by the sets { t ∈𝕋 : u ∈ t }, where u ranges in 𝕌. Formally, a random tree is a function τ : Ω→𝕋 that is (ℱ, 𝒢(𝕋))-measurable.
Let μ=(μ(k))_k∈ℕ be a probability measure on ℕ. A Galton-Watson tree with offspring distribution μ (a (μ)-tree, for short) is a random tree τ that satisfies the following.
(a) k_∅ (τ) has law μ.
(b) For all k∈ℕ^* such that μ(k) > 0, the subtrees θ_(1)τ, … , θ_(k)τ under ℙ( · | k_∅(τ) = k) are independent with the same law as τ under ℙ. □
It is well-known that a (μ)-tree τ is almost surely finite if and only if its offspring distribution is subcritical, or critical and non-trivial, namely if (<ref>) holds, and in that case, for every finite tree t∈𝕋,
ℙ(τ = t)= ∏_u∈ tμ(k_u(t)).
As observed by Kesten <cit.>, a critical
(μ)-tree conditioned to be large locally converges in law
to a tree τ_∞ with a single infinite line of descent and whose law can be informally described as follows:
all individuals of τ_∞ reproduce independently, the individuals of the infinite line of descent
reproduce according to the μ-size-biased distribution (kμ(k))_k∈ℕ whereas the others reproduce according to μ. More precisely, we introduce the following.
Let μ be a non-trivial critical offspring distribution. A size-biased (μ)-tree is a random tree
τ_∞ that satisfies the following.
(a) For all n∈ℕ, there is a unique u∈τ_∞ such that |u|= n and # (θ_u τ_∞)= ∞. We denote this vertex by U_n. Note that U_0= ∅ and that ←U_n+1=U_n. Hence, there exists a ℕ^*-valued sequence of random variables (J_n)_n∈ℕ^* such that U_n is the word ( J_1, …, J_n) for all n∈ℕ^*.
(b) The random variables (k_U_n (τ_∞) , J_ n+1), for n∈ℕ, are independent and distributed as follows:
∀ j,k∈ℕ^*, ℙ(J_n+1=j ; k_U_n (τ_∞) =k)= 1_{ j≤ k }μ(k).
(c) Conditionally given (k_U_n(τ_∞),J_n+1)_n∈ℕ, the finite subtrees stemming from the infinite line of descent, which are the θ_U_n∗ (j)τ_∞ for j ∈{1,… ,k_U_n(τ_∞)}\{J_n+1} and n∈ℕ, are independent (μ)-trees. □
Note that the individual U_n+1 on the infinite line of descent has J_n+1-1 siblings strictly on the left hand side and k_U_n (τ_∞) -J_n+1 siblings strictly on the right hand side. Their joint law is given by the following bivariate generating function:
∀ s, r ∈ [0, 1], 𝔼[r^J_n+1-1 s^k_U_n (τ_∞) -J_n+1]= (φ (r) -φ (s))/(r-s),
where φ is given by (<ref>) and where the quotient is equal to φ'(r) when r= s.
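To make Definition <ref> and the display above concrete, the following Python sketch (ours; μ is passed as a finite dictionary, a simplifying assumption) samples the spine variables (k_U_m(τ_∞), J_m+1): the out-degree along the infinite line of descent has the size-biased law (kμ(k))_k and, given k, the position J of the spine child is uniform on {1,…,k}, which is exactly the law displayed in (b).

import random

def spine_steps(mu, n):
    # mu: dict {k: mu(k)} of a critical offspring law
    ks = [k for k in mu if k >= 1]
    weights = [k * mu[k] for k in ks]   # size-biased law (k mu(k))_k
    steps = []
    for _ in range(n):
        k = random.choices(ks, weights)[0]
        j = random.randint(1, k)        # P(J = j ; k) = 1_{j <= k} mu(k)
        steps.append((k, j))
    return steps

print(spine_steps({0: 0.5, 2: 0.5}, 5))  # critical binary branching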
As mentioned above, size-biased trees are the local limits of critical Galton-Watson trees conditioned to be large and therefore appear in many results concerning the asymptotic behavior of branching processes: we refer to Lyons, Pemantle & Peres <cit.>, Aldous & Pitman <cit.> and Abraham & Delmas <cit.> for general results in this vein. One key tool involving size-biased (μ)-trees is
the so-called Many-To-One Principle, which is part of folklore (see e.g. Duquesne <cit.> for a proof) and which we use in our article in the following form.
Let τ be a (μ)-tree and let τ_∞ be a size-biased (μ)-tree. We keep the notations of Definition <ref>.
Then, for all n∈ℕ and for all bounded functions G_1:𝕋×𝕌⟶ℝ and G_2:𝕋⟶ℝ, it holds that
𝔼[∑_u∈τ 1_{ |u|=n} G_1(Cut_uτ,u) G_2(θ_uτ)]=𝔼[ G_1(Cut_U_nτ_∞,U_n)] 𝔼[G_2(τ)].
Lukasiewicz path associated with a tree. We recall here a key combinatorial tool to study Galton-Watson trees via random walks.
Let t ∈𝕋 be finite and
let u(t)=(u_j(t))_0≤ j < # t be the sequence of its vertices listed in increasing lexicographic order:
u_0 (t)= ∅ < u_1(t) < … < u_j (t)< u_j+1 (t)< … <u_# t-1(t). The sequence u(t) is often called the depth-first exploration of t. We then define a ℤ-valued path W(t)= (W_j(t))_0≤ j ≤# t by setting
W_0(t) = 0 and
W_j+1 (t) = W_j(t) + k_u_j(t) (t) - 1
for all 0≤ j < #t, that is the Lukasiewicz path of t. □
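Concretely, W(t) is computed by a single pass over the vertices in lexicographic order; here is a Python sketch (ours) in the Ulam-word encoding used earlier, where tuples compare lexicographically.

def out_degree(t, u):
    k = 0
    while u + (k + 1,) in t:
        k += 1
    return k

def lukasiewicz_path(t):
    # W_0 = 0 and W_{j+1} = W_j + k_{u_j}(t) - 1 along the
    # depth-first (lexicographic) exploration of t
    walk = [0]
    for u in sorted(t):
        walk.append(walk[-1] + out_degree(t, u) - 1)
    return walk

assert lukasiewicz_path({(), (1,), (2,), (2, 1)}) == [0, 1, 0, 0, -1]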
In probability, Lukasiewicz paths originate from queuing systems theory to study the waiting line of a single server subject to the Last-In-First-Out policy and they have been used by Le Gall & Le Jan <cit.> to define Lévy trees. In the following lemma, we recall that Lukasiewicz paths are adapted processes that completely encode finite trees and that are particularly well-suited to study Galton-Watson trees (see e.g. Le Gall <cit.> and <cit.> for more details).
Let t∈𝕋 be finite. Let τ be a (μ)-tree where μ is non-trivial and critical. Then the following holds true.
(i) Lukasiewicz paths provide a one-to-one correspondence between the set of finite trees and the set of finite nonnegative excursions of left-continuous walks, which is defined by
⋃_n∈ℕ^*{ (w_j)_0≤ j≤ n∈ℤ^n+1 : w_0 = 0, w_n = -1, w_j≥ 0 and w_j+1 - w_j ≥ -1 for all 0≤ j < n }.
(ii) For all m∈ℕ, we denote by R_mt the tree t restricted to its m+1 first vertices (with respect to the lexicographic order). Namely,
R_m t = {u_j(t) : 0≤ j≤min(m,#t-1)}
where (u_j(t)) stands for the vertices of t listed in lexicographic order. Then, R_m t is a measurable function
of ( W_j(t) ; 0 ≤ j ≤min(m,# t) ).
(iii) Let (W_n)_n≥ 0 be a ℤ-valued random walk whose jump distribution is given by (<ref>).
We set 𝙷_1= inf{ j∈ℕ : W_j = -1}, which is an a.s. finite stopping time since 𝔼[W_1]=0.
Then,
( W_j (τ) )_0≤ j≤#τ(law)=( W_j )_0≤ j≤𝙷_1.
In particular, #τ and 𝙷_1 have the same law.
(iv) More generally, for all p∈ℕ, we set
𝙷_0= 0 and 𝙷_p = inf{ j∈ℕ : W_j= -p } .
Then there is an i.i.d. sequence (τ_p)_p∈ℕ of (μ )-trees such that
∀ p∈ℕ, (p+W_𝙷_p +j)_0≤ j≤𝙷_p+1-𝙷_p = ( W_j (τ_p) )_0≤ j ≤#τ_p .
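The inverse of the bijection in (i) is equally explicit: the following Python sketch (ours) rebuilds a tree from a path in the set displayed in (i), reading off the out-degrees k_j = w_j+1 - w_j + 1 along the depth-first order.

def tree_from_lukasiewicz(walk):
    degrees = [walk[j + 1] - walk[j] + 1 for j in range(len(walk) - 1)]
    tree, stack = set(), [()]
    for k in degrees:
        u = stack.pop()           # next vertex in lexicographic order
        tree.add(u)
        # push the children in reverse so that u*(1) is explored first
        stack.extend(u + (i,) for i in range(k, 0, -1))
    return tree

assert tree_from_lukasiewicz([0, 1, 0, 0, -1]) == {(), (1,), (2,), (2, 1)}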
§.§ Limit theorems.
In this section, we recall - mostly from Bingham, Goldie & Teugels <cit.> and Feller <cit.> - limit theorems for sums of i.i.d. random variables belonging to the domain of attraction of stable laws. We also recall useful limit theorems for Galton-Watson trees and random walks from Berger <cit.>, Duquesne <cit.>, Kortchemski <cit.>, and Kortchemski & Richier <cit.>.
Regularly and slowly varying functions. Recall that a measurable and locally bounded function l:(0,∞)→ (0,∞) is slowly varying at infinity (resp. at 0^+) if l(cx)∼ l (x) as x→∞ (resp. as x→ 0^+) for all c∈ (0, ∞).
Also recall that f:(0,∞)→ (0,∞) is regularly varying of index α∈ℝ at ∞ (resp. at 0^+)
if there exists a slowly varying function l at ∞ (resp. at 0^+) such that f(x)=x^α l (x).
Below we gather in a single proposition several well-known results on slowly and regularly varying functions that are used in this article.
Let l be a slowly varying function at ∞. Then the following holds true.
(i) (Potter's bound) For all ε∈ (0, ∞) and all c ∈ (1, ∞),
there exists x_0∈ (0, ∞) such that for all x∈ [x_0, ∞) and all λ∈ [1, ∞), it holds
1/cλ^-ε≤l( xλ )/l(x)≤ c λ^ε .
Therefore, ln l (x)= o (ln x) and if f is regularly varying with index ρ∈ℝ\{ 0}, then ln f(x) ∼ρln x. Moreover, if x_n∼ y_n→∞ then l(x_n)∼ l(y_n) and f(x_n)∼ f(y_n).
(ii) (Karamata's Abelian Theorem for Tails) Let ρ∈ (0, ∞). Then ∫^∞ y^-1-ρ l (y) dy < ∞ and, as x→∞,
∫_x^∞ y^-1-ρ l (y) dy ∼ (1/ρ) x^-ρ l (x) and ∫_1^x y^ρ-1 l (y) dy ∼ (1/ρ) x^ρ l (x) .
(iii) Suppose that ∫^∞ y^-1 l (y) dy <∞ and set l̂ (x) = ∫_x^∞ y^-1 l (y) dy. Then l̂ is slowly varying at ∞ and lim_x→∞ l̂ (x) / l (x) = ∞.
(iv) (Monotone Density Theorem) Let u: [0, ∞) →ℝ be a locally Lebesgue integrable function and set U(x)= ∫_0^x u(y) dy for all x∈ [0,∞). Assume that there are c,ρ∈ (0, ∞) such that U(x) ∼ c x^ρ l(x) as x→∞ and assume furthermore that u is ultimately monotone. Then u(x) ∼ cρ x^ρ-1 l(x) as x→∞.
(v) (Karamata's Abelian Theorem for Laplace Transform) Let U: [0, ∞) → [0, ∞) be a measurable function such that Û(λ) := λ∫_0^∞ e^-λ x U(x) dx < ∞ for all λ∈ (0, ∞). Assume that there are c ∈ (0, ∞) and ρ∈ (-1, ∞) such that U (x) ∼ (c/Γ (1+ρ)) x^ρ l (x) as x→∞. Then
Û (λ) ∼ cλ^-ρ l(1/λ) as λ→ 0^+.
Proof. For (i), see e.g. <cit.>. For (ii), see e.g. <cit.>. For (iii), see e.g. <cit.>. For (iv), see e.g. <cit.>. For (v), see e.g. <cit.>.
Limit theorems: the cases where α∈ (1, 2]. We next recall equivalent formulations of the property for a probability measure μ on ℕ to belong to the domain of attraction of a stable law of index α∈ (1, 2] (we handle 1-stable laws separately).
Let μ be a probability measure on ℕ that satisfies (<ref>). Recall from (<ref>) the definition of ψ. Let (W_n)_n∈ℕ be a
random walk whose jumps distribution is specified in (<ref>). Let α∈ (1, 2]. Then, the following assertions are equivalent.
(a) μ belongs to the domain of attraction of an α-stable law.
(b) There exists L : (0, ∞)→ (0, ∞) that varies slowly at ∞ such that if α∈ (1, 2) then μ ([n ,∞))∼
n^-α L(n), and if α = 2 then
∑_0≤ k≤ n k^2μ (k) - 1∼ 2L(n) which is ultimately positive by (<ref>).
(c) If α∈ (1, 2), then ψ (s) ∼_0^+ ((α -1)/Γ (2-α)) s^α L(1/s); if α = 2, then ψ (s) ∼_0^+ s^2 L(1/s).
(d) There exists a (0, ∞)-valued sequence (a_n)_n∈ℕ tending to ∞ such that (1/a_n)W_n converges in law to the spectrally positive α-stable random variable X_α whose Laplace exponent is given for all λ∈ [0, ∞) by ln 𝔼[ exp ( -λ X_α)] =((α -1)/Γ (2-α))λ^α if α∈ (1, 2) and by λ^2 if α = 2.
Moreover, if one of the four equivalent assumptions from above holds true, then
a^α _n∼ n L(a_n) and a_n∼
n^1/α L^*(n) where the function L^*:x∈ (0, ∞)↦ x^-1/αinf{ y∈ (0, ∞) : y^α / L(y) > x } is slowly varying at ∞.
Proof. For (a) ⇔ (b), see e.g. <cit.>. The equivalence (a)⇔ (d) follows from the definition of the domain of attraction of a stable law: the limiting law is necessarily spectrally positive since μ is supported by ℕ and among the spectrally positive α-stable types, it is always possible to choose a centered one as α∈ (1, 2] (see e.g <cit.> and <cit.>).
For (b) ⇔ (c) see e.g. <cit.>. More precisely, recall from (<ref>) the definitions of φ and ψ. Then, for all λ∈ (0, ∞), set
f_1 (λ):= φ (e^-λ) -1 + λ = ψ (1 - e^-λ)+ (1/2)λ^2+O (λ^3).
If α∈ (1, 2), then <cit.> asserts that (b) is equivalent to f_1(λ) ∼_0^+ ((α -1)/Γ (2-α))λ^α L( 1/λ), which implies that (b) ⇔ (c) in these cases (with Proposition <ref> (i)).
If α = 2, then <cit.> asserts that (b) is equivalent to f_1(λ) ∼_0^+λ^2((1/2) +L( 1/λ)) and (<ref>) implies that (b) ⇔ (c).
Let us prove the last point of the proposition. We assume that (a-d) hold true.
By Grimvall <cit.>, lim_n→∞𝔼[ exp (-(λ/a_n) W_n) ]= 𝔼[ e^-λ X_α ] for all λ∈ (0, ∞). Observe that 𝔼[ exp (-λ W_n) ] = (e^λ (f_1 (λ) +1 -λ))^n. If α∈ (1, 2),
we easily get nf_1 (λ /a_n) ∼ ((α -1)/Γ (2-α))λ^α as n→∞ for all λ∈ (0, ∞) and thus a^α_n∼ n L(a_n). If α = 2, we get
nλ/a_n + n ln[ 1 - (λ/a_n)(1 -(λ/a_n)( 1/2 + L (a_n) ) (1+ o(1)) ) ] ∼ (n L(a_n)/a_n^2)λ^2,
which implies that
a_n^2 ∼ n L(a_n). In both cases, we observe that a_n^α / L(a_n)∼ n and we use <cit.> to complete the proof of the proposition.
We next recall standard results on the size of Galton-Watson trees, which are expressed in terms of random walks
thanks to Proposition <ref>.
Let μ be a probability measure on ℕ that satisfies (<ref>).
Let (W_n)_n∈ℕ be a random walk whose jumps distribution is specified by (<ref>).
Recall from (<ref>) the definition of the stopping times (𝙷_p)_p∈ℕ.
Let τ be a (μ)-tree. Then the following holds true.
(i) (𝙷_p)_p∈ℕ is a random walk with positive jumps.
(ii) (Kemperman) ℙ(𝙷_p= n) = (p/n) ℙ(W_n = -p) for all n,p∈ℕ^*.
(iii) Suppose that μ satisfies Proposition <ref> (a-d) and is aperiodic. Then, ℙ(#τ = n) ∼ c_α/(na_n) and
ℙ(#τ≥ n) ∼α c_α/a_n, where c_α is the value at 0 of the (continuous version of the) density of X_α, and where (a_n) and X_α are as in Proposition <ref> (d).
(iv) Suppose that μ satisfies Proposition <ref> (a-d).
Recall that |τ| stands for the height of τ. Then,
ℙ(|τ|≥ n)/ψ(ℙ(|τ|≥ n))∼ (α-1)n.
Proof. Note that (i) is an immediate consequence of the Markov property and of the left-continuity of W. For
(ii), see e.g. <cit.>. Let us prove (iii). By (ii) and by Proposition <ref> (iii), we see that ℙ(#τ= n)= (1/n) ℙ(W_n = -1). We next use Gnedenko's local limit theorem (see e.g. <cit.>) to get lim_n→∞|a_n ℙ(W_n = -1) - c_α | = 0 and thus ℙ(#τ = n) ∼ c_α/(na_n).
Since (a_n) varies regularly with index 1/α by Proposition <ref>, we get ℙ(#τ≥ n) ∼α c_α/a_n by Karamata's Abelian Theorem for Tails (see Proposition <ref> (ii)). For (iv), see Slack <cit.>. We next recall two limit theorems that are used to prove Theorem <ref>.
One follows from the convergence of rescaled (μ)-trees to stable trees due to Duquesne <cit.>.
The other is the uniform integrability of the density of the law of (roughly speaking)
the Lukasiewicz path of a (μ)-tree τ under ( · | #τ = n) with respect to ( · | #τ≥ n) that has been proved in Kortchemski <cit.>.
Assume that μ satisfies (<ref>), that it is aperiodic, and that it belongs to the domain of attraction of a stable law of index α∈(1,2]. More precisely, we assume that Proposition <ref> (d) holds true.
Let τ be a (μ)-tree. Recall from (<ref>)
that |τ| stands for the maximal height of τ, and that W(τ) stands for its Lukasiewicz path as in Definition <ref>. Then, the following holds true.
(i) There exists a random variable M∈(0,∞) such that
(a_n/n)|τ| under ℙ( · | #τ=n) converges in law to M as n→∞.
(ii) Let r∈(0,1). Then, for all large enough n∈ℕ, there is a function D^(r)_n :ℕ→ [0,∞) such that
𝔼[ f( W_min(⌊ nr ⌋,·) (τ)) | #τ = n ]=𝔼[ f(W_min(⌊ nr ⌋,·) (τ)) D_n^(r)( W_⌊ rn⌋(τ) ) | #τ≥ n ]
for all bounded functions f :ℕ^ℕ→ [0,∞). Moreover, these functions satisfy
lim_c→∞lim sup_n→∞𝔼[ 1_{ D_n^(r) (W_⌊ rn⌋(τ))≥ c} D_n^(r)( W_⌊ rn⌋(τ) ) | #τ≥ n ] =0.
Proof. The point (i) is a consequence of the convergence of rescaled Galton-Watson trees to the α-stable tree: see Duquesne <cit.>. Here, M is the height of the normalized α-stable tree, which is a (0, ∞)-valued random variable. See Kortchemski <cit.> for (<ref>). For (<ref>), <cit.> shows that D_n^(r)(a_n · ) converges uniformly on all compact intervals of (0,∞) towards a continuous function. Moreover, the laws of a_n^-1W_⌊ rn⌋(τ) under ℙ( · | #τ=n) are tight in (0,∞) (see <cit.>), which completes the proof of (<ref>) by (<ref>).
Limit theorems: the 1-stable case. We now consider the spectrally positive 1-stable law, which features more complicated behaviors.
Let μ be a probability measure on ℕ that satisfies (<ref>). Recall from (<ref>) the definition of ψ. Let (W_n)_n∈ℕ be a random walk whose jumps distribution is given by (<ref>).
(i) The following assertions are equivalent.
(a)μ belongs to the domain of attraction of a 1-stable law.
(b) There exists L : (0, ∞)→ (0, ∞) that varies slowly at ∞ such that ∫^∞ y^-1 L(y) dy <∞ and such that
μ ([n ,∞)) ∼ n^-1 L(n).
(c) There exists a (0, ∞)-valued sequence (a_n)_n∈ℕ tending to ∞ such that (1/a_n) (W_n+b_n) converges in law to X, where b_n = n 𝔼[W_1 1_{ |W_1| >a_n} ] and where X is the spectrally positive 1-stable random variable whose Laplace exponent is given for all λ∈ (0, ∞) by ln 𝔼[ exp ( -λ X)] =λlnλ.
(ii) Assume that (a-c) hold true. Then, ∑_k≥ n kμ (k) ∼ℓ (n) where ℓ is the slowly varying function defined by
∀ x∈(0,∞), ℓ (x) = ∫_x^∞ (L(y)/y) dy.
This implies that ψ (s) ∼_0^+ s ℓ (1/s). Moreover, L(x) = o(ℓ (x)) as x→∞.
(iii) Assume that (a-c) hold true. Then,
a_n∼ n L(a_n) and a_n∼
n L^*(n) where the function L^*: x∈ (0, ∞)↦ x^-1inf{ y∈ (0, ∞) : y/ L(y) > x } varies slowly at ∞. Moreover, b_n ∼ n ℓ (a_n) where ℓ is given by (<ref>) and therefore a_n=o(b_n).
Proof. The equivalences (a) ⇔ (b) and (a) ⇔ (c) are proved as in Proposition <ref> (for the form of b_n, see e.g. <cit.>). To prove (ii), first observe that
∑_k≥ n kμ (k) = ∑_j≥ 1μ ([max(n,j) , ∞))= n μ ([n , ∞)) +
∑_j>nμ ([ j , ∞)). Then note that n μ ([n , ∞)) ∼ L(n) and that ∑_j>nμ ([ j , ∞)) ∼ℓ (n) where ℓ is defined by (<ref>). By Proposition <ref> (iii), L(n)= o(ℓ (n)) and we get ∑_k≥ n kμ (k) ∼ℓ (n). By <cit.>, this is equivalent to f_1(λ) ∼_0^+λℓ (1/λ)
where f_1 is given by (<ref>). This implies ψ (s) ∼_0^+ s ℓ (1/s) by Proposition <ref> (i), which completes the proof of (ii).
Then, we prove a_n∼ n L(a_n) and a_n∼
n L^*(n) as in Proposition <ref>. Next observe that b_n = n ∑_k>a_n kμ (k+1)∼ nℓ (a_n). Thus a_n/b_n ∼ L(a_n)/ ℓ (a_n) → 0 by (ii). This completes the proof of (iii).
Suppose that Proposition <ref> (c) holds true.
Since a_n=o(b_n), it holds b_n^-1 W_⌊ ns ⌋→ -s in law, and thus in probability, for all s∈ [0, ∞). Standard arguments (or a stronger result such as Skorokhod <cit.>) entail the following convergence
( (1/b_n) W_⌊ ns⌋ ; s∈ [0, ∞) )⟶( -s ; s∈ [0, ∞))
in probability for the topology of uniform convergence on all compact intervals. Recall from (<ref>) the definition of the stopping times (𝙷_p)_p∈ℕ. Then, (<ref>) implies for all x∈ [0, ∞) that
(1/n) 𝙷_⌊ b_n x ⌋⟶ x
in probability. Namely the law of 𝙷_1 is relatively stable (see e.g. <cit.>). Since the total size of a (μ)-tree τ has the same distribution as 𝙷_1 (by Proposition <ref> (iii)), the law of #τ is thus relatively stable.
By use of Berger <cit.> and Kortchemski & Richier <cit.>, we get the following.
Let μ be a probability measure on ℕ that satisfies (<ref>) and that belongs to the domain of attraction of a 1-stable law. Let τ be a (μ)-tree. Then,
ℙ(#τ≥ n) ∼ L(b_n)/(b_n ℓ (b_n)),
where (b_n), L and ℓ are as in Proposition <ref>. If the more restrictive assumption (<ref>) holds, then
ℙ(#τ=n)∼ (1/n) ℙ(#τ≥ n)∼ L(b_n)/b_n^2.
Proof. Since b_n / n ∼ℓ (a_n) → 0 as n→∞, (<ref>) implies that there is a sequence (c_n) tending to ∞ such that (1/c_n)𝙷_n→ 1 in probability: namely, the law of 𝙷_1 is relatively stable. Then, <cit.> asserts the existence of a function l that slowly varies at ∞ such that
∑_0≤ k≤ n ℙ( 𝙷_1≥ k)∼ l(n) and 1- 𝔼[ e^-λ𝙷_1 ] ∼_0^+λ l (1/λ). Thus, (<ref>) easily entails that ℓ (a_n) l (n) → 1. By Berger <cit.>, we get ℓ (a_n)∼ℓ (b_n) and thus,
∑_0≤ k≤ n ℙ(#τ≥ k) = ∑_0≤ k≤ n ℙ( 𝙷_1 ≥ k) ∼ 1/ℓ (b_n). By Kortchemski & Richier <cit.>, we get (<ref>)
because, within the notations of <cit.>, we necessarily get Λ (n) ∼ 1/ℓ (b_n). Moreover, Berger <cit.> asserts that if (<ref>) holds then ℙ(W_n = -1) ∼ nL(b_n)/b_n^2, so (<ref>) follows from Kemperman's identity (Proposition <ref> (ii)).
Although the difficult part of (<ref>) is the very content of Kortchemski & Richier <cit.>, the relatively explicit form of the right member of (<ref>) seems novel under the sole assumption that μ belongs to the domain of attraction of a 1-stable law. □
We next recall two limit theorems on the maximal out-degree of a (μ)-tree when μ belongs to the domain of attraction of a 1-stable law. They are part of more general results due to Kortchemski & Richier <cit.>.
Let τ be a (μ)-tree with offspring distribution μ that satisfies (<ref>) and that belongs to the domain of attraction of a 1-stable law. Recall from (<ref>) that the notation Δ (τ) stands for the maximal out-degree of τ. Then, the following holds true.
(i) The following convergence holds in distribution on [0, ∞):
(1/b_n)Δ(τ) under ℙ( · | #τ≥ n) ⟶ J,
where the law of J is given by ℙ(J≥ x)=1/x for all x∈ [1, ∞).
(ii) Under the more restrictive assumption (<ref>), the following convergence holds in probability:
(1/b_n)Δ(τ) under ℙ( · | #τ = n) ⟶ 1.
Proof. For (i), see <cit.>. For (ii), see <cit.>.
Recall that W(τ) stands for the Lukasiewicz path of τ, as in Definition <ref>. We conclude this section by recalling a result from Kortchemski & Richier <cit.> showing that the law
of W(τ) under ℙ( · | #τ= n) is close in total variation distance to the law of
the Vervaat transform of the path (W_0, W_1, …, W_n-1 , -1) under ℙ.
More precisely, let (W_n)_n∈ℕ be as in Proposition <ref>. For all n∈ℕ^*, we introduce the following notations.
I_n = - min_ 0≤ j ≤ n-1 W_j and σ_n = inf{0 ≤ k ≤ n - 1 : W_k = -I_n }
and
Z_j^(n) = W_σ_n +j + I_n if 0 ≤ j < n - σ_n, and Z_j^(n) = I_n - 1 + W_j-(n-σ_n) if n - σ_n ≤ j ≤ n.
Namely, Z^(n) is constructed by reading the increments of
(W_0,W_1,...,W_n-1,-1) from left to right in cyclic order by starting at time σ_n: this is a kind of Vervaat transform of (W_0,W_1,...,W_n-1,-1) (see Vervaat <cit.> for more details).
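For concreteness, here is a direct Python transcription (our sketch) of this construction from the walk values W_0, …, W_n-1.

def vervaat(W, n):
    # builds (Z_j^{(n)})_{0 <= j <= n} from W = (W_0, ..., W_{n-1})
    I = -min(W[:n])                                  # I_n
    sigma = min(j for j in range(n) if W[j] == -I)   # sigma_n
    Z = [W[sigma + j] + I for j in range(n - sigma)]
    Z += [I - 1 + W[j - (n - sigma)] for j in range(n - sigma, n + 1)]
    return Z

# Toy check: the transform of (0, 1, -1, 0) is a path from 0 to -1
# that stays nonnegative before its final step.
assert vervaat([0, 1, -1, 0], 4) == [0, 1, 0, 1, -1]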
We shall use Kortchemski & Richier <cit.>, which we re-state for convenience in the following proposition.
Assume that μ satisfies (<ref>) and (<ref>). We keep the above notation. Then,
sup_A∈ℬ(ℝ^n+1 )| ℙ(W (τ) ∈ A | #τ = n)- ℙ(Z^(n)∈ A) | ⟶ 0,
where ℬ(ℝ^n+1) stands for the Borel sigma-field of ℝ^n+1.
Proof. See Kortchemski & Richier <cit.> and note that ℙ(I_n>1)→ 1 by e.g. (<ref>).
§ DISTRIBUTION OF THE HORTON-STRAHLER NUMBER OF GALTON-WATSON TREES
§.§ Alternative definitions of the Horton-Strahler number and basic results
In this section, we prove basic results on the Horton-Strahler number: first, we provide alternative definitions; second,
we state a key upper bound in Lemma <ref>; and finally, we prove in Lemma <ref> a recursive equation satisfied by the tail of the Horton-Strahler number of (μ)-trees, which is the starting point of the analysis of their asymptotic behavior.
Let us first recall alternative definitions of the Horton-Strahler number. The first one uses the Horton pruning of a finite tree t∈𝕋, which is defined as follows: remove the leaves of t and merge each line into one edge (a line in t is a maximal sequence of vertices v_0, …, v_n ∈ t such that k_v_1 (t) = … = k_v_n-1 (t) = 1 and ←v_j+1 = v_j for all 0≤ j < n). The resulting tree is called the Horton-pruned tree, which we denote here by 𝙿𝚛𝚞𝚗 (t). Then, S(t) is the minimal number of Horton prunings that are necessary to obtain
{∅} from t. Namely,
S(t)= min{n ∈ℕ : 𝙿𝚛𝚞𝚗_n (t) = {∅}},
where 𝙿𝚛𝚞𝚗_n stands for the n-th iteration of 𝙿𝚛𝚞𝚗 for all n∈ℕ^* and where 𝙿𝚛𝚞𝚗_0 stands for the operation that merges each line into one edge. We refer to Kovchegov & Zaliapin <cit.> for a proof and more details.
Another useful definition uses embeddings of perfect binary trees. More precisely, let t, t^'∈𝕋 be finite. Then, ϕ: t→ t^' is an embedding if it is injective and if
ϕ (u∧ v) = ϕ (u) ∧ϕ (v) for all u, v ∈ t. For all n∈ℕ, we denote by T_2,n = ⋃_0≤ k≤ n{ 1,2}^k the n-perfect binary tree, with the convention that { 1, 2}^0 = {∅}. Then, for all finite tree t∈𝕋,
S(t)= max{ n∈ℕ : ∃ϕ : T_2, n→ t embedding}.
This result seems to be `part of the folklore'. Let us however provide a short proof.
Proof of (<ref>). We reason by induction on the height of t. Note that (<ref>) obviously holds true
if |t|=0. Now assume that k:=k_∅(t)≥ 1 and let ϕ:T_2,n+1→ t be an embedding; we separate the cases according to the positions of ϕ(∅), ϕ(1), and ϕ(2).
If (i)≼ϕ(∅) with 1 ≤ i ≤ k, then we have (i)≼ϕ(u) for all u∈ T_2,n+1 by definition, and we check that setting ϕ(u)=(i)*ϕ_i(u) defines an embedding ϕ_i:T_2,n+1→θ_(i)t. Conversely, an embedding ϕ_i:T_2,n+1→θ_(i)t induces an embedding ϕ:T_2,n+1→ t such that (i)≼ϕ(∅). Thus,
max{n∈ℕ : ∃ϕ : T_2, n→ t embedding, (i)≼ϕ(∅)}= S(θ_(i)t).
Otherwise, ∅=ϕ(∅)=ϕ(1)∧ϕ(2), so there are distinct 1≤ i, j ≤ k such that (i)≼ϕ(1) and (j)≼ϕ(2). As before, we see that setting ϕ((1)*u)=(i)*ϕ_i(u) and ϕ((2)*u)=(j)*ϕ_j(u) respectively defines two embeddings ϕ_i:T_2,n→θ_(i)t and ϕ_j:T_2,n→θ_(j)t. Conversely, two embeddings ϕ_i:T_2,n→θ_(i)t and ϕ_j:T_2,n→θ_(j)t induce an embedding ϕ:T_2,n+1→ t such that ϕ(∅)=∅, (i)≼ϕ(1), and (j)≼ϕ(2). Thus,
max{n∈ℕ : ∃ϕ : T_2, n→ t embedding, (i)≼ϕ(1),(j)≼ϕ(2)}=1+min(S(θ_(i)t),S(θ_(j)t)).
Taking the maximum over ϕ(∅),ϕ(1),ϕ(2) and recalling Definition <ref> concludes the proof.
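Both characterizations are consistent with the classical recursion for the Horton-Strahler number (a leaf has number 0 and, at an internal vertex, one takes the maximum of the children's numbers and adds 1 exactly when this maximum is attained at least twice; compare with the proof of Lemma <ref> below). For illustration, here is a Python sketch (ours), with a tree encoded as the nested list of its children's subtrees.

def strahler(t):
    # a leaf is the empty list []
    if not t:
        return 0
    vals = [strahler(c) for c in t]
    m = max(vals)
    return m + (1 if vals.count(m) >= 2 else 0)

def perfect_binary(n):
    return [] if n == 0 else [perfect_binary(n - 1)] * 2

# S(T_{2,n}) = n, consistently with the embedding characterization,
# and a path (no branching) has Horton-Strahler number 0.
assert all(strahler(perfect_binary(n)) == n for n in range(6))
assert strahler([[[]]]) == 0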
The definition (<ref>) immediately implies the following. Let t, t^'∈𝕋 be finite.
If there is an embedding ϕ: t → t^', then S(t)≤ S(t^').
We next use (<ref>) and (<ref>) to get an upper bound on S(t) in terms of S(R_m t), where R_m t is defined in (<ref>) (recall that it
is the tree consisting of the first m+1 vertices of t in lexicographic order), and of S(R_m t^⋆), where t^⋆ is the mirror image of t, formally defined as follows.
Let t∈𝕋 and u ∈ t \{∅} be the word (j_1, …, j_n). We set u_|0 = ∅ and u_| p = (j_1, …, j_p) for all 1≤ p≤ n. Then, the mirror image of u is the word u^⋆ = (j_1^⋆, …, j_n^⋆) where
j^⋆_p:= k_u_|p-1 (t)-j_p +1 , 1≤ p ≤ n.
We also set ∅^⋆=∅. Then t^⋆ = { u^⋆ : u∈ t } and it is easy to show that t^⋆∈𝕋. Observe that u↦ u^⋆ is a bijective embedding, so S(t^⋆)=S(t) by (<ref>). We stress that the word u^⋆ depends on the tree t in which u is observed. Nevertheless, this notation should not lead to confusion here because the underlying tree will always be clear from context. Since k_u^⋆ (t^⋆)= k_u(t), (<ref>) implies that if τ is a (μ)-tree whose offspring distribution μ satisfies (<ref>), then so is τ^⋆. Furthermore, since #τ^⋆ = #τ, if n ∈ℕ^* is such that ℙ(#τ = n) > 0, then
we easily check that
under ℙ( · | #τ = n), τ^⋆ (law)= τ.
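In the nested-list encoding of the previous sketch, taking the mirror image simply amounts to reversing the list of children at every vertex; e.g. (our Python sketch):

def mirror(t):
    # j*_p = k_{u|p-1}(t) - j_p + 1 amounts to reversing children lists
    return [mirror(c) for c in reversed(t)]

t = [[], [[], []]]                  # a leaf, then a binary vertex
assert mirror(t) == [[[], []], []]
assert mirror(mirror(t)) == t       # u -> u* is an involution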
The following lemma plays a key role in the proofs of Theorems <ref> and <ref>.
Let t∈𝕋 be finite. Then the following holds true.
(i) For all u∈ t, we set t_≤ u = { v ∈ t : v≤ u}. Then,
S(t) ≤ 1+ max( S(t_≤ u) , max{S(θ_v t) : v ∈ t, ←v≼ u and v > u }).
(ii) Let m∈ℕ be such that 2m≥#t+|t|. Then,
S(t) ≤ 1+max(S(R_m t),S(R_m t^⋆)).
(iii) If τ is a random finite tree such that τ^⋆ has the same law as τ, then for all n, m∈ℕ^*, we get
ℙ(S(τ) ≥ n ) ≤ 2ℙ( S( R_m τ) ≥ n - 1 )+ ℙ( #τ + |τ| > 2m ).
Proof. We first prove (i). If S(t) = 0, then (<ref>) is obviously true.
We next suppose that S(t) = n ≥ 1 and, to simplify notation, we set
⟦∅ , u⟧ ={ v ∈ t : v≼ u} and B= { v ∈ t : ←v≼ u and v > u} .
By (<ref>), there is an embedding ϕ : T_2, n→ t. Observe that ϕ (1) and ϕ (2) cannot both belong to ⟦∅ , u⟧; otherwise, as any two elements of ⟦∅ , u⟧ are comparable for ≼, it would imply ϕ (1)∧ϕ (2) =
ϕ (∅)∈{ϕ(1),ϕ(2)} (by definition of embeddings, since ∅ = (1) ∧ (2)), which contradicts the injectivity of ϕ. Thus, there is j∈{ 1, 2} such that either
ϕ (j) ∈ t_≤ u\⟦∅ , u⟧ or ϕ (j) ∈ t\ t_≤ u.
In the first case, by definition of embeddings, ϕ(j)≼ϕ((j)*v) for all v∈ T_2,n-1, so ϕ((j)*T_2, n-1 ) ⊂ t_≤ u. Then, (<ref>) entails n - 1≤ S(t_≤ u). Suppose next that ϕ (j) ∈ t\ t_≤ u. Since
t\ t_≤ u is the disjoint union of the v ∗ (θ_v t) for v∈ B, there exists v∈ B such that
ϕ ((j)∗ T_2, n-1 ) ⊂ϕ (j) ∗ (θ_ϕ (j) t )⊂ v∗ ( θ_v t ). Then, (<ref>) entails n - 1≤max_v∈ B S(θ_v t ), which completes the proof of (<ref>).
Let us now prove (ii). To that end, we denote by ǔ∈ t the ≤-minimal leaf of t that is a descendant of u (so that u≼ǔ). We also set t_≥ u = { v ∈ t : v ≥ǔ}∪⟦∅, ǔ⟧. By definition, t = t_≤ u∪ t_≥ u and ⟦∅ , u⟧ = t_≤ u∩ t_≥ u. Then, note that for all v∈ B, v ∗ (θ_v t) ⊂ t_≥ u. Moreover,
note that (t_≥ u)^⋆ = t^⋆_≤ǔ^⋆. Therefore, (<ref>) and (i) imply
S(t) ≤ 1+ max( S(t_≤ u), S(t_≥ u) ) =1+ max( S(t_≤ u), S(t^⋆_≤ǔ^⋆) ).
Next, set m+1 = # t_≤ u =#{ v∈ t : v≤ u } and
m^'+ 1 = # t_≥ u = #{ v∈ t^⋆ : v≤ǔ^⋆}.
Observe that R_m t = t_≤ u and R_m^' t^⋆ = t^⋆_≤ǔ^⋆.
Moreover, #t = # t_≤ u + # t_≥ u - #⟦∅ , u⟧ = m+ m' + 1 - |u|.
Thus, m+ m^' < # t + |t|. If
2m≥#t+|t|, then m^' < (1/2) (# t + |t|) ≤ m and R_m^' t^⋆⊂ R_m t^⋆. By (<ref>), we get S(R_m^'t^⋆)≤ S(R_mt^⋆) and we obtain (<ref>) by (<ref>).
Inequality (<ref>) is an easy consequence of (<ref>): we leave the details to the reader.
We next prove the main equation that is satisfied by the tail distribution of the Horton-Strahler number of a Galton-Watson tree.
Let τ be a (μ)-tree whose offspring distribution μ satisfies (<ref>). Recall φ and ψ from (<ref>). For all n∈ℕ, we set q_n= ℙ(S(τ)>n). Then,
1-q_0= μ(0)/(1-μ(1)) and 1-q_n+1=φ(1-q_n)+(q_n-q_n+1)φ'(1-q_n), n∈ℕ.
This equation can be rewritten in terms of ψ as q_n - q_n+1 = ψ (q_n)/ψ' (q_n).
Proof. By Definition <ref>, S(τ) = 0 if and only if k_∅ (τ) = 0 or (k_∅ (τ) = 1 and S(θ_(1)τ ) = 0). Thus, by Definition <ref>, we get ℙ(S(τ) = 0) = μ (0)+ μ(1) ℙ(S(τ) = 0), which gives the first equality in (<ref>).
Let us prove the recursive relation in (<ref>). Let n∈ℕ. By Definition <ref>, S(τ )≤ n+1 if and only if S(θ_uτ) ≤ n for all children u of ∅ in τ (if any), with the possible exception
of one child v, which may satisfy S(θ_vτ)=n+1. More precisely,
1_{S(τ)≤ n+1}= ∏_1≤ j ≤ k_∅(τ) 1_{S(θ_(j)τ)≤ n } + ∑_1≤ j ≤ k_∅(τ) 1_{S(θ_(j)τ)=n+1 }∏_1≤ i≤ k_∅(τ), i≠ j 1_{S(θ_(i)τ)≤ n } .
Taking the expectation yields (<ref>) by Definition <ref> and since φ is the generating function of μ.
Although it seems difficult to solve (<ref>) explicitly in general, it can actually be done for the so-called
α-stable offspring distribution μ_α, α∈ (1,2],
whose generating function is
∀ s∈[0,1], φ_α(s)=s+(1/α)(1-s)^α.
Namely, if τ_α is a (μ_α)-tree, then S(τ_α) is a geometric random variable with parameter 1/α, i.e.
ℙ(S(τ_α)=n)= (1/α)(1 -1/α)^n, n∈ℕ.
This is explicitly proved in Kovchegov & Zaliapin <cit.> and, earlier for the α = 2 case, in Burd, Waymire & Winn <cit.>. See also Duquesne & Winkel <cit.>, who show that
α-stable offspring distributions are the only laws that are invariant under any hereditary pruning.
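The recursion of Lemma <ref> is also easy to iterate numerically; the following Python sketch (ours) does so in the ψ-form and checks the geometric solution above for φ_α.

def tail_sequence(phi, dphi, n_max):
    # q_0 = 1 - mu(0)/(1 - mu(1)) and q_{n+1} = q_n - psi(q_n)/psi'(q_n)
    psi = lambda s: phi(1 - s) - 1 + s
    dpsi = lambda s: 1 - dphi(1 - s)
    q = [1 - phi(0.0) / (1 - dphi(0.0))]  # phi(0) = mu(0), phi'(0) = mu(1)
    for _ in range(n_max):
        q.append(q[-1] - psi(q[-1]) / dpsi(q[-1]))
    return q

alpha = 1.5
phi = lambda s: s + (1 - s) ** alpha / alpha
dphi = lambda s: 1 - (1 - s) ** (alpha - 1)
q = tail_sequence(phi, dphi, 10)
# here psi(s) = s^alpha/alpha, so q_{n+1} = (1 - 1/alpha) q_n exactly:
assert all(abs(qn - (1 - 1 / alpha) ** (n + 1)) < 1e-9 for n, qn in enumerate(q))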
We end this section with a lemma that lists basic properties of ψ, that are useful to analyse (<ref>).
Let μ be a probability on ℕ that satisfies (<ref>). Recall from (<ref>) the definition of ψ. Then, the following holds true.
(i) ψ is nonnegative, increasing, strictly convex, and analytic on (0, 1]. Moreover, ψ' is increasing, concave, and
ψ (0) = ψ' (0) = 0.
(ii) For all s∈ (0, 1], (1/2)ψ'(s)≤ψ(s)/s≤ψ'(s).
Proof. The point (i) is elementary.
The upper bound in (ii) is a consequence of the convexity of ψ; to obtain the lower
bound, we apply the Hermite-Hadamard inequality, which asserts that for all convex functions f: [a, b] →ℝ, it holds that
f ( (a+b)/2 ) ≤ (1/(b - a))∫_a^b f(t) dt ≤ (1/2)( f(a) + f(b) ).
We apply (<ref>) to the concave function ψ' (for which the inequalities are reversed) after observing that ψ (s)/s = (1/s)∫_0^s ψ'(r) dr.
§.§ Tail estimates of joint laws
In this section, we provide four estimates of the tail of the joint distribution of the Horton-Strahler number of a Galton-Watson tree with either its size, its height, or its maximal out-degree. We fix an offspring distribution μ that satisfies (<ref>) and τ stands for a (μ)-tree. Recall from (<ref>) the definition of φ and ψ. Observe that ψ' (1) = 1 -φ' (0) = 1 -μ(1) > 0 by (<ref>).
It is convenient to write
q_n = ℙ(S(τ)>n), n ∈{ -1}∪ℕ,
where q_-1 = 1 obviously.
Let τ be a (μ)-tree where μ satisfies (<ref>).
Then, for all n∈ℕ,
𝔼[#τ 1_{S(τ)= 0 }] = μ(0)/(1-μ (1))^2 and ψ'(q_n-1) 𝔼[ #τ 1_{S(τ)≤ n }] ≤ 2.
Proof. Although 𝔼[#τ ] = ∞, let us first prove that e_n := 𝔼[#τ 1_{S(τ)≤ n}] <∞.
By Definition <ref>, if S(τ)≤ n then S(θ_uτ)≤ n for all children u of ∅ in τ (if any) and S(θ_uτ)= n for at most one child.
This implies that for all m, n∈ℕ,
min(m,#τ) 1_{S(τ)≤ n }≤ 1+∑_i=1^k_∅(τ)min(m,#θ_(i)τ) 1_{S(θ_(i)τ)≤ n-1 } +∑_i=1^k_∅(τ)min( m, #θ_(i)τ) 1_{S(θ_(i)τ)≤ n}∏_1≤ j≤ k_∅(τ), j≠ i 1_{S(θ_(j)τ)≤ n-1},
which makes sense even when n = 0, the first sum in the right-hand side of (<ref>) being null.
To simplify notations, we set e_n (m) = 𝔼[min( m,#τ ) 1_{S(τ)≤ n}] for all integers m≥ 0 and n≥ -1 (with e_-1 (m) = 0).
Taking the expectation in (<ref>) gives e_n (m)≤ 1+e_n-1 (m)+ e_n (m) φ'(1 - q_n-1 ).
It easily implies that e_n ≤(1+ e_n-1)/(1 -φ'(1 - q_n-1 )) because lim_m→∞ e_n(m) = e_n. This recursively entails e_n < ∞ for all integers n ≥ -1.
To simplify notations, we set k_∅ = k_∅ (τ),
τ_j = θ_(j)τ and S_j = S( θ_(j)τ) for all 1≤ j≤ k_∅ (τ).
We first explicitly compute e_0 by observing that #τ 1_{S(τ) = 0} = 1_{ k_∅ =0}+ 1_{ k_∅ =1} (1+ #τ_1) 1_{ S_1 = 0 }. Taking the expectation entails e_0 = μ (0)+ μ(1) (1 - q_0 + e_0). By (<ref>), this becomes
e_0= 𝔼[#τ 1_{S(τ)= 0}] =(1-q_0)/(1-μ(1))= μ(0)/(1-μ (1))^2.
Next, by using the fact that #τ = 1+ ∑_1≤ j≤ k_∅#τ_j and the decomposition (<ref>), we get
#τ 1_{S(τ)≤ n+1 }= 1_{S(τ)≤ n+1 } + ∑_j=1^k_∅#τ_j 1_{S_j ≤ n }∏_1≤ i≤ k_∅, i≠ j 1_{S_i ≤ n } +∑_j=1^k_∅#τ_j 1_{S_j = n+1 }∏_1≤ i≤ k_∅, i≠ j 1_{S_i ≤ n } + ∑_1≤ i, j≤ k_∅, j≠ i#τ_j 1_{S_j ≤ n } 1_{S_i =n+1 }∏_1≤ l≤ k_∅, l≠ i,j 1_{S_l ≤ n}.
Taking the expectation term-by-term, we get
e_n+1=1-q_n+1+e_nφ'(1-q_n)+(e_n+1-e_n)φ'(1-q_n)+e_n(q_n-q_n+1)φ”(1-q_n)
for all n∈ℕ. Recall from (<ref>) that ψ (s) = φ (1 - s) - 1+s. By Lemma <ref>, we find
e_n+1ψ'(q_n)=1-q_n+1+e_n (q_n-q_n+1)ψ”(q_n)= 1-q_n+1+(e_n ψ”(q_n)/ψ'(q_n))∫_0^q_nψ'(s) ds
since ψ(0) = 0 by Lemma <ref> (i). Still from Lemma <ref>, we know that ψ' is concave, so we get ψ'(s) ≤ψ'(q_n) - (q_n - s)ψ”(q_n). Thus,
(ψ”(q_n)/ψ'(q_n))∫_0^q_nψ'(s) ds≤ q_n ψ”(q_n) - (q_nψ”(q_n))^2/(2ψ'(q_n))= (1/2)ψ'(q_n) ( 1 - (1 - x)^2 ) ,
where x= q_nψ”(q_n) /ψ'(q_n) belongs to [0, 1] since ψ' is concave. Thus, we get
x_n+1:= e_n+1ψ'(q_n)≤ 1+ (1/2) e_n ψ'(q_n) ≤ 1+ (1/2) e_n ψ'(q_n-1) = 1+ (1/2) x_n
since (q_n) is decreasing and ψ' is increasing. This entails x_n ≤ 2 -2^-n (1-x_0), which leads to (<ref>)
because x_0 = ψ'(1)e_0 = (1 -μ(1))e_0 = 1 - q_0 < 1 by (<ref>).
For all t∈𝕋, we set
Z(t)=max{ |u| : u ∈ t such that S(θ_u t) = S(t) }.
Note that Z(t) ≤ |t| where recall from (<ref>) that |t| stands for the height of t.
Let τ be a (μ)-tree where μ satisfies (<ref>). Recall ψ from (<ref>) and q_n from (<ref>). Recall from (<ref>) that |τ| stands for the height of τ. Then, ℙ(Z (τ ) ≥ m | S(τ) = n) = (1 - ψ'(q_n-1))^m
for all m, n∈ℕ. This implies that for all λ∈(0,∞),
lim sup_n→∞ ℙ( ψ' (q_n-1)|τ|≤λ | S(τ)= n )≤ 1-e^-λ.
By Definition <ref>, for all integers n≥ 0 and m≥ 1, it holds that
1_{ Z(τ)≥ m ; S(τ)=n }=∑_i=1^k_∅(τ) 1_{ Z(θ_(i)τ)≥ m-1 ; S(θ_(i)τ)=n }∏_1≤ j ≤ k_∅(τ), j≠ i 1_{ S(θ_(j)τ)≤ n-1}.
By taking the expectation, we get
ℙ( Z(τ) ≥ m ; S(τ) = n )= ℙ( Z(τ) ≥ m - 1 ; S(τ) = n ) φ' (1 - q_n-1) ,
which implies the first desired equality, as φ' (1 - q_n-1) = 1 - ψ'(q_n-1). Note that ψ' (q_n-1)→ψ'(0) = 0. Thus, lim_n→∞ ℙ(Z (τ ) ≥λ / ψ' (q_n-1) | S(τ) = n) = e^-λ, which implies (<ref>) since Z(τ) ≤ |τ|.
In <cit.>, Brandenberger, Devroye & Reddad study the Horton-Strahler number of a Galton-Watson tree
conditioned to have exactly n vertices under the assumption that the variance of the offspring distribution
is finite. To that end, they use a (little more than) local convergence of the conditioned Galton-Watson tree
towards the corresponding size-biased tree. We adapt and extend this idea in a more general context using only
the Many-To-One Principle to get the following.
Let τ be a (μ)-tree where μ satisfies (<ref>). Recall ψ from (<ref>) and q_n from (<ref>).
For all integers
n ,m∈ℕ such that
2ψ'(ℙ(|τ| ≥⌊ n/2⌋ )) ≤ψ'(q_m), the following inequality holds:
ℙ(S(τ) ≤ m | |τ| ≥ n) ≤exp( -(1/8) n ψ'(q_m)).
For all n∈ℕ, we denote by φ_n the n-th iterate of φ, with the convention φ_0 =Id. It is classical that ℙ(|τ| < n) = φ_n (0). Then, observe the following: the ≤-smallest vertex of τ at height n is the only vertex u∈τ such that |u|=n and such that, for all v∈τ with v < u that are not ancestors of u,
we have |θ_vτ|+|v|<n.
Moreover, (<ref>) implies that if S(τ)≤ m then S(θ_vτ)≤ m for all v∈τ. Therefore,
1_{ |τ| ≥ n ; S(τ) ≤ m }≤∑_u∈τ 1_{ |u|=n}∏_w∈τ, i≥ 1 : w*(i)≼ u( ∏_j=1^i-1 1_{|θ_w*(j)τ|+|w*(j)|<n}) (∏_j=i+1^k_w(τ) 1_{ S(θ_w*(j)τ)≤ m}) .
Here we adopt the following convention: a product over an empty set of indices is taken equal to 1. Recall Definition <ref> of the size-biased (μ)-tree τ_∞ and recall from (<ref>) in Remark <ref> the joint law of the number of left/right siblings of individuals on the infinite line of descent. We now use the Many-To-One Principle (Proposition <ref>) after taking the expectation in (<ref>) to get
ℙ( |τ| ≥ n ; S(τ) ≤ m )≤∏_p=0^n-1 (φ (1 - q_m) - φ(φ_p(0)))/(1 - q_m - φ_p(0)).
We now use convexity properties of φ and ψ given in Lemma <ref> to get an upper bound of the right-hand side of (<ref>).
First observe that the convexity of φ implies that for all real numbers s,r ∈ [0, 1] such that s ≤ r, we have
(φ(r) -φ(s))/(r - s) ≤ (1 -φ(s))/(1 - s).
Therefore,
∏_p=0^⌊ n/2 ⌋-1 (φ (1 - q_m) - φ(φ_p(0)))/(1 - q_m - φ_p(0))≤∏_p=0^⌊ n/2 ⌋-1 (1 - φ(φ_p(0)))/(1 - φ_p(0))= 1 -
φ_⌊ n/2 ⌋ (0) .
To get an upper bound of (φ (1 - q_m) - φ(φ_p(0)))/(1 - q_m - φ_p(0)) when
p≥⌊ n/2 ⌋, we use the following: let s, r∈ [0, 1] and suppose that 2ψ' (s) ≤ψ'(r); then
(φ(1 - r) - φ(1 - s))/(s - r) = (1 - φ(1 - s))/s + (φ(1 - s)- 1)/s + (φ(1 - r) - φ(1 - s))/(s - r)
= (1 - φ(1 - s))/s +ψ (s)/s - (ψ(r) - ψ(s))/(r - s)
= (1 - φ(1 - s))/s +ψ (s)/s - (1/(r - s))∫_s^r ψ'(x) dx
≤ (1 - φ(1 - s))/s +ψ'(s) - (1/2)(ψ'(r) + ψ'(s) )
≤ (1 - φ(1 - s))/s -(1/4)ψ'(r)
≤ ((1 - φ(1 - s))/s)( 1 - (1/4)ψ'(r) ) .
Here, we have used the Hermite-Hadamard inequality (<ref>) for concave functions, and the two convexity inequalities ψ(s)/s≤ψ'(s) and (1 -φ(1 - s))/s ≤φ'(1)=1.
Assume that m, n ∈ℕ satisfy 2ψ'( 1 - φ_⌊ n/2 ⌋ (0)) ≤ψ' (q_m). Then, for all p≥⌊ n/2 ⌋, we have
2ψ'( 1 - φ_p(0)) ≤ψ' (q_m). Applying (<ref>) successively with s = 1 - φ_p(0) and r = q_m gets us
ℙ(|τ|≥ n ; S(τ)≤ m) ≤ ( 1 - φ_⌊ n/2 ⌋ (0) ) ( 1 - (1/4)ψ'(q_m) )^n/2∏_p=⌊ n/2 ⌋^n-1 (1 - φ_p+1(0))/(1 - φ_p(0))
≤ ( 1 - (1/4)ψ'(q_m) )^n/2 ℙ(|τ| ≥ n),
by (<ref>) and (<ref>). This easily entails (<ref>) since ln (1 - x) ≤ -x for all x∈ [0, 1).
Although Proposition <ref> holds under the sole assumption that μ satisfies (<ref>), its application requires knowing the behavior of the tail of the height |τ| of the (μ)-tree τ. When μ belongs to the domain of attraction of a stable law of index α∈(1,2], Proposition <ref> (iv) provides such information, and then Proposition <ref> entails the following more convenient result.
Let τ be a (μ)-tree where μ satisfies (<ref>) and belongs to the domain of attraction of a stable law of index α∈(1,2]. Recall ψ from (<ref>) and q_n from (<ref>). There exists a constant C_μ∈(0,∞) that only depends on μ such that for all integers
n ,m ∈ℕ, it holds that
ℙ(S(τ) ≤ m | |τ| ≥ n) ≤ C_μexp( -(1/8) n ψ'(q_m)).
To simplify the notations, we set s_n=ℙ(|τ|≥⌊ n/2⌋) for all n∈ℕ. By Proposition <ref> (iv), the constant 2c_μ:=sup_n∈ℕ (n+1)ψ(s_n)/s_n is finite and positive. Then, we get from Lemma <ref> (ii) that 2ψ'(s_n)≤ 8c_μ/(n+1) for all n∈ℕ, by definition of c_μ. Let us set C_μ=e^c_μ∈(0,∞) and let n,m∈ℕ. If ψ'(q_m)≥ 8 c_μ/(n+1), then 2ψ'(s_n)≤ψ'(q_m), so Proposition <ref> yields (<ref>) because 1≤ C_μ. Otherwise, ψ'(q_m)<8c_μ/(n+1) and C_μexp(-(1/8)nψ'(q_m))≥ 1 by choice of C_μ. Thus, (<ref>) clearly holds in that case, which completes the proof.
Our last estimate relies on the same idea as Proposition <ref>: if the maximal out-degree of a Galton-Watson tree is large, then the tree contains several independent disjoint copies of itself.
Let τ be a (μ)-tree where μ satisfies (<ref>). Recall ψ from (<ref>) and q_n from (<ref>). Recall from (<ref>) that Δ (τ) stands for the maximal out-degree of τ.
Then, for all integers n,m≥ 1 such that ℙ(Δ(τ)≥ n)>0, the following inequality holds true:
ℙ(S(τ) ≤ m | Δ(τ) ≥ n)≤ e^-nq_m.
Moreover, for all λ∈ (0, ∞), we also have
𝔼[e^-λ#τ | Δ(τ) ≥ n] ≤𝔼[ e^-λ#τ]^n .
On the event {S(τ) ≤ m ; Δ (τ) ≥ n}, we decompose τ along the ancestral line of the ≤-first vertex u such that k_u (τ) ≥ n and, by (<ref>), we see that S( θ_u ∗ (j)τ) ≤ m for all 1≤ j≤ n.
Hence,
1_{S(τ)≤ m ; Δ(τ)≥ n}≤∑_u∈τ 1_{k_u(τ)≥ n}(∏_ v∈τ: v<u 1_{k_v(τ)<n})(∏_j=1^n 1_{S(θ_u*(j)τ)≤ m}).
We take the expectation and apply the Many-To-One Principle (Proposition <ref>) `forwards and backwards' to get
ℙ(S(τ)≤ m ; Δ(τ)≥ n) ≤𝔼[∑_u∈τ 1_{k_u(τ)≥ n}∏_ v∈τ: v<u 1_{k_v(τ)<n}] ℙ( S(τ) ≤ m)^n
= ℙ( Δ (τ) ≥ n) ( 1 - q_m )^n,
which entails (<ref>) since ln (1 - x) ≤ -x for all x∈ [0, 1). To prove (<ref>), we observe that
e^-λ#τ 1_{Δ(τ)≥ n}≤∑_u∈τ 1_{k_u(τ)≥ n}(∏_ v∈τ: v<u 1_{k_v(τ)<n})(∏_j=1^n e^-λ#θ_u*(j)τ).
Then, as in the previous argument, we take the expectation and apply the Many-To-One Principle (Proposition <ref>) `forwards and backwards' to get the desired result.
§.§ Tail estimates.
In this section, we prove results on the tail of the Horton-Strahler number of a (μ)-tree. These results are expressed in terms of the functions Λ and Υ that are given by
∀ s∈ (0, 1), Υ(s)=∫_s^1 dr/(rlnΛ(r)), where Λ(s)=sψ'(s)/(sψ'(s)-ψ(s)),
and where ψ is defined in (<ref>). Their basic properties are listed in the following lemma.
Let μ be a probability on ℕ that satisfies (<ref>). Then, the functions Λ and Υ in (<ref>) are well-defined,
Λ≥ 2, and Υ is continuous, positive, and decreasing on (0,1).
Proof. The inequality ψ (s)/s≥(1/2)ψ'(s) from Lemma <ref> (ii) entails that Λ(s)≥ 2 for all s∈(0,1); the remaining assertions follow immediately.
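Both functions are straightforward to evaluate numerically from φ; here is a Python sketch (ours, with a crude midpoint rule after the substitution r = e^-x).

import math

def make_Lambda(phi, dphi):
    def Lam(s):
        psi = phi(1 - s) - 1 + s
        dpsi = 1 - dphi(1 - s)
        return s * dpsi / (s * dpsi - psi)
    return Lam

def Upsilon(Lam, s, steps=20000):
    # Upsilon(s) = int_s^1 dr/(r ln Lambda(r))
    #            = int_0^{ln(1/s)} dx / ln Lambda(e^{-x})
    b = -math.log(s)
    h = b / steps
    return h * sum(1 / math.log(Lam(math.exp(-(i + 0.5) * h)))
                   for i in range(steps))

# Sanity check with phi_2 (where Lambda is constant equal to 2):
Lam = make_Lambda(lambda s: s + (1 - s) ** 2 / 2, lambda s: s)
print(Upsilon(Lam, 1e-6), math.log(1e6) / math.log(2))  # both ~ 19.93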
Under the additional assumption that μ belongs to the domain of attraction of a stable law, the following lemma shows
that Λ varies slowly at 0^+.
Let us assume that μ satisfies (<ref>) and that it belongs to
the domain of attraction of a stable law of index α∈[1,2].
(i) If α∈ (1, 2], then lim_s→ 0^+Λ(s) = α/(α-1) and Υ (s) ∼_0^+ log_α/(α-1)(1/s).
(ii) Suppose that α = 1. Let L be such that μ ([n ,∞)) ∼
n^-1 L(n) and ℓ be the slowly varying function given by ℓ (x) = ∫_x^∞ y^-1 L (y) y for all x∈ (0, ∞) (see Proposition <ref>). Then,
Λ(s)∼_0^+ℓ(1/s)/L(1/s) and lim_s→0^+Λ(s)=∞.
Let us prove (i) first. By Proposition <ref>, ψ is regularly varying of index α at 0^+. Then, recall that ψ' is increasing; since ψ (s) = ∫_0^s ψ'(x) dx, the Monotone Density Theorem (recalled in Proposition <ref> (iv)) implies that sψ'(s)∼_0^+αψ (s). This entails lim_s→ 0^+Λ(s) =α/(α-1) and thus Υ (s) ∼_0^+ log_α/(α-1)(1/s).
This completes the proof of (i).
Let us assume that α = 1 to prove (ii). To simplify, we denote by ξ a random variable with distribution μ. Thus, xℙ(ξ>x)∼_∞ L(x). Proposition <ref> (ii) asserts that 𝔼[ξ 1_{ξ>x}] ∼_∞ℓ (x) and ψ (s) ∼_0^+ s ℓ (1/s). The Monotone Density Theorem again asserts that sψ' (s) ∼_0^+ s ℓ (1/s).
We next consider sψ'(s) -ψ(s), s∈ (0,1) being fixed. To simplify notations, we set λ = -ln(1 - s)
and we first observe that
sψ'(s)-ψ(s) = 𝔼[1-e^-λξ-(s/(1-s))ξ e^-λξ]=𝔼[∫_0^ξ( λ e^-λ x-(s/(1-s)) e^-λ x+(sλ/(1-s)) x e^-λ x) dx],
(by Fubini) = (sλ/(1- s))∫_0^∞ℙ(ξ>x) x e^-λ x dx - ( s/(1-s) -λ)∫_0^∞ℙ(ξ>x)e^-λ x dx.
We estimate the second term of the right-hand side by writing
(s/(1-s)+ln(1-s))∫_0^∞ℙ(ξ>x)e^-λ x dx ∼_0^+ (1/2) s^2 𝔼[ξ]=(1/2)s^2.
Next, Karamata's Abelian Theorem for Laplace transform (as recalled in Proposition <ref> (v)) asserts that
λ∫_0^∞ x ℙ(ξ >x) e^-λ x dx ∼_0^+ L(1/λ).
Since λ = -ln(1 - s), (<ref>) and (<ref>) entail that sψ'(s) -ψ (s) ∼_0^+ s L(1/s) thanks to Potter's bound (see Proposition <ref> (i)), which implies the desired estimate for Λ since sψ' (s) ∼_0^+ s ℓ (1/s). Finally, lim_s→0^+Λ(s)=∞ comes from an application of Proposition <ref> (iii).
Let us assume that μ satisfies (<ref>) and that it belongs to
the domain of attraction of a stable law of index α∈[1,2]. Recall the definition of Υ from (<ref>).
Then,
Υ( ℙ(S(τ) > n ) )∼ n.
In particular, if α∈ (1, 2], we get - ln ℙ(S(τ) > n)∼ n ln(α/(α-1)).
Recall from (<ref>) that q_n = ℙ(S(τ) > n ) and note that q_n→ 0. Lemma <ref> asserts that
q_n - q_n+1 = ψ (q_n)/ψ'(q_n), namely q^-1_n+1 = q^-1_n Λ (q_n). To simplify, we set
Q_n = -ln q_n. Therefore,
Q_n+1=Q_n+lnΛ(e^-Q_n), n∈ℕ.
By Proposition <ref>, Λ varies slowly at 0^+. Thus, Potter's bound (recalled in Proposition <ref> (i)) applies to l (y)=Λ (1/y): for any ε∈ (0, 1/10), there is n_ε∈ℕ such that for all n≥ n_ε and for all x∈[Q_n,Q_n+1],
| lnΛ(e^-x)-lnΛ(e^-Q_n)| = |ln( Λ (e^-(x-Q_n) e^-Q_n)/Λ (e^-Q_n)) | ≤ε+ε ( x - Q_n)
≤ ε+εlnΛ(e^-Q_n).
by (<ref>) (we apply Proposition <ref> (i) with
l (y) = Λ (1/y), c= e^ε and λ = e^x-Q_n). This implies
| 1/lnΛ(e^-x) - 1/lnΛ(e^-Q_n)| ≤ (ε/lnΛ(e^-Q_n))·(1+ lnΛ(e^-Q_n))/lnΛ(e^-x)
≤ (ε/lnΛ(e^-Q_n))·(1+ lnΛ(e^-Q_n))/((1-ε)lnΛ(e^-Q_n)-ε)
≤ (ε/lnΛ(e^-Q_n))·((ln 2)^-1+ 1)/( 1-ε-ε (ln 2)^-1)≤ 100ε/lnΛ(e^-Q_n)
because Λ≥ 2 by Lemma <ref> and since ε∈ (0, 1/10). Therefore, by (<ref>), we get
∫_Q_n^Q_n+1| 1/lnΛ(e^-x) - 1/lnΛ(e^-Q_n)| dx≤ 100 ε·(Q_n+1 - Q_n)/lnΛ(e^-Q_n)=100ε.
This implies that for all ε∈ (0, 1/10) and for all n≥ n_ε,
| ∑_n_ε≤ k< n (Q_k+1 - Q_k)/lnΛ(e^-Q_k) - ∫_Q_n_ε^Q_n dx/lnΛ(e^-x)|= | n-n_ε- ∫_Q_n_ε^Q_n dx/lnΛ(e^-x)| ≤ 100ε (n-n_ε).
Hence, lim_n→∞ (1/n)∫_Q_0^Q_n dx/lnΛ(e^-x) = 1,
which proves (<ref>) after the change of variable r=e^-x.
In the 1-stable cases, we provide below three examples of slowly varying functions L that may govern the tail of μ via the estimate nμ ([n , ∞)) ∼ L(n). We use the notations of Proposition <ref>.
(a) Let κ∈ (0, ∞). We consider the case where L(x)= (ln x)^-1-κ for x∈ (1, ∞). Indeed, this function varies slowly at ∞ and is such that ∫^∞ y^-1 L(y) dy < ∞. Clearly, we have ℓ (x) = (1/κ) (ln x)^-κ= (1/κ) L(x) ln x. Therefore, Proposition <ref> (ii) yields
Λ (s) ∼_0^+ (1/κ) ln(1/s) and Υ (s) ∼_0^+ ln(1/s) /lnln(1/s) .
If one sets x_n = -ln ℙ(S(τ) > n), Proposition <ref> asserts that x_n/ln x_n∼ n, which implies that
-ln ℙ(S(τ)>n) ∼ nln n.
(b) Let κ∈ (0, 1). We next consider the case where L(x)= exp (-(ln x)^κ) for x∈ (1, ∞). Again, this function varies slowly at ∞ and verifies ∫^∞ y^-1 L(y) dy < ∞. An integration by parts gives
ℓ (x) = (1/κ)(ln x)^1-κ L(x) + ((1-κ)/κ)∫_x^∞ (L(y)/(y (ln y)^κ)) dy∼_∞ (1/κ)(ln x)^1-κ L(x),
and Proposition <ref> (ii) thus implies that
Λ (s) ∼_0^+ (1/κ)( ln(1/s))^1-κ and Υ (s) ∼_0^+ (1/(1-κ))·ln(1/s) /lnln(1/s) .
If one sets x_n = -ln ℙ(S(τ) > n), Proposition <ref> asserts that x_n/ln x_n∼ (1 -κ ) n, which yields
-ln ℙ(S(τ)>n) ∼ (1 -κ ) nln n.
(c) We finally consider the case where L(x)=exp(-ln x/lnln x) for all x∈ (e^e, ∞). This function still varies slowly and verifies ∫^∞ y^-1 L(y) dy < ∞. We make the change of variable z=ln y/lnln y and then an integration by parts to compute
ℓ(x)∼_∞∫_ln x/lnln x^∞ e^-zln(z) dz ∼_∞ [-e^-zln z]_ln x/lnln x^∞ ∼_∞ L(x) lnln x.
Therefore, Proposition <ref> (ii) implies that
Λ(s) ∼_0^+lnln(1/s) and Υ (s) ∼_0^+ln(1/s)/lnlnln(1/s).
If one sets x_n = -ln ℙ(S(τ) > n), Proposition <ref> asserts that x_n/ lnln x_n ∼ n,
which entails
-ln ℙ(S(τ)>n) ∼ n lnln n.
We use further the following estimates.
Let us assume that μ satisfies (<ref>) and that it belongs to
the domain of attraction of a stable law of index α∈[1,2]. Then,
for all λ∈ (0, ∞) and all κ∈ℝ, the following holds true:
lnln(1/s) = o(Υ(s)) as s→ 0^+, Υ(λ s ln^κ(1/s))∼_0^+Υ(s), and Υ(sΛ(s)^κ)∼_0^+Υ(s).
In particular, Υ is slowly varying at 0^+.
Recall that Λ is slowly varying at 0^+ from Proposition <ref>, so lnΛ(s)=o(ln(1/s)) by Proposition <ref> (i). This allows us to write
lnln(1/s)∼_0^+∫_s^1/2 dr/(rln(1/r)) = o(Υ(s)) as s→0^+.
Next, without loss of generality, we may assume that λ∈ [1, ∞), so that 1≤λln^|κ|(1/s) when s is small enough. Recall from Lemma <ref> that Λ≥ 2. Thus,
|Υ(λ s ln^κ(1/s))-Υ(s)|≤∫_(s/λ)ln^-|κ|(1/s)^λ sln^|κ|(1/s) dr/(rlnΛ(r))≤ (1/ln 2)ln(λ^2 ln^2|κ|(1/s)) = o(Υ(s)) as s→ 0^+,
which is the second estimate.
Then, observe that lim_s→ 0^+sΛ(s)^|κ|=0 because Λ slowly varies at 0^+. Another application of Proposition <ref> (i), together with Λ≥ 2, entails that
lnΛ(r)≥(1/2)lnΛ(s) for all small enough s and for all r∈(0,1) such that sΛ(s)^-|κ|≤ r≤ sΛ(s)^|κ|. Therefore,
|Υ(sΛ(s)^κ)-Υ(s)|≤∫_sΛ(s)^-|κ|^sΛ(s)^|κ| dr/(rlnΛ(r))≤ (2/lnΛ(s))ln(Λ(s)^2|κ|)=4|κ| = o(Υ(s)) as s→ 0^+,
which completes the proof.
§ PROOFS OF THEOREMS <REF>, <REF> AND <REF>
§.§ Proof of Theorem <ref>
In all this section, we assume that the critical offspring distribution μ belongs to the domain of attraction of
an α-stable law with α∈ (1,2], and more precisely, we assume that Proposition <ref> (a-d) hold. Let us set γ=ln(α/(α-1)). Recall from
Propositions <ref> (i) and <ref> that
γΥ(s)∼_0^+ ln(1/s) and -ln ℙ(S(τ)>n)∼γ n.
The proof of Theorem <ref> is separated into two parts: a lower bound and an upper bound.
*Lower bound.
We first prove for all ε∈ (0, 1) that
ℙ( αγ S(τ) ≤ (1 - ε)ln n | #τ=n) ⟶ 0 as n→∞.
The idea is to apply Proposition <ref>, or rather Corollary <ref>, and
to switch the conditioning on the size of τ with the conditioning on the height of τ.
To that end, we use Proposition <ref> (i) to find
lim sup_η→0lim sup_n→∞( |τ| ≤ηna_n | #τ=n)=0.
Therefore, to prove (<ref>), we only need to prove for all η∈ (0, 1) that
lim sup_n→∞( |τ|> ηna_n ; αγ(τ) ≤ (1 -ε)ln n | #τ=n)=0.
We roughly bound the conditional probability by the ratio of the probabilities as follows.
( |τ| > ηna_n ; αγ(τ) ≤ (1 -ε)ln n | #τ = n )
≤( αγ(τ) ≤ (1 -ε)ln n | |τ| > ηna_n ) /(#τ = n) .
Since a_n∼ n^1/α+o(1) by Proposition <ref> and Potter's bound (see Proposition <ref> (i)), Proposition <ref> (iii) entails the following estimate for the denominator of the right-hand side of (<ref>):
ℙ(#τ = n )∼ c_α/(n a_n)∼ n^-1-1/α+o(1).
To bound the right-hand side of (<ref>), we want to apply Corollary <ref>; to that end, we first control ψ' ( ℙ( S(τ) > ((1-ε)/(αγ))ln n )). By Potter's bound and Proposition <ref> (c), we get lnψ(s)∼_0^+αln s. Lemma <ref> (ii) then yields lnψ'(s)∼_0^+ (α-1)ln s. Together with (<ref>), this implies that
ψ' ( ℙ( S(τ) > ((1-ε)/(αγ))ln n ) )∼ n^-(1-ε)(α -1)/α +o(1).
We eventually apply Corollary <ref>, and then (<ref>) and (<ref>), to get
ℙ( S(τ) ≤ ((1-ε)/(αγ))ln n | |τ| >η n/a_n)≤ C_μexp( -(η n/(8 a_n)) ψ'( ℙ( S(τ) > ((1-ε)/(αγ))ln n ) )) ≤exp( -n^ε(α-1)/(2α))
for all sufficiently large n. The previous upper bound combined with (<ref>) and (<ref>) implies (<ref>).
*Upper bound.
We want to prove for all ε∈ (0, 1) that
ℙ( αγ S(τ) ≥ (1 + 2ε)ln n | #τ=n) ⟶ 0 as n→∞.
To that end, we first show that
ℙ( αγ S(τ) ≥ (1+ε)ln n | #τ≥ n) ⟶ 0 as n→∞.
Indeed, we use the following rough bound:
ℙ( αγ S(τ) ≥ (1+ε)ln n | #τ≥ n) ≤ℙ(αγ S(τ)≥ (1+ε)ln n)/ℙ(#τ≥ n).
Then, by Propositions <ref> (iii) and <ref> on the one hand, and by (<ref>) on the other hand, we observe that
ℙ(#τ≥ n) ∼α c_α/a_n∼ n^-1/α+o(1) and ℙ(αγ S(τ) ≥ (1 +ε)ln n)∼ n^-(1+ε)/α +o(1),
which implies (<ref>).
We then recall from (<ref>) that under ℙ( · | #τ = n), τ and its mirror image have the same law. Therefore, (<ref>) in Lemma <ref> applies with m = ⌊ 7n/8 ⌋ and one gets, for all sufficiently large n,
ℙ( S(τ) ≥ ((1 + 2ε)/(αγ))ln n | #τ=n) ≤
2ℙ( S(R_⌊ 7n/8 ⌋τ) ≥ ((1 + ε)/(αγ))ln n | #τ=n)+
ℙ( |τ| > n/2 | #τ=n).
By Proposition <ref> (i), ℙ( |τ| > n/2 | #τ=n) → 0 when n→∞. To control the first term of the right-hand side of the previous inequality, we use Proposition <ref> (ii): to that end, recall from Proposition <ref> (ii) that R_⌊ 7n/8 ⌋τ is a measurable function of the Lukasiewicz path
(W_k(τ))_0≤ k ≤⌊ 7n/8 ⌋. Therefore, for all n∈ℕ and c∈(0,∞),
x_n := ℙ( S(R_⌊ 7n/8 ⌋τ) ≥ ((1 + ε)/(αγ))ln n | #τ=n)
= 𝔼[ 1_{S(R_⌊ 7n/8 ⌋τ) ≥ ((1 + ε)/(αγ))ln n } D^(7/8)_n ( W_⌊ 7n/8 ⌋ (τ)) | #τ≥ n ]
≤𝔼[ 1_{S(τ) ≥ ((1 + ε)/(αγ))ln n } D^(7/8)_n ( W_⌊ 7n/8 ⌋ (τ)) | #τ≥ n ]
≤ c ℙ( S(τ) ≥ ((1 + ε)/(αγ))ln n | #τ≥ n ) + 𝔼[ 1_{ D^(7/8)_n ( W_⌊ 7n/8 ⌋ (τ)) ≥ c } D^(7/8)_n( W_⌊ 7n/8 ⌋ (τ)) | #τ≥ n ]
by (<ref>). We let n→∞ and then c→∞; thus, by (<ref>) and (<ref>) in Proposition <ref>, we get lim sup_n→∞ x_n=0. This implies (<ref>) and readily completes the proof of Theorem <ref>.
§.§ Proof of Theorem <ref>
In all this section, we assume that the critical offspring distribution μ belongs to the domain of attraction of a 1-stable law, and more precisely, we assume that Proposition <ref> (a-c) hold. It is convenient to set
∀ x∈ [0, ∞), q_x = ℙ( S(τ) ≥ x ).
Recall from (<ref>) that Δ (τ) stands for the maximal out-degree of τ, which is the relevant quantity to consider in the 1-stable cases. Note that ℙ(Δ(τ)≥ n)≥μ([n,∞))>0 for all n∈ℕ. We first prove the following estimates.
Under the same assumption and notations as in Theorem <ref>, the following holds true for all ε∈ (0, 1) and all κ∈ (0, ∞).
(i) There is n_0∈ℕ that depends on ε and κ such that for all integers n ≥ n_0,
b_n q_(1+ ε) Υ(1/b_n)≤ (ln n)^-κ and b_n q_(1- ε) Υ(1/b_n)≥ (ln n)^κ.
(ii) Recall from (<ref>) the definition of Λ. Then, lim_n→∞ b_n Λ (1/b_n) q_(1+ ε) Υ(1/b_n) = 0.
(iii) It holds that lim_n→∞ n^κ ℙ( S(τ) ≤ (1 -ε) Υ(1/b_n) | Δ (τ) > (1/2) b_n ) = 0.
Proof. Let us prove (i). By Proposition <ref> and then Proposition <ref>, we have
Υ( q_(1±ε) Υ(1/b_n)) ∼ (1±ε) Υ(1/b_n) ∼ (1±ε) Υ( (ln b_n)^∓κ/b_n).
Then, for all sufficiently large n, Υ( q_(1+ ε) Υ(1/b_n)) >Υ( (ln b_n)^- κ/b_n) and Υ( q_(1 - ε) Υ(1/b_n)) <Υ( (ln b_n)^κ/b_n), which implies that
b_n q_(1+ ε) Υ(1/b_n)≤ (ln b_n)^-κ and b_n q_(1- ε) Υ(1/b_n)≥ (ln b_n)^κ
because Υ is decreasing (see Lemma <ref>). By Proposition <ref> (iii),
b_n varies regularly with index 1 and, by Proposition <ref> (i), ln b_n ∼ln n, which implies the desired result.
We use similar arguments to prove (ii): by Proposition <ref> and then Proposition <ref>, we get
Υ( q_(1+ ε) Υ(1/b_n)) ∼ (1+ ε) Υ(1/b_n) ∼ (1+ ε) Υ( (1/b_n)Λ(1/b_n)^-1-κ),
so Υ( q_(1+ ε) Υ(1/b_n)) >Υ( (1/b_n)Λ(1/b_n)^-1-κ) for all sufficiently large n, which implies that
b_n Λ(1/b_n) q_(1+ ε) Υ(1/b_n)≤Λ(1/b_n)^-κ .
This implies (ii) since lim_x→∞Λ (1/x) = ∞ by Proposition <ref> (ii).
To prove (iii), we first use (i) with e.g. κ = 2. Then, Proposition <ref> implies that
n^κ ℙ( S(τ) ≤ (1 -ε) Υ(1/b_n) | Δ (τ) > (1/2) b_n ) ≤ n^κexp( -(1/2) b_n q_(1- ε) Υ(1/b_n)) ≤ n^κexp( -(1/2) (ln n)^2 )
for all sufficiently large n, which entails the desired result.
The proof of Theorem <ref> is cut into two parts: firstly an upper bound and
secondly a lower bound.
*Upper bound.
We first prove for all ε∈ (0, 1) that
ℙ( S(τ) ≥ (1 + ε) Υ(1/b_n) | #τ≥ n) ⟶ 0 as n→∞.
To that end, we use a direct upper bound and then combine Lemma <ref> with Proposition <ref> (ii):
ℙ( S(τ) ≥ (1 +ε) Υ(1/b_n) | #τ≥ n) ≤ q_(1+ ε) Υ (1/b_n)/ℙ(#τ≥ n)∼Λ(1/b_n) b_n q_(1+ ε) Υ (1/b_n).
Next, we use Lemma <ref> (ii) to conclude.
*Lower bound.
We next prove for all ε∈ (0, 1) that
ℙ( S(τ) ≤ (1 - ε) Υ(1/b_n) | #τ≥ n) ⟶ 0 as n→∞.
To that end, we first prove that
ℙ( #τ < n | Δ(τ) ≥ 2 b_n ) ⟶ 0 as n→∞ .
Indeed, by (<ref>) in Proposition <ref> combined with the Markov inequality, for all λ∈ (0, ∞), it holds that
ℙ( #τ < n | Δ(τ) ≥ 2 b_n ) ≤ e^λ𝔼[ e^-(λ/n)#τ]^⌊ 2 b_n⌋.
Recall from Proposition <ref> (iv) that 𝔼[ exp( -(λ/n)#τ) ]^⌊ 2 b_n⌋ = 𝔼[ exp( -(λ/n)𝙷_⌊ 2b_n⌋) ]. The convergence (<ref>) with x = 2 then implies that lim_n→∞𝔼[ e^-(λ/n)#τ]^⌊ 2 b_n⌋ = e^-2λ. Thus,
lim sup_n→∞ ℙ( #τ < n | Δ(τ) ≥ 2b_n ) ≤ e^-λ ⟶ 0 as λ→∞,
which entails (<ref>).
We next prove that for all c ∈ (0, ∞),
lim sup_n→∞ ℙ(Δ(τ) ≥ cb_n)/ℙ(#τ≥ n) ≤ 4/c.
Indeed, we first prove the case where c = 2. Observe that
u(n):= ℙ(Δ(τ) ≥ 2b_n)/ℙ(#τ≥ n) = ℙ(Δ(τ) ≥ 2b_n | #τ≥ n ) + u(n) ℙ( #τ < n | Δ(τ) ≥ 2 b_n ) .
By (<ref>), for all sufficiently large n, we get ℙ( #τ < n | Δ(τ)≥ 2 b_n ) ≤ 1/2. It therefore implies u(n) ≤ 2 ℙ(Δ(τ)≥ 2b_n | #τ≥ n )→ 2 ℙ(J ≥ 2) = 1 by (<ref>) in Proposition <ref>. This proves (<ref>) when c = 2. Then, recall from Proposition <ref> (iii) that (b_n) is 1-regularly varying. Then, for all sufficiently large n, we get cb_n ≥ 2b_⌊ cn/4 ⌋ and thus
ℙ(Δ(τ) ≥ c b_n)/ℙ(#τ≥ n)≤ℙ(Δ(τ) ≥ 2b_⌊ cn/4 ⌋)/ℙ(#τ≥ n) = u(⌊ cn/4⌋) ℙ(#τ≥⌊ cn/4⌋)/ℙ(#τ≥ n) .
This implies the desired result since we know from Lemma <ref> that the sequence n↦ℙ(#τ≥ n) varies regularly with exponent -1,
and since lim sup_n→∞ u(⌊ cn/4⌋) ≤ 1 as proved above.
We complete the proof of (<ref>) as follows:
v_n := ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) | #τ≥ n)
≤ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) ; Δ(τ) > (1/2) b_n)/ℙ( #τ≥ n) + ℙ( Δ(τ) ≤ (1/2) b_n | #τ≥ n)
≤ (ℙ( Δ(τ) > (1/2) b_n)/ℙ( #τ≥ n)) ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) | Δ(τ) > (1/2) b_n) + ℙ( Δ(τ) ≤ (1/2) b_n | #τ≥ n) .
The first term of the above right-hand side converges to 0 by (<ref>) with c=1/2 and by Lemma <ref> (iii). Moreover, (<ref>) in Proposition <ref> asserts that lim_n→∞ℙ( Δ(τ) ≤ (1/2) b_n | #τ≥ n)=0. This completes the proof of (<ref>), and of Theorem <ref>.
§.§ Proof of Theorem <ref>
Throughout this section, we assume there exists a function L that varies slowly at ∞ such that μ (n) ∼ n^-2 L(n).
This implies that μ ([n, ∞)) ∼ n^-1 L(n) so, by Proposition <ref>, μ
belongs to the domain of attraction of a 1-stable law.
The proof of Theorem <ref> is separated into two parts: firstly a lower bound and
secondly an upper bound.
*Lower bound.
We prove for all ε∈ (0, 1) that
ℙ( S(τ) ≤ (1 -ε) Υ(1/b_n) | #τ = n) ⟶ 0 as n→∞.
Indeed, observe that
v_n := ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) | #τ = n)
≤ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) ; Δ(τ) > (1/2) b_n)/ℙ( #τ = n) + ℙ( Δ(τ) ≤ (1/2) b_n | #τ = n)
≤ (ℙ( Δ(τ) > (1/2) b_n)/ (n ℙ( #τ = n))) · n ℙ( S(τ) ≤ (1 -ε)Υ( 1/b_n) | Δ(τ) > (1/2) b_n) + ℙ( Δ(τ) ≤ (1/2) b_n | #τ = n) .
Then recall from (<ref>) in Lemma <ref> that n ℙ( #τ = n)∼ℙ( #τ≥ n). Thus, by (<ref>), we get
lim sup_n→∞ ℙ( Δ(τ) > (1/2) b_n)/ (n ℙ( #τ = n)) ≤ 8.
By Lemma <ref> (iii), lim_n→∞ n ℙ( S(τ) ≤ (1 - ε)Υ( 1/b_n) | Δ(τ) > (1/2) b_n) = 0. Then, (<ref>) in Proposition <ref> applies and asserts that lim_n→∞ℙ( Δ(τ) ≤ (1/2) b_n | #τ = n) = 0, which finally implies (<ref>).
*Upper bound.
We finally show for all ε∈ (0, 1) that
ℙ( S(τ) ≥ (1 +ε) Υ(1/b_n) | #τ = n) ⟶ 0 as n→∞.
To prove (<ref>), we use the result of Kortchemski & Richier <cit.> that is recalled in Proposition <ref> and that shows that the law of the Lukasiewicz path (W_j(τ))_0≤ j ≤ n of τ under ( · | #τ = n) is close in total variation distance to the law of (Z^(n)_j)_0≤ j ≤ n as defined in (<ref>).
First, let us briefly recall the definition of Z^(n): let (W_n)_n∈ℕ be a left-continuous random walk starting at 0 and whose jump distribution is given by (<ref>). We recall from (<ref>) the two notations I_n = - min_ 0≤ j ≤ n-1 W_j and σ_n = inf{0≤ k≤ n - 1 : W_k = - I_n }. Then,
Z_j^(n)= W_σ_n +j + I_n if 0≤ j < n -σ_n and Z_j^(n)= I_n - 1 + W_j-(n-σ_n) if n ≥ j ≥ n -σ_n.
Next, we interpret Proposition <ref> in terms of trees, i.e. we view Z^(n) as the Lukasiewicz path of a random tree τ^(n). Namely, observe that Z_0^(n) = 0, Z_n^(n) = -1, and the other values of Z^(n) are nonnegative. Moreover, except possibly at the cutting time n -σ_n - 1, when Z^(n)_n-σ_n - Z^(n)_n-σ_n-1 = -1 - W_n-1, the jumps of Z^(n) are greater than or equal to - 1. Consequently, if W_n-1≤ 0 then Proposition <ref> (i) applies and Z^(n) is the Lukasiewicz path associated with a tree τ^(n). If W_n-1 > 0 then we take τ^(n) equal to the star-tree with n - 1 leaves, which we denote by ⊛_n (or any tree with n vertices).
If W_n-1≤ 0 then (W_j(τ^(n)))_0≤ j ≤ n = (Z^(n)_j)_0≤ j ≤ n and
if W_n-1 > 0 then τ^(n) = ⊛_n.
Moreover, observe that Proposition <ref> (c) easily implies that ℙ( τ^(n) = ⊛_n ) = ℙ(W_n-1 > 0)→ 0.
Thus, thanks to Proposition <ref>, in order to prove (<ref>), we only need to show
ℙ( τ^(n)≠⊛_n ; S(τ^(n)) ≥ (1 +ε) Υ(1/b_n)) ⟶ 0 as n→∞.
To that end, we discuss a decomposition of τ^(n) at the `cutting time' n -σ_n - 1 and
we recall from (<ref>) in Proposition <ref> the existence of an i.i.d. sequence (τ_p)_p∈ℕ of (μ)-trees given by
∀ p∈ℕ, (p+W_𝙷_p +j )_0≤ j≤𝙷_p+1-𝙷_p = ( W_j (τ_p) )_0≤ j ≤#τ_p,
where 𝙷_p = inf{j ∈ℕ: W_j = -p}. Let us denote by u^(n)_0 = ∅ < u^(n)_1 < … < u^(n)_n-1 the vertices of τ^(n) listed in lexicographic order. From the definition of Z^(n) and thanks to Proposition <ref> (ii), we first derive that if W_n-1≤ 0 then
R_n-σ_n -1( τ^(n))= R_n-σ_n -1( τ_I_n) and k_u^(n)_n-σ_n -1 (τ^(n) )= -W_n-1,
where we recall from (<ref>) in Proposition <ref> that R_m (t)=R_m t stands for the tree consisting of the first m+1 vertices of t taken in lexicographic order. We now look at the subtrees that are grafted either at u^(n)_n-σ_n-1 or to the right of its ancestral line: namely, the subtrees that are grafted at the following set of vertices
B = { v ∈τ^(n) : ←v≼ u^(n)_n-σ_n-1 and
v > u^(n)_n-σ_n-1}.
If we denote by v(0) < v(1) < … < v(#B-1) the vertices of B listed in lexicographic order, then by definition of Z^(n) and by definition of the trees τ_p as recalled above, we get that if W_n-1≤ 0 then
θ_v(p)τ^(n) = τ_p , 0 ≤ p ≤ I_n-1=#B-1.
To prove (<ref>), we then use (<ref>) in
Lemma <ref> (i), which asserts, on the event { W_n-1≤ 0 }, that
S( τ^(n)) ≤ 1 +max( S( R_n-σ_n -1( τ^(n))), max_v∈ B S( θ_v τ^(n)) ).
Then, the identities (<ref>) and (<ref>), together with the monotonicity property (<ref>) of S, yield that
S( τ^(n)) ≤ 1+ max_0≤ p ≤ I_n S(τ_p ).
Therefore, we obtain the following:
x_n := ℙ( τ^(n) ≠ ⋆_n ; 𝒮(τ^(n)) ≥ (1 +ε) Υ(1/b_n) )
≤ ℙ( max_0≤ p ≤⌊ 2b_n ⌋ 𝒮(τ_p ) ≥ (1 +ε) Υ(1/b_n) - 1 ) + ℙ( I_n > 2b_n )
≤ (2b_n+1) ℙ( 𝒮(τ) ≥ (1 +ε) Υ(1/b_n) - 1 ) + ℙ( (1/n) 𝙷_⌊ 2b_n ⌋ ≤ 1 ),
since the τ_p are GW(μ)-trees. Then, recall from (<ref>) that (1/n) 𝙷_⌊ 2b_n ⌋ → 2 in probability, which implies that ℙ( (1/n) 𝙷_⌊ 2b_n ⌋ ≤ 1 ) → 0. Then,
Lemma <ref> (i) asserts that b_n ℙ( 𝒮(τ) ≥ (1 +ε) Υ(1/b_n) - 1 ) → 0, which finally implies (<ref>) and completes the proof of Theorem <ref>.
We conclude this article by providing an asymptotic equivalent of the rescaling sequence Υ(1/b_n), which gives the order of 𝒮(τ) under ℙ( · | #τ=n), in the three cases considered in Example <ref>.
We recall from Proposition <ref> (iii) that b_n varies regularly with index 1, which implies ln b_n ∼ln n by Proposition <ref> (i).
(a) Let κ∈ (0, ∞). We consider the case where L(x) = (ln x)^-1-κ for all x ∈ (0, ∞).
Recall from Example <ref> that Υ(s) ∼_0^+ (ln(1/s))/(lnln(1/s)). Therefore, we easily get
Υ(1/b_n) ∼ ln n / lnln n .
(b) Let κ∈ (0, 1). We next consider the case where L(x)= exp (-(ln x)^κ) for all x ∈ (0, ∞).
Recall from Example <ref> that Υ(s) ∼_0^+ (1/(1-κ)) (ln(1/s))/(lnln(1/s)). Therefore, we easily get
Υ(1/b_n) ∼ (1/(1 -κ)) · ln n / lnln n .
(c) Let us finally consider the case where L(x) = exp(-ln x/lnln x) for all x∈ (e^e, ∞).
Recall from Example <ref> that Υ(s) ∼_0^+ (ln(1/s))/(lnlnln(1/s)).
Therefore, we easily get
Υ(1/b_n) ∼ ln n / lnlnln n .
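To get a concrete feel for how slowly these scales grow, and how the three cases separate, one can evaluate them numerically. This is a quick illustrative check; the value κ = 1/2 in case (b) is an arbitrary choice.

import math

kappa = 0.5                                   # example value for case (b)
for n in (10**3, 10**6, 10**9):
    ln = math.log(n)
    a = ln / math.log(ln)                     # case (a): ln n / lnln n
    b = a / (1.0 - kappa)                     # case (b): extra 1/(1 - kappa) factor
    c = ln / math.log(math.log(ln))           # case (c): ln n / lnlnln n
    print(f"n = 1e{round(math.log10(n))}: (a) {a:.1f}, (b) {b:.1f}, (c) {c:.1f}")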
*Acknowledgements
I am deeply indebted to my Ph.D. advisor Thomas Duquesne for introducing me to the Horton-Strahler number, for suggesting several problems about it to me, and for many insightful discussions. I am especially grateful for his help in proving Proposition <ref> and in improving the quality of this paper. I warmly thank Guillaume Boutoille and Yoan Tardy for a stimulating discussion about the correct order of magnitude that should appear in the Cauchy regime. Many thanks are due to Quentin Berger for feedback about the proof of the estimate (<ref>). I also thank Igor Kortchemski for some precisions about his work.
|
http://arxiv.org/abs/2307.06316v1 | 20230712172644 | Anisotropic in-plane heat transport of Kitaev magnet Na$_2$Co$_2$TeO$_6$ | [
"Shuangkui Guang",
"Na Li",
"Qing Huang",
"Ke Xia",
"Yiyan Wang",
"Hui Liang",
"Yan Sun",
"Qiuju Li",
"Xia Zhao",
"Rui Leonard Luo",
"Gang Chen",
"Haidong Zhou",
"Xuefeng Sun"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Department of Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230601, People's Republic of China
Department of Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996-1200, USA
Department of Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230601, People's Republic of China
Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230601, People's Republic of China
Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230601, People's Republic of China
School of Physics and Optoelectronics, Anhui University, Hefei, Anhui 230061, People's Republic of China
School of Physics Sciences, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Department of Physics and HKU-UCAS Joint Institute for Theoretical and Computational Physics at Hong Kong, The University of Hong Kong, Hong Kong, People's Republic of China
[email protected]
Department of Physics and HKU-UCAS Joint Institute for Theoretical and Computational Physics at Hong Kong, The University of Hong Kong, Hong Kong, People's Republic of China
[email protected]
Department of Physics and Astronomy, University of Tennessee, Knoxville, Tennessee 37996-1200, USA
[email protected]
Institute of Physical Science and Information Technology, Anhui University, Hefei, Anhui 230601, People's Republic of China
Department of Physics, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, Jiangsu 210093, People's Republic of China
We report a study of the low-temperature heat transport of the Kitaev magnet Na_2Co_2TeO_6, with the heat current and magnetic fields applied within the honeycomb spin layer (the ab plane). The zero-field thermal conductivities κ^a_xx and κ^a*_xx display similar temperature dependence and a small difference in magnitude, whereas their dependence on magnetic fields parallel to the heat current is quite different and is related to the field-induced magnetic transitions. The κ^a_xx(B) data for B ∥ a at very low temperatures show an anomaly at 10.25–10.5 T, which reveals an unexplored magnetic transition. The planar thermal Hall conductivities κ^a_xy and κ^a*_xy show very weak signals at low fields and rather large values with sign changes at high fields. This may point to a possible magnetic structure transition or a change of the magnon band topology that induces a radical change of the magnon Berry curvature distribution before the system enters the spin-polarized state. These results put clear constraints on the high-field phase and the theoretical models for Na_2Co_2TeO_6.
Anisotropic in-plane heat transport of Kitaev magnet Na_2Co_2TeO_6
Xuefeng Sun
August 12, 2023
==================================================================
§ INTRODUCTION
The Kitaev model is an exactly solvable spin model that supports a Z_2 spin liquid with gapless Majorana excitations <cit.>. In the presence of magnetic fields, the Kitaev model can further support a gapped spin liquid with a gapless, chiral Majorana mode that gives rise to a half-quantized thermal Hall conductance. The presence of Kitaev interactions in spin-orbit-coupled J=1/2 honeycomb magnets with 5d iridium ions (Ir^4+) <cit.> and 4d ruthenium ions (Ru^4+) was suggested later <cit.>. Unfortunately, most of the relevant honeycomb iridates and even α-RuCl_3 are magnetically ordered at low temperatures <cit.>. α-RuCl_3 in magnetic fields seems to support half-quantized thermal Hall transport, which may be compatible with the gapped Kitaev spin liquid <cit.>. Nevertheless, the actual ground state of α-RuCl_3 in magnetic fields still needs further scrutiny. On the materials side, there have been efforts to propose candidate Kitaev materials beyond the 4d/5d context, including 4f rare-earth Kitaev magnets and 3d transition metal Kitaev magnets. Along the line of the 3d transition metal Kitaev magnets, several honeycomb cobalt compounds such as Na_2Co_2TeO_6, Na_3Co_2SbO_6, and BaCo_2(AsO_4)_2 were proposed and studied <cit.>.
The question about the Kitaev interactions in these Co-based materials, especially for BaCo_2(AsO_4)_2, has been raised in Refs. Halloran, Das. It was found that the experimental results based on neutron scattering in BaCo_2(AsO_4)_2 can be consistently accounted for by an XXZ model with first- and third-neighbor couplings, but not by the more anisotropic spin model with Kitaev interactions and other pseudo-dipole interactions. It was further remarked that the frustration in BaCo_2(AsO_4)_2 arises mainly from the competing first- and third-neighbor interactions, rather than from the Kitaev-related anisotropic interactions. Likewise, one could raise the same question for the honeycomb cobaltate Na_2Co_2TeO_6, since no consensus has been reached concerning the microscopic models <cit.>. Here, with our comprehensive thermal transport measurements, we are able to address this question, and the experimental results point to more anisotropic spin interactions.
In this work, we study the in-plane thermal conductivity and thermal Hall conductivity of Na_2Co_2TeO_6 with both the heat current and magnetic field along the a or a* axis. The magnetic field dependence of both the thermal conductivity and the thermal Hall conductivity displays in-plane anisotropic behaviors. This is a strong indication of the intrinsically anisotropic nature of the spin interactions, as expected for a Kitaev material. In addition, the low-temperature κ^a_xx(B) with an a-axis field exhibits an anomaly at 10.25–10.5 T that is argued to be associated with an unexplored magnetic structure transition. We further provide physical reasoning about the Berry curvature properties based on the behaviors of κ_xx and κ_xy. This way of reasoning may well generalize to other quantum magnets with cooperative responses of κ_xx and κ_xy to external magnetic fields.
§ EXPERIMENTS
High-quality single crystals of Na_2Co_2TeO_6 were grown by a flux method as previously reported <cit.>. Two thin plate-shaped crystals with sizes of 6.30 × 2.17 × 0.053 mm^3 and 5.20 × 2.10 × 0.047 mm^3 were used for the heat transport measurements. The heat current and magnetic field were applied along the longest dimension of these two samples, which is the a-axis and the a*-axis direction, respectively. The longitudinal thermal conductivity and thermal Hall conductivity were measured simultaneously by using the standard steady-state technique with “one heater, three thermometers” <cit.>. The longitudinal and transverse temperature gradients were measured by three in-situ calibrated RuO_2 thermometers. The measurements were carried out in a ^3He refrigerator equipped with a 14 T magnet.
§ RESULTS AND DISCUSSIONS
Figure 1 shows the temperature dependence of the longitudinal thermal conductivities κ^a_xx and κ^a*_xx, measured in zero field with the heat current along the a axis and the a* axis, respectively. Na_2Co_2TeO_6 is known to have a zigzag <cit.> or triple-q <cit.> antiferromagnetic (AF) order below the Néel temperature (T_N ≈ 27 K), followed by two possible spin re-orientation transitions around 16 K and 6 K. Although the κ_xx(T) data show no obvious anomaly around 27 K and 16 K, similar to the reported results <cit.>, a slope change below 6 K can be related to the spin re-orientation. At subkelvin temperatures, κ_xx(T) exhibits a roughly T^1.2 behavior that is significantly weaker than the expected T^3 behavior of phonons and indicates the existence of magnetic scattering of phonons. Assuming a purely phononic contribution to the thermal conductivity, the phonon mean free path can be estimated. As shown in the inset to Fig. 1, the ratios of the phonon mean free path to the averaged sample width are much smaller than 1 even at T < 1 K, indicating the existence of a microscopic phonon scattering effect at such low temperatures. It is notable that at zero field κ^a_xx and κ^a*_xx have similar temperature dependence with a slight difference in magnitude, whereas they exhibit rather different dependence on in-plane magnetic fields.
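As an illustration of the kinetic estimate behind the inset, a minimal sketch is given below. The formula ℓ = 3κ/(C_ph v_s) with a Debye-like phonon specific heat C_ph = βT^3 is standard, but the numerical values of β, v_s, and the κ(T) prefactor used here are placeholders, not the parameters of this work; only the sample cross-section is taken from the text.

import numpy as np

def phonon_mean_free_path(kappa, beta, v_s, T):
    # kinetic estimate l = 3*kappa/(C_ph*v_s), with C_ph = beta*T**3 the
    # phononic specific heat per unit volume (SI units throughout)
    C_ph = beta * T**3
    return 3.0 * kappa / (C_ph * v_s)

T = np.array([0.3, 0.5, 1.0])                    # K
kappa = 2e-3 * T**1.2                            # W/(K m), mimicking the ~T^1.2 law
l = phonon_mean_free_path(kappa, beta=1.0, v_s=3e3, T=T)
W = 2.0 * np.sqrt(2.17e-3 * 0.053e-3 / np.pi)    # averaged width of the a-axis sample (m)
print(l / W)                                     # ratios << 1 signal microscopic scattering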
Figures 2(a-d) show the magnetic field dependence of κ^a*_xx with the heat current and magnetic field along the a* axis at several selected temperatures from 0.375 to 3.2 K. It is found that κ^a*_xx(B) shows a sharp jump at B_a*1∼ 6.25 T with increasing field, and subsequently exhibits two minima at B_a*2∼ 7.5 T and B_a*3∼ 10 T before saturation. According to previous magnetization results <cit.>, B_a*1 corresponds to a first-order magnetic transition and shifts to lower field with increasing temperature. Meanwhile, a large hysteresis occurs in κ^a*_xx(B) at B < B_a*1. The rather sharp minima of κ^a*_xx(B) at B_a*2 and B_a*3 indicate two magnetic transitions, which is compatible with recent inelastic neutron scattering measurements that revealed a field-induced intermediate magnetic state with partially polarized spins between 7.5 T and 10 T for magnetic fields along the a* axis <cit.>. All these results are consistent with our previous data measured at lower temperatures down to millikelvin temperatures <cit.>.
Figures 2(e-h) show the planar thermal Hall conductivity κ^a*_xy as a function of magnetic field at the above selected temperatures. It should be pointed out that, since the planar thermal Hall effect of Na_2Co_2TeO_6 was found to be rather weak, each data point displayed in these figures is the average of more than five measurements. For all selected temperatures, there is no discernible κ^a*_xy signal at B < B_a*1, and a sizeable thermal Hall signal appears at higher magnetic fields. At the higher temperatures of 2.25 K and 3.2 K, there is almost no signal between B_a*1 and B_a*3, but a sign change or weak drop appears at B_a*3 and there is a negative signal at B > B_a*3. In contrast, at the lower temperatures of 0.375 K and 0.78 K, there is no discernible signal between B_a*1 and B_a*2; a drop occurs at B_a*2 to reach a negative signal, followed by a sign change to a positive signal before arriving at B_a*3; at B > B_a*3, the behavior is the same as that at higher temperatures. Moreover, an obvious feature was observed in the high-field region above B_a*3: the κ^a*_xy signal first shows a peak with positive sign and then changes sign at higher fields. This high-field feature is shared by the data at all selected temperatures, indicating an intrinsic narrow region with positive κ_xy before the system enters the high-field fully polarized state.
The planar thermal Hall conductivity and thermal conductivity were recently studied in the higher-temperature region of 2–30 K by Takeda et al. <cit.>. It is interesting to compare their results with ours. First, our κ^a*_xx(B) data display much sharper minima at B_a*2 and B_a*3, indicating better sample quality. Second, although their low-field κ^a*_xy(B) data are also almost zero, they show a sharp minimum with negative values at B_a*1. This feature is absent in our results. Third, their high-field κ^a*_xy(B) data at 2 and 5 K also exhibit rather complicated behavior with sign changes, which is similar to ours.
To clarify the most significant difference between the two groups' κ^a*_xy(B) data at B_a*1, we performed very careful checks on the measurements and noticed that it may be related to the rather long relaxation time of the temperature gradients at this critical field. For this check, particularly at B_a*1 and at 0.78 K, the temperature gradients were measured with different waiting times after applying the heat current. At this temperature and in 0–14 T fields, several minutes are usually enough to establish the equilibrium heat-flow state. However, at B_a*1 we found that the relaxation time is much longer. First, we carried out a set of measurements with a long waiting time. Figure 3(a) shows the κ^a*_xx(B) data for +B and -B fields and Fig. 3(e) shows the corresponding κ^a*_xy(B) data, for which all the temperature gradients were recorded with a long enough waiting time (> 30 minutes) after applying the heat current. Figures 3(b) and 3(f) show the longitudinal and transverse temperature gradients, which are stable enough, fluctuating only weakly around flat baselines. In this case, the κ^a*_xx data at -6.5 T (measured several times) are indistinguishable from those at 6.5 T; accordingly, the obtained κ^a*_xy data at 6.5 T are small and the κ^a*_xy(B) curve does not show any anomaly at this critical field. Second, at -6.5 T the temperature gradients were recorded with the usual waiting time (several minutes) after applying the heat current, as shown in Figs. 3(d) and 3(h). In this case, the temperature gradients are not stable enough, although the shifts of the temperature gradients are very weak. Accordingly, the κ^a*_xx data at -6.5 T are slightly smaller than those at 6.5 T and the obtained κ^a*_xy data show rather large negative values, as shown in Figs. 3(c) and 3(g). The latter measurements are actually similar to the dip-like feature at B_a*1 observed by Takeda et al. <cit.>. Thus, it can be concluded that this dip-like feature is not intrinsic.
The planar thermal Hall conductivity and thermal conductivity were also measured with the magnetic field and heat current along the a axis. As shown in Figs. 4(a-d), the longitudinal thermal conductivity κ^a_xx(B) exhibits behaviors similar to the previous ultralow-temperature data <cit.>. With increasing field, κ^a_xx first increases and reaches a maximum at ∼ 7.5 T, and then decreases to a minimum at B_a1∼ 9.75 T, which is close to the spin polarization transition for B ∥ a. Above B_a1, κ^a_xx quickly increases and finally saturates in the polarized state. However, another kink-like anomaly appears in κ^a_xx(B) at 10.25–10.5 T, which is marked as B_a2. This anomaly is likely due to an unexplored field-induced magnetic transition. Since this anomaly becomes indistinguishable at temperatures above 2.25 K, it could not be probed by earlier magnetization studies <cit.>. In addition, there is an obvious hysteresis between the increasing- and decreasing-field data for B < B_a1. This unusual hysteresis behavior for B < B_a1 may be related to magnetic domains, which can scatter phonons in Na_2Co_2TeO_6.
Figures 4(e-h) show the planar thermal Hall conductivity κ^a_xy as a function of magnetic field at the selected temperatures. The data at B < B_a1 are rather similar at all temperatures: there is a weak positive signal at low fields and a negative signal at intermediate fields. A sign change of κ^a_xy seems to occur at the peak field of κ^a_xx(B). A peak-like behavior appears for B_a1 < B < B_a2 at low temperatures and disappears at 2.25 and 3.2 K. A rather large signal was observed for B > B_a2: there is a large positive signal accompanied by a sign change at low temperatures, which evolves into a large negative dip at 3.2 K. This high-field feature is rather similar between the present data and Takeda et al.'s data at low temperatures <cit.>. However, their data at high temperatures show a positive signal for B > B_a2.
One of our main experimental results is that both the a*- and a-axis magnetic fields can induce a planar thermal Hall effect in Na_2Co_2TeO_6 at low temperatures. This essentially reproduces the previous results of Takeda et al. <cit.>, which were explained as originating from topological magnons. However, there are some obvious differences between the two groups' κ_xy(B) results. First, Takeda et al.'s results indicated that the planar Hall effect is closely correlated to the field-induced magnetic transitions. In contrast, our data indicate that the planar thermal Hall conductivity does not show up at the magnetic transition fields for either B ∥ a* or B ∥ a. In particular, the most striking feature in Takeda et al.'s data is the sharp dip of the κ^a*_xy(B) data at B_a*1, which, however, was shown to be an artifact by our careful check of the relaxation time. This kind of difference suggests that the origin of the planar thermal Hall effect in Na_2Co_2TeO_6 needs further experimental investigation. Second, although both groups' data confirmed the quite large planar thermal Hall conductivity at high fields, there are rather clear differences in details such as the sign changes of κ_xy(B). Another main result is that our thermal conductivity data κ_xx(B) display much sharper anomalies at the field-induced magnetic transitions, which allows us to observe an additional kink-like anomaly in the κ_xx(B) curves with B ∥ a. This indicates that there is likely another unexplored field-induced magnetic transition at B_a2. Therefore, the magnetic phase diagram of Na_2Co_2TeO_6 may be more complicated than previously known.
Here we discuss further what the experimental results of κ_xx and κ_xy for fields along different in-plane directions indicate. Unlike κ_xx, κ_xy reveals the wavefunction properties or Berry curvature properties of the relevant heat carriers, such as the magnetic excitations and/or the phonon modes. While κ_xy is also sensitive, like κ_xx, to the density of the carriers' states and their mutual scattering, the Berry curvature properties can bring extra features to κ_xy, such as large changes in magnitude and sign reversals. This phenomenon occurs when the bands of the excitations experience radical changes in their Berry curvature distributions; for example, when the excited magnon bands touch and reopen with a significant change of their Chern numbers. Actually, this change does not have to be associated with a magnetic structure transition: the magnetic structure can be varied smoothly by the external magnetic field without any phase transition, while the change of the magnon band structure topology can occur with a large change in κ_xy <cit.>. Thus, κ_xy provides more information about the properties of the excitations in the system. In Figs. 2 and 4, as we have previously mentioned, at the magnetic transitions that are indicated in κ_xx there are associated changes in κ_xy if the signal is discernible. Nevertheless, κ_xy still experiences sign reversals or large magnitude changes at magnetic fields where there are no obvious signatures in κ_xx. This is clearly observed in Fig. 2 for B > B_a*3 and in Fig. 4 for B > B_a2. Thus, a radical change of the magnon band structure topology and the associated Berry curvature distribution may occur before the system enters the fully polarized state. In fact, recent theoretical studies have demonstrated that for in-plane magnetic fields the thermal Hall conductivity of Kitaev magnets arises from topological magnons with finite Chern numbers, and a peculiar sign structure follows from the symmetries of the momentum-space Berry curvature <cit.>. This scenario is compatible with our reasoning above and may be adopted to describe the planar thermal Hall effect of Na_2Co_2TeO_6, particularly the pronounced κ_xy(B) in the high-field region.
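To make this reasoning concrete, the linear-response expression commonly used for the magnon thermal Hall conductivity can be sketched as below. The magnon energies and Berry curvatures are assumed inputs from some model band structure, not quantities extracted in this work, and the constant conventions follow the widely used Matsumoto-Murakami-type formula.

import numpy as np
from scipy.special import spence

kB, hbar = 1.380649e-23, 1.054571817e-34          # SI units

def c2(x):
    # c2(x) = (1+x) ln^2((1+x)/x) - ln^2(x) - 2 Li2(-x); scipy: spence(z) = Li2(1-z)
    return (1 + x) * np.log((1 + x) / x)**2 - np.log(x)**2 - 2 * spence(1.0 + x)

def magnon_kappa_xy(eps, berry, T, volume):
    # eps, berry: (n_bands, n_k) arrays of magnon energies (J) and Berry
    # curvatures (m^2) on a uniform k-grid covering the Brillouin zone
    rho = 1.0 / np.expm1(eps / (kB * T))          # Bose occupation
    return -(kB**2 * T) / (hbar * volume) * np.sum(c2(rho) * berry)

In this form, a sign change of κ_xy upon sweeping the field simply reflects a redistribution of the Berry curvature between the thermally populated bands.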
In addition to the properties of the magnetic excitations and magnetic structures in fields, our results are quite indicative regarding the candidate models for Na_2Co_2TeO_6. The XXZ spin model that was recently proposed for the honeycomb cobaltate BaCo_2(AsO_4)_2 has a global U(1) symmetry. This implies that the heat transport for fields in the a and a* directions should behave identically. Our experiments on Na_2Co_2TeO_6, however, show quite distinct behaviors for these two directions. This strongly indicates that the global U(1) symmetry should be absent in Na_2Co_2TeO_6. We thus think more anisotropic spin interactions should be included for understanding the physics of Na_2Co_2TeO_6. These results put some constraints on the candidate models for Na_2Co_2TeO_6.
§ SUMMARY
In summary, we have studied both the in-plane thermal conductivity and the thermal Hall conductivity of Na_2Co_2TeO_6 in magnetic fields along the honeycomb layer. It is found that the zero-field thermal conductivity displays weak in-plane anisotropy, while magnetic fields along the a or a* axis affect κ_xx quite differently. The field-induced magnetic transitions induce anomalies in the low-temperature κ^a_xx(B) and κ^a*_xx(B) curves. Although most of these transitions have been probed by various experiments including thermal conductivity, the present κ^a_xx(B) data for B ∥ a reveal an unexplored magnetic transition at very low temperatures. Either a-axis or a*-axis magnetic fields can induce a planar thermal Hall effect, which is mainly related to the magnon Berry curvature rather than to the field-induced magnetic transitions. These results put clear constraints on the in-plane high-field phase of Na_2Co_2TeO_6 and indicate that anisotropic spin interactions must be included in theoretical models.
This work was supported by the National Natural Science Foundation of China (Grant Nos. 12274388, 12174361, 12104010, and 12104011), the Nature Science Foundation of Anhui Province (Grant Nos. 1908085MA09 and 2108085QA22), and the Research Grants Council of Hong Kong with C7012-21GF. The work at the University of Tennessee was supported by the NSF with Grant No. NSF-DMR-2003117.
Kitaev
A. Kitaev, Anyons in an exactly solved model and beyond, Ann. Phys. 321, 2 (2006).
Jackeli
G. Jackeli and G. Khaliullin, Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models, Phys. Rev. Lett. 102, 017205 (2009).
Plumb
K. W. Plumb, J. P. Clancy, L. J. Sandilands, V. Vijay Shankar, Y. F. Hu, K. S. Burch, H.-Y. Kee, and Y.-J. Kim, α-RuCl_3: A spin-orbit assisted Mott insulator on a honeycomb lattice, Phys. Rev. B 90, 041112(R) (2014).
NatureReviewPhysics
H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Concept and realization of Kitaev quantum spin liquids, Nat. Rev. Phys. 1, 264 (2019).
RuCl3-1
Y. Kasahara, T. Ohnishi, Y. Mizukami, O. Tanaka, S. Ma, K. Sugii, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, T. Shibauchi, and Y. Matsuda, Majorana quantization and half-integer thermal quantum Hall effect in a Kitaev spin liquid, Nature 559, 227 (2018).
RuCl3-2
T. Yokoi, S. Ma, Y. Kasahara, S. Kasahara, T. Shibauchi, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, C. Hickey, S. Trebst, and Y. Matsuda, Half-integer quantized anomalous thermal Hall effect in the Kitaev material candidate α-RuCl_3, Science 373, 568 (2021).
RuCl3-3
J. A. N. Bruin, R. R. Claus, Y. Matsumoto, N. Kurita, H. Tanaka, and H. Takagi, Robustness of the thermal Hall effect close to half-quantization in α-RuCl_3, Nat. Phys. 18, 401 (2022).
Liu1
H. Liu and G. Khaliullin, Pseudospin exchange interactions in d^7 cobalt compounds: Possible realization of the Kitaev model, Phys. Rev. B 97, 014407 (2018).
Sano
R. Sano, Y. Kato, and Y. Motome, Kitaev-Heisenberg Hamiltonian for high-spin d^7 Mott insulators, Phys. Rev. B 97, 014408 (2018).
Liu2
H. Liu, J. Chaloupka, and G. Khaliullin, Kitaev Spin Liquid in 3d Transition Metal Compounds, Phys. Rev. Lett. 125, 047201 (2020).
Zhong
R. Zhong, T. Gao, N. P. Ong, and R. J. Cava, Weak-field induced nonmagnetic state in a Co-based honeycomb, Sci. Adv. 6, eaay6953 (2020).
Halloran
T. Halloran, F. Desrochers, E. Z. Zhang, T. Chen, L. E. Chern, Z. Xu, B. Winn, M. Graves-Brook,
M. B. Stone, A. I. Kolesnikov, Y. Qiu, R. Zhong, R. Cava, Y. B. Kim, and C. Broholm, Geometrical frustration versus Kitaev interactions in BaCo_2(AsO_4)_2, PNAS 120, e2215509119 (2023).
Das
S. Das, S. Voleti, T. Saha-Dasgupta, and A. Paramekanti, XY magnetism, Kitaev exchange, and long-range frustration in the J_eff = 1/2 honeycomb cobaltates, Phys. Rev. B 104, 134425 (2021).
Songvilay
M. Songvilay, J. Robert, S. Petit, J. A. Rodriguez-Rivera, W. D. Ratcliff, F. Damay, V. Balédent, M. Jiménez-Ruiz, P. Lejay, E. Pachoud, A. Hadj-Azzem, V. Simonet, and C. Stock, Kitaev interactions in the Co honeycomb antiferromagnets Na_3Co_2SbO_6 and Na_2Co_2TeO_6, Phys. Rev. B 102, 224429 (2020).
Lin
G. Lin, J. Jeong, C. Kim, Y. Wang, Q. Huang, T. Masuda, S. Asai, S. Itoh, G. Günther, M. Russina, Z. Lu, J. Sheng, L. Wang, J. Wang, G. Wang, Q. Ren, C. Xi, W. Tong, L. Ling, Z. Liu, H. Zhou, X. Wang, J. G. Park, Y. Wan, and J. Ma, Field-Induced Quantum Spin Disordered State in Spin-1/2 Honeycomb Magnet Na_2Co_2TeO_6, Nat. Commun. 12, 5559 (2021).
Kim
C. Kim, J. Jeong, G. Lin, P. Park, T. Masuda, S. Asai, S. Itoh, H.-S. Kim, H. Zhou, J. Ma, and J.-G. Park, Antiferromagnetic Kitaev interaction in J_eff = 1/2 cobalt honeycomb materials Na_3Co_2SbO_6 and Na_2Co_2TeO_6, J. Phys.: Condens. Matt. 34, 045802 (2022).
Samarakoon
A. M. Samarakoon, Q. Chen, H.Zhou, and V. O. Garlea, Static and dynamic magnetic properties of honeycomb lattice antiferromagnets Na_2M_2TeO_6, M = Co and Ni, Phys. Rev. B 104, 184415 (2021).
Sanders
A. L. Sanders, R. A. Mole, J. Liu, A. J. Brown, D. Yu, C. D. Ling, and S. Rachel, Dominant Kitaev interactions in the honeycomb materials Na_3Co_2SbO_6 and Na_2Co_2TeO_6, Phys. Rev. B 106, 014413 (2022).
Yao1
W. Yao, K. Iida, K. Kamazawa, and Y. Li, Excitations in the Ordered and Paramagnetic States of Honeycomb Magnet Na_2Co_2TeO_6, Phys. Rev. Lett. 129, 147202 (2022).
Xiao
G. Xiao, Z. Xia, W. Zhang, X. Yue, S. Huang, X. Zhang, F. Yang, Y. Song, M. Wei, H. Deng, and D. Jiang, Crystal Growth and the Magnetic Properties of Na_2Co_2TeO_6 with Quasi-Two-Dimensional Honeycomb Lattice, Cryst. Growth Des. 19, 2658 (2019).
Li
N. Li, R. R. Neumann, S. K. Guang, Q. Huang, J. Liu, K. Xia, X. Y. Yue, Y. Sun, Y. Y. Wang, Q. J. Li, Y. Jiang, J. Fang, Z. Jiang, X. Zhao, A. Mook, J. Henk, I. Mertig, H. D. Zhou, and X. F. Sun, Magnon-Polaron Driven Thermal Hall Effect in a Heisenberg-Kitaev Antiferromagnet, arXiv:2201.11396 (2022).
Lefrancois
E. Lefrançois, M. Songvilay, J. Robert, G. Nataf, E. Jordan, L. Chaix, C. V. Colin, P. Lejay, A. Hadj-Azzem, R. Ballou, and V. Simonet, Magnetic properties of the honeycomb oxide Na_2Co_2TeO_6, Phys. Rev. B 94, 214416 (2016).
Bera
A. K. Bera, S. M. Yusuf, A. Kumar, and C. Ritter, Zigzag antiferromagnetic ground state with anisotropic correlation lengths in the quasi-two-dimensional honeycomb lattice compound Na_2Co_2TeO_6, Phys. Rev. B 95, 094424 (2017).
Chen
W. Chen, X. Li, Z. Hu, Z. Hu, L. Yue, R. Sutarto, F. He, K. Iida, K. Kamazawa, W. Yu, X. Lin, and Y. Li, Spin-orbit phase behavior of Na_2Co_2TeO_6 at low temperatures, Phys. Rev. B 103, L180404 (2021).
Lee
C. H. Lee, S. Lee, Y. S. Choi, Z. H. Jang, R. Kalaivanan, R. Sankar, and K.-Y. Choi, Multistage development of anisotropic magnetic correlations in the Co-based honeycomb lattice Na_2Co_2TeO_6, Phys. Rev. B 103, 214447 (2021).
Hong
X. Hong, M. Gillig, R. Hentrich, W. Yao, V. Kocsis, A. R. Witte, T. Schreiner, D. Baumann, N. Pérez, A. U. B. Wolter, Y. Li, B. Büchner, and C. Hess, Strongly scattered phonon heat transport of the candidate Kitaev material Na_2Co_2TeO_6, Phys. Rev. B 104, 144426 (2021).
Yao2
W. Yao and Y. Li, Ferrimagnetism and anisotropic phase tunability by magnetic fields in Na_2Co_2TeO_6, Phys. Rev. B 101, 085120 (2020).
Lin2
G. Lin, Q. Zhao, G. Li, M. Shu, Y. Ma, J. Jiao, Q. Huang, J. Sheng, A. Kolesnikov, L. Li, L. Wu, X. Wang, H. Zhou, Z. Liu, and J. Ma, Evidence for field induced quantum spin liquid behavior in a spin-1/2 honeycomb magnet, https://doi.org/10.21203/rs.3.rs-2034295/v1 (2022).
Guang
S. K. Guang, N. Li, R. L. Luo, Q. Huang, Y. Y. Wang, X. Y. Yue, K. Xia, Q. J. Li, X. Zhao, G. Chen, H. D. Zhou, and X. F. Sun, Thermal transport of fractionalized antiferromagnetic and field-induced states in the Kitaev material Na_2Co_2TeO_6, Phys. Rev. B 107, 184423 (2023).
Takeda
H. Takeda, J. Mai, M. Akazawa, K. Tamura, J. Yan, K. Moovendaran, K. Raju, R. Sankar, K.-Y. Choi, and M. Yamashita, Planar thermal Hall effects in the Kitaev spin liquid candidate, Na_2Co_2TeO_6, Phys. Rev. Res. 4, L042035 (2022).
Neumann
R. R. Neumann, A. Mook, J. Henk, and I. Mertig, Thermal Hall Effect of Magnons in Collinear Antiferromagnetic Insulators: Signatures of Magnetic and Topological Phase Transitions, Phys. Rev. Lett. 128, 117201 (2022).
Chen
X.-T. Zhang, Y. H. Gao, and G. Chen, Thermal Hall effects in quantum magnets, arXiv:2305.04830 (2023).
Chern
L. E. Chern, E. Z. Zhang, and Y. B. Kim, Sign Structure of Thermal Hall Conductivity and Topological Magnons for In-Plane Field Polarized Kitaev Magnets, Phys. Rev. Lett. 126, 147201 (2021).
Zhang
E. Z. Zhang, L. E. Chern, and Y. B. Kim, Topological magnons for thermal Hall transport in frustrated magnets with bond-dependent interactions, Phys. Rev. B 103, 174402 (2021).
|
http://arxiv.org/abs/2307.05361v1 | 20230708230112 | A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics | [
"Yue Shi",
"Shuhao Ma",
"Yihui Zhao",
"Zhiqiang Zhang"
] | eess.SP | [
"eess.SP",
"cs.AI",
"cs.LG",
"cs.RO"
] |
A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics
====================================================================
Muscle force and joint kinematics estimation from surface electromyography (sEMG) are essential for real-time biomechanical analysis of the dynamic interplay among neural muscle stimulation, muscle dynamics, and kinetics. Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner. However, the small-sample nature and the demand for physical interpretability of biomechanical analysis limit the application of DNNs.
This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics. This method seamlessly integrates Lagrange's equation of motion and an inverse dynamic muscle model into the generative adversarial network (GAN) framework for structured feature decoding and extrapolated estimation from small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to constrain the structured decoding of the high-level features so that it follows the laws of physics, and a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding consistent physical representations of the extrapolated estimations and the physical references.
Experimental validations are conducted on two scenarios (i.e., walking trials and wrist motion trials). Results indicate that the estimations of the muscle forces and joint kinematics are unbiased compared to the physics-based inverse dynamics, outperforming the selected benchmark methods, including the physics-informed convolutional neural network (PI-CNN), the vanilla generative adversarial network (GAN), and the multi-layer extreme learning machine (ML-ELM).
§ INTRODUCTION
Human movements involve complex interactions within the neuromuscular system. The surface electromyography (sEMG)-driven estimation of muscle forces and joint kinematics provides detailed biomechanical analyses for understanding the neuromuscular system <cit.>, benefiting various applications such as sports rehabilitation treatments <cit.>, <cit.> and the optimization of robotic designs for individuals with impairments <cit.>. Although physics-based models explicitly explain and map sEMG signals to joint kinematics, the high cost of their static optimization has always limited their practical application <cit.>.
Recently, deep neural networks (DNNs) have provided an alternative solution for mapping sEMG signals to joint kinetics and kinematics <cit.>. In this kind of model, multi-layer convolution architectures have been explored to establish relationships between movement variables and neuromuscular status <cit.>. For example, Nasr et al. <cit.> mapped sEMG signals to the regression of joint angle, joint velocity, joint acceleration, joint torque, and activation torque, illustrating that multi-layer convolution operators are capable of extracting underlying motor control information. Zhang et al. <cit.> developed an active deep convolutional neural network to enhance the dynamic tracking capability of the musculoskeletal model on unseen data.
Despite these advantages, traditional DNNs are data-hungry and their performance depends highly on the quantity and quality of data <cit.>. Meanwhile, biomechanical analysis is typically a physics-based extrapolation process of a small-sample nature <cit.>. Therefore, it is a challenge to train DNNs with small sample data so that the DNNs perform consistently with the physics-based model. To fill this research gap, the low-shot learning (LSL) technique has attracted many researchers' attention <cit.>. For example, Rahimian et al. <cit.> introduced a Few-Shot Learning Hand Gesture Recognition (FS-HGR) model to enhance the generalization capability of DNNs from a limited number of instances. Lehmler et al. <cit.> explored a low-shot learning methodology that adjusts DNNs to new users with only a small amount of training data.
In addition, the generative adversarial network (GAN) framework has shown great potential in handling physical extrapolation and prediction problems <cit.>. A GAN-based model is capable of discovering the structured patterns of the references and extrapolating the underlying data distribution characteristics during adversarial learning <cit.>. For example, Chen et al. <cit.> tested and evaluated the performance of the deep convolutional generative adversarial network (DCGAN) on sEMG-based data enhancement, and their results indicated that the extrapolated data are able to augment the diversity of the original data. Fahimi et al. <cit.> proposed a generative adversarial learning framework for generating artificial electroencephalogram (EEG) data to extrapolate the brain-computer interface, and their findings suggest that generated EEG augmentation can significantly improve brain-computer interface performance.
In this study, we propose a physics-informed low-shot learning method for muscle force and joint kinematics estimation from multi-channel sEMG signals. This method seamlessly integrates physics knowledge with the GAN framework for structured feature decoding and extrapolated estimation from small sample data. Specifically, Lagrange's equation of motion is introduced into the generative model to constrain the structured decoding of the high-level features so that it follows the laws of physics, and a physics-informed policy gradient is designed to improve the adversarial learning efficiency by rewarding consistent physical representations of the extrapolated estimations and the physical references. Results show that the muscle forces and joint kinematics estimated by the proposed method are unbiased compared to the physics-based inverse dynamics.
The remainder of this paper is organized as follows: Section <ref> describes in detail the algorithm of the proposed physics-informed policy gradient for reinforcement generative adversarial learning, including the mathematical framework and the network architectures. Section <ref> presents the materials and experimental methods. Section <ref> discusses the experimental results and model evaluations, and Section <ref> presents the conclusions.
§ PHYSICS-INFORMED LOW-SHOT LEARNING METHOD
The continuous estimation of muscle forces (F) and joint kinematics (θ) from multi-channel sEMG can be cast as a time-series generation problem. Thus, given a real multi-channel sEMG time series, we train a σ-parameterized generative network G_σ to estimate the muscle force (F̂) and joint kinematics (θ̂). In this section, we propose a GAN framework, as shown in Fig. <ref>, to train G_σ on small sample data.
Specifically, we denote the F̂ and θ̂ estimated by G_σ as the negative samples (see details in Section <ref>), and the ground-truth joint kinematics (θ) together with the inverse-dynamics-based muscle forces (F) <cit.> as the positive samples (i.e., references). The ϕ-parameterized discriminative model D_ϕ is introduced to distinguish the positive samples from the negative samples (see details in Section <ref>). During adversarial learning, the task of D_ϕ is to determine whether an input sample is positive or negative, and the task of G_σ is to generate unbiased negative samples that fool the discriminator D_ϕ. The model optimization process is driven by the newly proposed physics-informed policy gradient (see details in Section <ref>), which rewards the homogeneity of the physics representations and structural characteristics between the positive and negative samples.
§.§ GAN optimization via physics-informed policy gradient
The physics-informed policy gradient method, inspired by reinforcement learning <cit.>, optimizes the learning process of the GAN-based model so that it yields physical extrapolations from small sample data (i.e., low-shot learning). Mathematically, the physics-informed policy gradient maximizes the expected reward J(σ) based on the physics law and the structured characteristics of the small sample data. J(σ) consists of two parts, the structural reward R_G_σ and the physics-representation action value Q_D_ϕ^G_σ, and is defined as follows.
J(σ) = 𝔼[ R_G_σ(G_σ(sEMG_0:T)) ] · Q_D_ϕ^G_σ( G_σ(sEMG_0:T), [F, θ]_0:T )
= 𝔼[ R_G_σ([F̂, θ̂]_0:T) ] · Q_D_ϕ^G_σ( [F̂, θ̂]_0:T, [F, θ]_0:T )
where sEMG_0:T is the input multi-channel sEMG time series over T time steps. J(σ) begins with the expected reward from a predetermined state given by the positive samples. The R_G_σ and Q_D_ϕ^G_σ then jointly optimize the generative network G_σ to generate unbiased ([F̂, θ̂]_0:T) following the laws of physics.
Specifically, the structural reward R_G_σ is computed through G_σ and defined as follows.
R_G_σ([F̂, θ̂]_0:T) = exp( -PL^2([F̂, θ̂]_0:T) )
where PL([F̂, θ̂]_0:T) is the physics-law residual used to restrict the hierarchical structure of the generated data, which provides additional information to regularize the learning process on small sample data. In this case, we use Lagrange's equation of motion <cit.> as the physics law, defined as follows.
PL([F̂, θ̂]_0:T) = 1/T ∑_t=1^T ( m(θ̂_t) θ̈̂_t + c(θ̂_t, θ̇̂_t) + g(θ̂_t) - ∑_n=1^N F̂^n_t )^2
where T is the number of time steps, N is the number of muscle channels of F̂, and m(θ̂_t), c(θ̂_t, θ̇̂_t), and g(θ̂_t) denote the mass matrix, the centrifugal and Coriolis forces, and the gravity, respectively <cit.>. In this manner, G_σ generates structured outputs of (F̂, θ̂).
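A minimal PyTorch-style sketch of this residual is given below, assuming a single-degree-of-freedom joint with user-supplied functions m_fn, c_fn, and g_fn (illustrative names, not from the original implementation), and finite-difference derivatives of the generated joint angle:

import torch

def lagrange_residual(theta, F, m_fn, c_fn, g_fn, dt):
    # PL of the equation above: mean squared residual of Lagrange's equation
    # of motion; theta: (T,) generated joint angle, F: (T, N) muscle forces
    dtheta = torch.gradient(theta, spacing=dt)[0]     # finite-difference velocity
    ddtheta = torch.gradient(dtheta, spacing=dt)[0]   # acceleration
    residual = (m_fn(theta) * ddtheta + c_fn(theta, dtheta)
                + g_fn(theta) - F.sum(dim=1))
    return (residual ** 2).mean()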
The Q_D_ϕ^G_σ is computed through D_ϕ and interprets the physics-constraint action values as the probability, estimated by D_ϕ, of being physically real. These physics-constraint action values lead to the improvement of the GAN model in physical extrapolation from the small training data. The Q_D_ϕ^G_σ can be formulated as:
Q_D_ϕ^G_σ( G_σ(sEMG_0:T), [F, θ]_0:T ) =
𝔼_[F̂, θ̂]_0:T ∼ [F, θ]_0:T [ log D_ϕ([F̂, θ̂]_0:T) ] +
𝔼_[F̂, θ̂]_0:T ∼ G_σ(sEMG_0:T) [ log (1 - D_ϕ([F̂, θ̂]_0:T)) ]
For each epoch, once the new R_G_σ and Q_D_ϕ^G_σ have been obtained, the policy model G_σ is updated following the gradient of the reward function as follows.
∇_σ J(σ) = 𝔼_[F̂, θ̂]_0:T∼ G_σ(sEMG_0:T)∑∇_σ R_G_σ([F̂, θ̂]_0:T|[F, θ]_0:T)
· Q^G_σ_D_ϕ ([F̂, θ̂]_0:T, [F, θ]_0:T)
Using likelihood ratios, the unbiased estimator of Eq. <ref> for one epoch can be written as follows.
∇_σJ(σ) ≃1/T∑_t=1^T∑_y_t ∈ [F̂, θ̂]_t∇_σ R_G_σ(y_t|[F, θ]_t) · Q^G_σ_D_ϕ (y_t, [F, θ]_t)
=1/T∑_t=1^T ∑_y_t ∈ [F̂,θ̂]_t G_σ(y_t|[F, θ]_t) ∇_σlog G_σ(y_t|[F, θ]_t)
· Q^G_σ_D_ϕ(y_t, [F, θ]_t)
The parameters of the policy model G_σ can be updated as follows.
σ←σ + α∇_σ J(σ)
where α∈ℝ is the learning rate.
To summarize, Algorithm 1 provides an in-depth look at the proposed GAN optimization via the physics-informed policy gradient. Initially, G_σ is pre-trained on the training set sEMG = {X_1:T} using maximum likelihood estimation (MLE). Then, G_σ and D_ϕ undergo adversarial learning. As G_σ improves, D_ϕ is routinely retrained to stay synchronized with the improving generator. To ensure balance, at each training step we generate a number of negative samples equal to the number of positive samples.
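One adversarial step of this loop can be sketched as follows. This is an illustrative simplification under stated assumptions, not the exact implementation: the generator is assumed to expose a stochastic sample() method returning a trajectory together with its log-probability, and pl_fn denotes the Lagrangian residual sketched above.

import torch

def train_step(G, D, semg, ref, opt_G, opt_D, pl_fn):
    # discriminator update: positives from inverse dynamics vs. negatives from G
    fake, _ = G.sample(semg)
    loss_D = -(torch.log(D(ref)) + torch.log(1.0 - D(fake.detach()))).mean()
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # generator update: REINFORCE-style ascent on E[R_G * Q_D^G]
    fake, log_prob = G.sample(semg)
    with torch.no_grad():
        reward = torch.exp(-pl_fn(fake)) * D(fake)    # structural reward x action value
    loss_G = -(log_prob * reward.squeeze()).mean()
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()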
§.§ The generative network
The proposed physics-informed low-shot learning method does not depend on a specific generative network architecture. In this study, considering the long-term temporal dependence of the F and θ sequences on the input multi-channel sEMG sequence, we employ Long Short-Term Memory (LSTM) cells in our generative model <cit.>. The architecture of the generator network G is shown in Fig. <ref>. It serves three functions: multi-channel sEMG feature extraction, residual learning with LSTM, and musculoskeletal token sequence generation.
Firstly, for the multi-channel sEMG feature extraction, a 1-dimensional (1D) convolution filter with a 2 × 1 kernel is introduced to capture the multiple sEMG features at time step t. The extracted convolution features represent the hierarchical structures of the multi-channel sEMG. In this study, the convolution kernel is set to 1 × b for a b-channel sEMG input. Considering that a batch normalization (BN) layer would normalize the features and remove the range flexibility for upscaling features <cit.>, no BN layer is used here, to avoid blurring the sEMG responses hidden in the extracted features. The max-pooling layer combines the extracted sEMG features into a single neuron by taking the maximum value in each convolution window. The max-pooling operation reduces the number of parameters and the computation cost and has the effect of mitigating over-fitting.
Secondly, LSTM blocks are employed for residual learning of the time-series characteristics of the target musculoskeletal tokens. The LSTM layer is well suited for time-series sequence generation, as it addresses the exploding and vanishing gradient issues <cit.>. An LSTM block consists of a memory cell, an input gate, an output gate, and a forget gate; detailed definitions of these components are given in <cit.>. Specifically, at time step t, the memory cell remembers structured feature values over the previous t-1 intervals, and the three gates regulate the flow of information into and out of the memory cell, which preserves long-term temporal structure characteristics by consolidating previous temporal correlations as memory units. Meanwhile, the high-level sEMG features extracted by the convolution layer represent the current multi-channel sEMG responses to muscle force and joint kinematics. The skip connection between the memory cell and the high-level sEMG features thus represents not only the extracted local kinetic invariances but also the temporal dynamics of the motions.
It is noteworthy that the traditional LSTM layer only evaluates the fitness between the current time step and the previous time steps. However, we expect the model to also account for the resulting future outputs. In order to compute the action value for future physical fitness, a Monte Carlo (MC) search with a roll-out strategy is used to sample the unknown last T-t time steps, and the N-time Monte Carlo search can be formulated as:
{ (F_0:T, θ_0:T)^1, ..., (F_0:T, θ_0:T)^N } = MC(F_0:t, θ_0:t)
Finally, fully connected layers are used to generate the musculoskeletal token sequence over a motion period. The output of the LSTM unit is flattened to a feature vector and scaled to the muscle force F and joint kinematics θ.
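A schematic of this generator (convolutional feature extraction, LSTM residual learning, fully connected head) is sketched below; the layer widths and output dimension are illustrative assumptions rather than the exact configuration of Fig. <ref>:

import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, n_channels, hidden=128, n_out=3):
        super().__init__()
        # 1D convolution + max-pooling feature extractor; no BN layer, as discussed above
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=2, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out)      # scaled to [F_hat, theta_hat]

    def forward(self, semg):                      # semg: (batch, channels, T)
        h = self.conv(semg).transpose(1, 2)       # (batch, T', 32)
        out, _ = self.lstm(h)                     # temporal learning over conv features
        return self.head(out)                     # (batch, T', n_out)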
§.§ The discriminative model
In this study, a ϕ-parameterized discriminator network D_ϕ is built to guide the iterations of G_σ from the small sample data. D_ϕ outputs a probability indicating the heterogeneity between [F̂, θ̂] and [F, θ]. For this purpose, we employ a convolutional neural network (CNN) <cit.> as the discriminative model because of its successful applications in sequence classification. In this study, we concentrate on the situation where the discriminator estimates the likelihood that a completed [F̂, θ̂] time series follows the physics-law model (i.e., ID).
We first represent an input muscle force and joint kinematics time series x_1, ..., x_T as
E_1:T = [F̂, θ̂]_1 ⊕ [F̂, θ̂]_2 ⊕ ... ⊕ [F̂, θ̂]_T
where x_t = [F̂, θ̂]_t ∈ ℝ^b is the muscle force and joint kinematics at time step t and ⊕ is the concatenation operator building the matrix E_1:T ∈ ℝ^T×b.
where, x_t ∈ℝ^b is the muscle force and joint kinematics in time-step t and ⊕ is the concatenation operator to build the matrix E_1:T∈ℝ^T. Then the convolution operator is used to produce a new feature map:
c_i = ρ(w ⊙ E_i:i+l-1 + b)
where ⊙ is the element-wise product, b is a bias term, and ρ is a non-linear function. In this study, the discriminator, as shown in Fig. <ref>, employs various numbers of kernels with different window sizes to extract different features from the input musculoskeletal sequence, and max-pooling is applied over the feature maps to reduce the number of parameters and the computation cost. In order to enhance the discrimination performance, a highway operator <cit.> over the pooled feature maps is also employed in our discriminative model. Finally, a fully connected layer with softmax activation outputs the estimated likelihood that the input sequence conforms to the physics laws.
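A sketch of such a discriminator is given below. The kernel widths and counts are illustrative assumptions; for the binary real/fake decision a sigmoid output is used here, which is equivalent to a two-class softmax.

import torch
import torch.nn as nn

class Highway(nn.Module):
    # y = t * relu(W_h x) + (1 - t) * x, with transform gate t = sigmoid(W_t x)
    def __init__(self, dim):
        super().__init__()
        self.h = nn.Linear(dim, dim)
        self.t = nn.Linear(dim, dim)

    def forward(self, x):
        t = torch.sigmoid(self.t(x))
        return t * torch.relu(self.h(x)) + (1 - t) * x

class Discriminator(nn.Module):
    def __init__(self, n_feat, widths=(2, 3, 4), n_kernels=32):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv1d(n_feat, n_kernels, w) for w in widths)
        self.highway = Highway(n_kernels * len(widths))
        self.out = nn.Linear(n_kernels * len(widths), 1)

    def forward(self, x):                          # x: (batch, n_feat, T)
        pooled = [c(x).relu().max(dim=2).values for c in self.convs]  # max over time
        z = self.highway(torch.cat(pooled, dim=1))
        return torch.sigmoid(self.out(z))          # prob. the sequence is physics-real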
§ MATERIAL AND EXPERIMENTAL METHODS
In this study, we test our proposed method on two joint motion scenarios. The first one is the knee joint modeling from an open-access dataset of walking trials, and the second one is the wrist joint modeling from the self-collected dataset of wrist motions.
§.§ Open-access dataset of walking trials
The open-access dataset of walking trials is obtained from a real-world experiment reported in <cit.>. This dataset involves six healthy participants with an average age of 12.9 ± 3.2 years and an average weight of 51.8 ± 19.1 kg. Participants were instructed to walk at four distinct speeds: very slow (0.53 ± 0.1 m/s), slow (0.75 ± 0.1 m/s), free (1.15 ± 0.08 m/s), and fast (1.56 ± 0.21 m/s). The sEMG signals were captured from the biceps femoris short head (BFS) and the rectus femoris (RF), as they are the primary flexor and extensor of the knee joint. In this study, we normalize each gait cycle into 100 frames for model training and testing, and keep the original data for model extrapolation evaluation. In the model training and testing session, each walking-trial sample is formatted into a source matrix that includes the time step, gait motion data, and enveloped sEMG signals. All samples from different participants are combined to create a comprehensive dataset for model training and testing.
§.§ Self-collected dataset of wrist motions
Our wrist motion experiment, approved by the MaPS and Engineering Joint Faculty Research Ethics Committee of the University of Leeds (MEEC 18-002), involved six participants with signed consent. Participants were instructed to keep their torso straight with the shoulder abducted at 90 degrees and the elbow joint flexed at 90 degrees. A VICON motion capture system was used to record continuous wrist flexion/extension motion. Joint motions were calculated using an upper limb model with 16 reflective markers at a 250 Hz sampling rate. Concurrently, sEMG signals were captured from the primary wrist muscles (n = 1, 2, ..., 5), including the flexor carpi radialis (FCR), the flexor carpi ulnaris (FCU), the extensor carpi radialis longus (ECRL), the extensor carpi radialis brevis (ECRB), and the extensor carpi ulnaris (ECU), using Avanti Sensors (sampling rate: 2000 Hz). Electrodes were placed by palpation and their placement was validated by observing the signal during contraction before the experiment. The sEMG signals and motion data were synchronized and resampled at 1000 Hz. Each participant performed five repetitive trials with a three-minute break between trials to prevent muscle fatigue.
The recorded sEMG signals are pre-processed by a 20–450 Hz band-pass filter, full rectification, and a 6 Hz low-pass filter. These signals are then normalized by the maximum voluntary contraction recorded prior to the experiment, yielding the enveloped sEMG signals. We normalize each motion cycle into 156 frames for model training and testing, and keep the original data for model extrapolation evaluation. A total of 360 motion samples are combined to create a comprehensive dataset for model training and testing, and 6 motion samples are used for model evaluation.
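This pipeline can be reproduced with standard tools; a minimal sketch is shown below (the fourth-order zero-phase Butterworth filters are an assumption, since the text does not specify filter order or type):

import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(raw, fs=2000.0, mvc=1.0):
    # band-pass 20-450 Hz, full-wave rectify, low-pass 6 Hz, MVC-normalize
    b, a = butter(4, [20.0, 450.0], btype="band", fs=fs)
    x = filtfilt(b, a, raw)
    x = np.abs(x)                                  # full rectification
    b, a = butter(4, 6.0, btype="low", fs=fs)
    x = filtfilt(b, a, x)                          # 6 Hz envelope
    return x / mvc                                 # maximum voluntary contraction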
§.§ Benchmark models and parameter settings
To evaluate the performance and effectiveness of the proposed physics-informed policy gradient for low-shot generative adversarial learning, three representative benchmark methods are employed: the physics-informed convolutional neural network (PI-CNN) <cit.>, which represents the state-of-the-art deep-learning-based musculoskeletal modeling method; the ML-ELM <cit.>, which represents the general musculoskeletal modeling method; and the vanilla GAN, which represents the traditional GAN family without physics laws <cit.>.
§.§ Evaluation metrics
The evaluation metrics include 1) metrics for the quality of the generated samples: the information-entropy-associated peak signal-to-noise ratio (PSNR) <cit.>, the coefficient of determination (R^2) <cit.>, the root mean square error (RMSE) <cit.>, and Spearman's rank correlation coefficient (SRCC) <cit.>; and 2) metrics for the mode collapse of GANs: the inception score (IS) <cit.> and the Frechet inception distance (FID) <cit.>.
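The sample-quality metrics can be computed as follows (a sketch; taking the ground-truth signal range as the PSNR peak value is one common convention and an assumption here):

import numpy as np
from scipy.stats import spearmanr

def sample_quality_metrics(y_true, y_pred):
    mse = np.mean((y_true - y_pred) ** 2)
    peak = np.max(y_true) - np.min(y_true)
    psnr = 10.0 * np.log10(peak**2 / mse)
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    srcc = spearmanr(y_true, y_pred).correlation
    return {"PSNR": psnr, "R2": r2, "RMSE": np.sqrt(mse), "SRCC": srcc}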
§ RESULTS AND DISCUSSION
In this section, we evaluate the performance of the proposed physics-informed low-shot learning in the knee joint and wrist joint scenarios. We first carry out overall comparisons of the results from the proposed and benchmark methods. We then evaluate the model performance on small training data and in handling mode collapse. Lastly, we investigate the robustness and generalization performance of the proposed method in intersession scenarios. The training of the proposed framework and benchmark methods was conducted using PyTorch on a workstation equipped with NVIDIA Quadro K4200 graphics cards and 256 GB RAM.
§.§ Overall evaluation of the muscle force dynamics modeling
In this section, we first carry out overall comparisons between the proposed and benchmark methods on the test dataset. Fig. <ref> demonstrates the overall results of the joint kinematics generation over one motion cycle from the proposed and benchmark methods for both the knee joint (the first row of Fig. <ref>) and wrist joint cases (the second row of Fig. <ref>). The average joint kinematics and standard deviation distribution from the proposed method align well with the ground truth in both the knee joint and wrist joint cases. These findings indicate that the proposed model achieves the best performance among the benchmark models for unbiased estimation of the joint kinematics.
Similarly, Fig. <ref> and Fig. <ref> demonstrate the overall results of the muscle force estimations over one motion cycle for the knee joint (i.e., RF and BFS) and wrist joint (i.e., FCR, FCU, ECRL, ECRB, and ECU) cases, respectively. The average muscle forces estimated by the proposed method align well with the inverse dynamics, demonstrating the excellent multi-muscle tracking capability of the proposed model. In addition, the standard deviation distribution of the muscle forces generated by the proposed model is consistent with that of the inverse-dynamics references. These results indicate that the proposed model achieves the best performance among the benchmark models for unbiased estimation of the muscle force from the multi-channel sEMG signals.
To further assess the extrapolation performance quantitatively, we present detailed comparisons of the proposed and benchmark models on both the test data and the evaluation data. Table <ref> and Table <ref> show the results for the knee joint case and the wrist joint case, respectively. The results indicate that the proposed model performs best on both the testing and evaluation data. Specifically, for model testing, the PSNR, R^2, RMSE, and SRCC of the proposed model are 15.57%, 6.22%, 28.08%, and 7.2% better than those of the second-best model (i.e., PI-CNN). For model evaluation, the PSNR, R^2, RMSE, and SRCC of the proposed model are 24.72%, 16.29%, 38.99%, and 17.66% better than those of the second-best model (i.e., GAN). In addition, because the evaluation data involve the original sEMG recordings, the comparison of the testing results and evaluation results reflects the model extrapolation from experimental scenarios to real scenarios. The proposed model shows the best extrapolated estimation of muscle force and joint kinematics among the benchmark models, and its results on the testing data and evaluation data are consistent. In contrast, the performance of the benchmark models declines seriously on the evaluation data.
§.§ Evaluation of low-shot learning
The proposed physics-informed policy gradient incorporates the temporal relationship of the muscle force and joint kinematics dynamics from the Lagrange equation of motion, resulting in improved kinetics estimation from low-shot samples. Initially, the physical information is used to constrain the model reward accumulated along the periodic multi-channel sEMG signals. Then, the accumulated reward guides the Monte Carlo search to generate unbiased estimations of the muscle force and joint kinematics dynamics. To quantitatively assess the effectiveness of the proposed method for low-shot learning, we first regard the modeling results shown in Table <ref> and Table <ref> as the baselines, representing the optimal performance of the proposed and benchmark models, and then train the models with different training sample sizes for 1500 epochs as low-shot learning. The percentages of the low-shot learning results relative to the baseline joint kinematics modeling results, denoted as P-PSNR, P-R^2, P-RMSE, and P-SRCC, are used as the evaluation metrics to describe what percentage of the baseline performance the new models can achieve.
The evaluation of the low-shot learning of the proposed and benchmark models on the knee joint and wrist joint kinematics modeling is shown in Table <ref>. The proposed model with the physics-informed policy gradient clearly outperforms all of the benchmark models in low-shot learning: 10-shot learning already achieves over 80% of the baseline performance in terms of PSNR, R^2, RMSE, and SRCC, whereas the PINN and GAN models require at least 80-shot learning to achieve a similar modeling performance. It can therefore be inferred that the proposed physics-informed policy gradient relies on the physical representations and temporal structural characteristics of the training data rather than on the quantity of the data. This is encouraging, as it suggests that the proposed method alleviates the general issue of limited sample size in deep learning applications for biomechanical engineering.
§.§ Mode collapse evaluation
Mathematically, generative models are prone to biased estimation caused by mode collapse, in which the generated samples fall within only the part of the real distribution that can fool the discriminative model, ignoring the other modes of the real distribution during adversarial learning. To handle this issue, the proposed physics-informed policy gradient alleviates the effect of random noise and makes the generated feature sequence governed by the physics law, which facilitates the estimation of compound kinematics patterns and achieves unbiased kinematics generation.
In order to evaluate the performance of the proposed method in alleviating mode collapse, we test and compare the proposed model with the benchmark models from three aspects: 1) a quantitative evaluation of the diversity of the generated motions, based on the distance-derived IS and FID metrics; 2) a monotonicity assessment of the generator iterations during network training; and 3) a visualization of the distributions of the real and generated motion samples.
Firstly, the quantitative evaluation of the diversity of the generated motions is conducted on the testing dataset. A higher IS and a lower FID indicate better diversity of the generated motions, which in turn indicates the alleviation of mode collapse.
The results in Table <ref> show that the proposed model outperforms the competitors in terms of the IS and FID measurements for both the knee joint and wrist joint motion generation. In particular, compared with the benchmark GAN model, whose network architecture is the same as the proposed model's, the proposed model is 19.11% higher in IS and 14.23% lower in FID. These findings suggest that the proposed physics-informed policy gradient optimization approach performs well in alleviating mode collapse during adversarial learning.
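For clarity, the FID measurement used above can be sketched as follows; whether the features come from an Inception network or a task-specific encoder is left open here, so the inputs are assumed to be precomputed embedding vectors.
[language=Python]
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two sets of feature vectors."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):        # round-off can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(c1 + c2 - 2.0 * covmean))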
Secondly, in order to further explore the performance of the proposed physics-informed policy gradient on the mode collapse issue, we compare the generator iterations of the same GAN architecture with and without the physics-informed policy gradient (Fig. <ref>). As the iteration number increases, the IS and FID curves from the GAN with the proposed physics-informed policy gradient are more monotonic than those from the GAN without it: the IS curves steadily increase and the FID curves steadily decrease for both the knee joint (<ref>a and b) and wrist joint (<ref>c and d) cases.
§.§ Model application on intra-session scenario
In musculoskeletal modeling, the intra-session scenario refers to multiple sets of motions performed within the same session. To test the robustness of the proposed model in this scenario, we use knee joint data with different walking speeds from one subject as the intra-session evaluation dataset. The muscle force and joint kinematics modeling results, shown in Fig. <ref>, indicate that the proposed framework performs best among the baseline methods. Importantly, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across different walking speeds. In comparison, the medians and quartiles of the baseline methods, such as the GAN model without the physics-informed policy gradient, show significant inconsistencies with the real data, indicating declined performance in the intra-session scenario due to the variability in walking speeds. These findings suggest that the model optimized by the proposed physics-informed policy gradient is robust in intra-session scenarios.
§.§ Model application on inter-session scenario
The inter-session scenario generally refers to a situation where motion data are collected across multiple sessions. To test the robustness of the proposed model in this scenario, we use wrist joint data from different subjects as the evaluation dataset. The muscle force and joint kinematics modeling results, shown in Fig. <ref>, indicate that the proposed framework performs best among the baseline methods. Specifically, the median and interquartile values of the proposed model with the physics-informed policy gradient remain consistent with the real data across different subjects. In comparison, the baseline methods, such as the GAN model without the physics-informed policy gradient, show declined performance in the inter-session scenario due to inter-subject variability. These findings suggest that the model optimized by the proposed physics-informed policy gradient is robust in inter-session scenarios.
§ CONCLUSION
This paper develops a physics-informed low-shot learning method, which seamlessly integrates the Lagrange equation of motion and an inverse dynamics muscle model into the adversarial learning process, to train the generative network for unbiased estimation of muscle force and joint kinematics from small-sample sEMG time series. Specifically, the Lagrange equation of motion is introduced as a physical constraint, which helps the generator estimate muscle force and joint kinematics with more temporal structural representation. Meanwhile, the physics-informed policy gradient rewards the physical consistency between the generated muscle force and joint kinematics and the inverse dynamics-based references, which improves the extrapolation performance of the generative network. Comprehensive experiments on knee and wrist joints indicate the feasibility of the proposed method. The findings suggest that the proposed method handles the mode collapse issue well on small sample data, and that the estimates of muscle force and joint kinematics are unbiased compared to the physics-based inverse dynamics. These findings also suggest that the proposed method may reduce the gap between laboratory prototypes and clinical applications. It is worth noting, however, that the physics reference (i.e., the inverse dynamics in this study) plays an important role in constraining the physical representation of the generated samples; the choice of physics module may therefore vary when the proposed approach is extended to other application cases.
Going forward, we plan to delve deeper into the properties of the physics-informed deep learning framework in the context of sEMG-based musculoskeletal modeling. We aim to investigate the potential of the low-shot learning-based model on the continuous and simultaneous estimation of multiple joint kinematic chains from sEMG signals. We also plan to adjust the compositions of the proposed method to cater to different application scenarios. Furthermore, we intend to evaluate the reliability and accuracy of the proposed framework through more complex movements.
|
http://arxiv.org/abs/2307.04903v1 | 20230710210822 | Negative electrohydrostatic pressure between superconducting bodies | [
"Thomas J. Maldonado",
"Dung N. Pham",
"Alessio Amaolo",
"Alejandro W. Rodriguez",
"Hakan E. Türeci"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"quant-ph"
] |
[email protected]
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Chemistry, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA
Despite being largely limited to bulk phenomena, well-known
theoretical models of superconductivity like the
Bardeen–Cooper–Schrieffer and Ginzburg–Landau theories
have played a key role in the development of superconducting
quantum devices. In this letter, we present a hydrodynamic
non-relativistic scalar electrodynamic theory capable of describing
systems comprising superconducting materials of arbitrary shape and
apply it to predict the existence of a negative (attractive) pressure between planar superconducting
bodies. For conventional superconductors with London penetration
depth λ_L≈ 100 nm, the pressure reaches
tens of N/mm^2 at angstrom separations.
Negative electrohydrostatic pressure between superconducting bodies
Hakan E. Türeci
July 10, 2023
===================================================================
In conventional superconductors, steady-state bulk phenomena are accurately described by both the Bardeen-Cooper-Schrieffer (BCS) <cit.> and Ginzburg-Landau (GL) <cit.> theories. The former provides a microscopic origin for superconductivity via the phonon-mediated pairing of electrons into bosonic quasiparticles known as Cooper pairs, while the latter provides a phenomenological description of the resulting Bose-Einstein condensate <cit.> with a macroscopic order parameter representing its mean-field wave function. The two theories were shown to be equivalent near the superconducting critical temperature <cit.>, and both reproduce the London theory <cit.>. Though the BCS theory is sufficiently general to predict time-dependent bulk phenomena, an effective macroscopic theory is desirable when such effects are triggered by electromagnetic sources in spatially inhomogeneous domains. To this end, generalized GL equations have been proposed to capture boundary and wave effects present in complex geometries <cit.>, but a consensus has not been reached on their validity far below the critical temperature, a regime all too familiar to the burgeoning area of superconducting quantum devices <cit.>.
In this letter, we present and explore predictions offered by a hydrodynamic representation of non-relativistic scalar electrodynamics applied to the superconducting order parameter at zero temperature. Few attempts have been made to solve this model's equations of motion (EOM) exactly <cit.>, but simplified versions have been considered via relaxations of minimal coupling <cit.> and can be credited as the underpinning of Josephson phenomena and circuit quantum electrodynamics <cit.>. Such approximate descriptions of light-matter interactions have enabled coveted numerical analyses of superconducting circuits embedded in electromagnetic resonant structures <cit.>, but they rely on London-like boundary conditions between superconducting and non-superconducting domains that seem to harbor serious inconsistencies <cit.>. Our goal is not to provide a rigorous derivation of the theory (the literature contains some attempts <cit.>), but rather to demonstrate that its un-approximated form circumvents spatial partitioning and implies a pressure between planar superconducting bodies that can be measured to determine its validity.
While our model shares similarities with the GL theory in that it describes the superconducting condensate with an order parameter, it differs in at least four important ways. First, in contrast to the diffusive time-dependent GL equations, our model entails wave-like dynamics implied by Schrödinger's equation. Second, we employ minimal coupling to all electromagnetic degrees of freedom, including the electric field via Gauss's law and Maxwell's correction to Ampere's law. Third, we incorporate arbitrary arrangements of both external drives and ionic backgrounds via normal (non-superconducting) source distributions. We take the latter to be static in nature, akin to the Jellium model of a metallic conductor <cit.>, but generalizable to include dynamical fluctuations for effective descriptions of phononic excitations. Fourth, in considering regimes far below the critical temperature, we omit the self-interaction term that governs the GL phase transition. In our model, nonlinear phenomena arise instead from our more general treatment of light-matter interactions, and the Higgs mechanism that yields the condensate's equilibrium number density via spontaneous symmetry breaking of the U(1) gauge group is replaced by requirement from the EOM that the bulk superconducting charge density cancels the prescribed ionic background.
Below, we present the Lagrangian and corresponding EOM at the heart of our model, along with an electrohydrodynamic representation of the Hamiltonian. Limiting our focus to electrostatic systems, we derive an electrohydrostatic condition arising from a self-consistent statement of Gauss's law and solve it numerically in the context of two planar superconducting bodies separated by vacuum. By considering variations in the system's electrohydrostatic energy with respect to the separation length, we find a negative (attractive) pressure between the two bodies that peaks at an emergent healing length. We conclude with a discussion of the length's significance.
Throughout the text, we employ the covariant formulation of electromagnetism with the Minkowski metric η^μν = diag(+,-,-,-)^μν, and we refer to the components of a generic four-vector as A^μ = (A^0, A⃗). Though the model describes non-relativistic charged superfluids, we find that a relativistic notation provides useful physical insight. We assume the effective Lagrangian governing the evolution of the order parameter ψ≡√(n)e^iθ and the electromagnetic four-potential A^μ is given by the non-relativistic theory of scalar electrodynamics under minimal light-matter coupling,
ℒ = ψ^*(iħ∂/∂ t - qcA_0 - 1/2m(ħ/i∇ - qA⃗)^2)ψ
-1/4μ_0F^μνF_μν - A^μ j_μ,
where F^μν≡∂^μ A^ν - ∂^ν A^μ is the electromagnetic tensor, j^ν is the four-current generated by normal charges, and q and m are the charge and mass of the superconducting charge carriers, respectively. The resulting set of EOM for the light-matter field arising from this Lagrangian couple Maxwell's equations for the four-potential and Schrödinger's equation for the order parameter,
∂_μ F^μν = μ_0 (𝒥^ν + j^ν)
iħψ̇ = (1/2m(ħ/i∇ - qA⃗)^2 + qcA_0)ψ,
where 𝒥^ν is the four-current generated by superconducting charges with number density n and fluid velocity v⃗. As derived in the Supplemental Material (SM) <cit.>, the system's Hamiltonian can be expressed in an electrohydrodynamic form as
ℋ = ϵ_0/2E⃗^2 + 1/2μ_0B⃗^2 + n(1/2mv^2) + ħ^2/8mn(∇ln n)^2,
with E⃗ = -c∇A_0 - Ȧ⃗ the electric field, B⃗ = ∇×A⃗ the magnetic field, n the superconducting number density, and v ≡‖v⃗‖ the fluid speed. Eq. (<ref>) represents a decomposition of the total energy density into electric, magnetic, kinetic, and elastic components, respectively <cit.>.
We now limit our focus to electrostatic systems, which are recovered by enforcing that all currents vanish, 𝒥⃗ = j⃗ = 0⃗. We first introduce the bulk superconducting number density n_s and two important length scales: the London penetration depth λ_L = √(m/(μ_0 q^2 n_s)) and the Compton wavelength λ_C = h/(mc). In terms of the normalized number densities n̅≡ n/n_s and n̅_src≡ n_src/n_s, Eqs. (<ref>) reduce to the electrohydrostatic condition,
n̅ + 2ξ^4∇^2(∇^2√(n̅)/√(n̅)) = n̅_src,
revealing the healing length ξ given by
ξ≡√(λ_Lλ_C/4π).
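For concreteness, ξ can be evaluated for the conventional superconductor considered later in the text (m = 2m_e, q = 2e, λ_L ≈ 100 nm); the short numerical sketch below, with standard values of the constants, recovers the angstrom scale quoted in the abstract.
[language=Python]
import math

h = 6.62607015e-34        # Planck constant (J s)
m_e = 9.1093837015e-31    # electron mass (kg)
c = 2.99792458e8          # speed of light (m/s)

m = 2.0 * m_e             # Cooper-pair mass for a conventional superconductor
lambda_C = h / (m * c)    # Compton wavelength of the pair, ~1.2e-12 m
lambda_L = 100e-9         # London penetration depth, ~100 nm
xi = math.sqrt(lambda_L * lambda_C / (4.0 * math.pi))
print(f"healing length xi = {xi:.2e} m")   # ~1e-10 m, on the order of 1 angstrom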
As shown in the SM <cit.>, Eq. (<ref>) is a self-consistent statement of Gauss's law that expresses the balance between electric and elastic forces in the electrostatic distribution of the fluid: qE⃗ = ∇Q, with Q ≡ -ħ^2/2m∇^2√(n)/√(n) the well-known quantum potential <cit.>. Because of the nonlinear term, proving the existence or
uniqueness of solutions n̅ is nontrivial and remains an open
problem. Moreover, for general source distributions
n̅_src, solutions are most attainable by numerical
methods, which can exhibit instabilities stemming from the potential
divergence of the nonlinear term as the density approaches zero. We
may nonetheless make some qualitative observations regarding solutions
to Eq. (<ref>). First, we anticipate the
asymptotic behavior n̅→n̅_src = 1 in the
bulk. Second, spatial derivatives in the nonlinear term ensure C^4
continuity of n̅ over all spatial coordinates. To avoid
introducing additional length scales, we focus here on
piecewise-constant sources n̅_src that take values zero outside and one inside the superconducting material.
To obtain the electrohydrostatic pressure
between two planar superconducting bodies, we first solve the electrohydrostatic condition sourced by two finitely separated ionic backgrounds. For each separation length L ∈ [0,20ξ], we then integrate the resulting electrohydrostatic energy density,
ℋ = ϵ_0/2‖ħ^2/2mq∇(∇^2√(n)/√(n))‖^2_u_electric + ħ^2/8mn(∇ln n)^2_u_elastic,
over all space V and compute the pressure P = -∇_L ∫_V
ℋdx. Details of the calculation are provided in
Fig. <ref>, with the main conclusion being the existence
of a negative (attractive) pressure between plates that vanishes in
the limit of zero or infinite separation and reaches a peak for L
≈ξ. Since C^4 continuity of the number density is guaranteed by the nonlocal quantum potential <cit.>, all contributions to the electrohydrostatic energy density are finite. Consequently, unlike other quantum forces such as the Casimir pressure <cit.>, the electrohydrostatic pressure does not exhibit a divergence for infinitesimal separations. Though the total pressure is strictly negative, the electric pressure exhibits a zero-crossing which can be understood perturbatively as a screening effect. As derived in the SM <cit.>, for source distributions representing small perturbations from a uniform background, n̅_src = 1 + δn̅_src with δn̅_src≪ 1, the electrohydrostatic condition reduces to a self-sourced version of the inhomogeneous biharmonic equation arising in linear elasticity theory <cit.>,
δn̅ + ξ^4∇^4δn̅≈δn̅_src,
with δn̅≡n̅ - 1 the first order perturbation in the number density and ∇^4 ≡ (∇^2)^2 the biharmonic operator. In contrast to the Yukawa potential arising from Thomas-Fermi screening <cit.>, the Green's function for Eq. (<ref>),
G(x⃗,x⃗') = 1/2π(ξ√(2))^3 sinc(‖x⃗-x⃗'‖/ξ√(2)) e^-‖x⃗-x⃗'‖/ξ√(2),
exhibits both decaying and oscillatory behavior on the length scale of the healing length. The oscillatory component of the bulk response to a point source necessarily gives rise to interference effects during the screening of more general defect distributions. We can thus attribute increases (decreases) in electric energy density to constructive (destructive) interference of screening charges.
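The interference picture can be made concrete with a short numerical sketch of the linearized condition, Eq. (<ref>), solved spectrally in one dimension. The slab geometry, grid, and units below are illustrative assumptions, not taken from the letter.
[language=Python]
import numpy as np

xi = 1.0                                  # lengths measured in healing lengths
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
L = 2.0                                   # vacuum gap of width 2*xi between slabs
dn_src = np.where(np.abs(x) < L / 2, -1.0, 0.0)   # background removed in the gap

k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
dn = np.fft.ifft(np.fft.fft(dn_src) / (1.0 + xi**4 * k**4)).real
# dn follows the source in the bulk but rings on the scale ~xi*sqrt(2) near the
# slab edges, the constructive/destructive interference discussed above.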
In the electrostatic limit, the healing length represents the scale on
which the superconducting number density varies in response to changes
in the background. While this interpretation might suggest analogies
with the well-known GL coherence length, as seen from
Eq. (<ref>), the healing length and the London penetration
depth are not independent parameters. As shown in the
SM <cit.>, the few known sources tabulating GL parameters
from independent experiments indicate that our healing length and the
GL coherence length are in poor agreement for most type-I
superconductors but only differ by about one order of magnitude for many
type-II superconductors <cit.>. This trend lends further support to
the notion that the hydrodynamic model is likely most valid at
temperatures far below the critical temperature, making type-II
superconductors ideal candidates for experimental validation of the theory. Furthermore, since
the healing length sets the scale underlying pressure variations, materials with
large London penetration depths are desirable. For a conventional (m =
2m_e, q = 2e) superconductor with λ_L≈ 100
nm, the pressure achieves a maximum value
of ≈ 40 N/mm^2 for separations on the order of
1Å.
The healing length can also be understood as the matter-like counterpart to the London penetration depth. As shown by way of perturbation theory in the SM <cit.>, a uniform medium's first order response to a low-power drive supports the propagation of both longitudinal k⃗_∥ and
transverse k⃗_⊥ plane waves in the fluid velocity field
v⃗∼k⃗_∥,⊥ e^i(k⃗·x⃗-ω_∥,⊥t), with frequencies ω_∥,⊥(k⃗) characterized by two different dispersions,
ω_|| = ω_p√(1 + (kξ)^4)
ω_⊥ = ω_p√(1 + (kλ_L)^2),
where ω_p≡ c/λ_L is the plasma frequency and k ≡‖k⃗‖ the wavenumber. The high-frequency limits of these relations manifest the longitudinal plane waves as matter-like polaritons ω_∥≈ħ k^2/(2m) and the transverse plane waves as light-like polaritons ω_⊥≈ ck. With this insight, we can thus identify the quasistatic ω_∥,⊥≪ω_p decay length of matter-like excitations with the healing length ξ≈ 1/Im[k(ω_∥)] and that of light-like excitations with the London penetration depth λ_L≈ 1/Im[k(ω_⊥)].
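The two branches are straightforward to tabulate; the sketch below evaluates the dispersions above in units of ω_p, with the ratio ξ/λ_L fixed by the conventional-superconductor parameters used earlier (an assumption of this illustration).
[language=Python]
import numpy as np

omega_p = 1.0                 # frequencies in units of the plasma frequency
xi_over_lL = 1.0e-3           # xi ~ 1 angstrom vs lambda_L ~ 100 nm (assumed)
k = np.logspace(-2, 6, 400)   # wavenumber in units of 1/lambda_L

w_par = omega_p * np.sqrt(1.0 + (k * xi_over_lL) ** 4)   # matter-like branch
w_perp = omega_p * np.sqrt(1.0 + k ** 2)                 # light-like branch
# w_par ~ hbar k^2 / (2m) and w_perp ~ c k in the high-frequency limits.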
To summarize the results of this study, we have presented a theory of superconductivity akin to the GL theory that is capable of describing the dynamics of superconducting quantum devices well below the critical temperature, and we have used the theory to predict a negative electrohydrostatic pressure between superconducting bodies. Moreover, we have identified an emergent healing length at which this pressure becomes relevant and shown that it is similar to the GL coherence length but represents the matter-like counterpart to the London penetration depth. This work naturally motivates an experimental demonstration of the pressure to determine the theory's validity, but the viability of such an observation requires understanding the magnitude of other forces present at this scale (e.g., Casimir and van der Waals interactions <cit.>), which is left to future work. Our formulation may also be applied to the analysis of magnetostatic systems, such as vortices, and dynamical systems, such as excited Josephson junctions. Finally, the theory may be further developed via second quantization and expanded to incorporate quasiparticle dynamics.
The authors thank Wentao Fan, Zoe Zager, and Terry Orlando for insightful discussions. This work was supported by the US Department of Energy,
Office of Basic Energy Sciences, Division of Materials
Sciences and Engineering, under Award No. DESC0016011, the National Science
Foundation under the Emerging Frontiers in Research
and Innovation (EFRI) program, Award No. EFMA164098, the Defense Advanced Research Projects Agency
(DARPA) under Agreements No. HR00111820046, No.
HR00112090011, and No. HR0011047197, and a
Princeton SEAS Innovation Grant.
|
http://arxiv.org/abs/2307.04468v1 | 20230710103412 | Badgers: generating data quality deficits with Python | [
"Julien Siebert",
"Daniel Seifert",
"Patricia Kelbert",
"Michael Kläs",
"Adam Trendowicz"
] | cs.LG | [
"cs.LG",
"68",
"D.m"
] |
Badgers: generating data quality deficits with Python
Julien Siebert, Daniel Seifert, Patricia Kelbert, Michael Kläs, Adam Trendowicz
August 12, 2023
=========================================================================================================================
Generating context-specific data quality deficits is necessary to experimentally assess the robustness of data-driven (artificial intelligence (AI) or machine learning (ML)) applications against data quality deficits.
In this paper we present , an extensible open-source Python library to generate data quality deficits (outliers, imbalanced data, drift, etc.) for different modalities (tabular data, time-series, text, etc.). The documentation is accessible at <https://fraunhofer-iese.github.io/badgers/> and the source code at <https://github.com/Fraunhofer-IESE/badgers>.
§ INTRODUCTION
§.§ Context
Applications and systems based on artificial intelligence (AI), machine learning (ML), data mining or statistics (hereafter referred to as data-driven software components) are pieces of software where the decision function is not programmed in a classical way, but is based on one or more models that can be designed either automatically (e.g. through learning or mining) or is based on domain expertise hypotheses (e.g. business rules or statistical tests).
Assessing the quality of such software components is not trivial, as it depends on several factors, such as the quality and quantity of the data, the type of model and how it is built, the application context, and domain expertise <cit.>.
§.§ Motivation
Data quality deficits (e.g., outliers, imbalanced data, missing values, etc.) can have a variety of effects on the performance of a data-driven model. A theoretical understanding of the robustness of data-driven models against specific data quality deficits is available for only a small number of models. Many can only be empirically tested against specific data quality deficits. To make matters worse, data quality deficits are context and application dependent.
Assessing the robustness of a data-driven software components to changes in data quality requires a systematic approach. It also requires the ability to generate specific data quality deficits in order to run tests.
Currently, there are many Python libraries to detect and handle data quality deficits, such as pyod[<https://pyod.readthedocs.io/en/latest/>] <cit.> for detecting outliers, imbalanced-learn[<https://imbalanced-learn.org>] <cit.> for dealing with imbalanced data, autoimpute[<https://autoimpute.readthedocs.io/en/latest/>] for imputing missing values, or great-expectations[<http://docs.greatexpectations.io>] for validation. In addition, the field of deep-learning has provided us with libraries for augmenting training data (see for instance albumentation[<https://albumentations.ai/docs/>] <cit.>). However, there are very few, if any, libraries for generating context-specific data quality deficits.
§.§ Contribution
This paper presents badgers, a Python package dedicated to generating data quality deficits. The aim is to propose a set of standardized and extensible objects (called generators) that can take data as input, infer context information from it, and generate data quality deficits. The package relies on a few design decisions. First, it follows a simple API: each generator provides a generate(X, y) function, where X is the input features and y is either a vector of class labels, regression targets, or None. Second, badgers aims to support as many data types as possible (e.g., tabular data, images, text, graphs, etc.). This means relying on mainstream and long-established libraries (such as numpy[<https://numpy.org/>], pandas[<https://pandas.pydata.org/>], or scikit-learn[<https://scikit-learn.org/stable/index.html>] for tabular data) whenever possible, or otherwise following reasonable design decisions. Finally, badgers should be structured and implemented so that it can be easily extended.
§.§ Structure of the paper
The paper is organized as follows. Section <ref> presents a short overview of related work. Section <ref> presents structure and implementation. Section <ref> shows a couple of application examples. Section <ref> discusses limitations, future work, concludes the paper and provides links to the project.
§ RELATED WORK
Assessing the quality of ML applications is a broad area of research. In their paper <cit.>, Zhang and co-authors provide a relatively comprehensive overview of testing activities that apply to machine learning. According to their categorization, we can argue that generating data quality defects falls into the spectrum of test input generation. That is, the generation of specific data with the purpose of evaluating specific aspects of the system under test. The techniques listed range from rule-based to generative AI techniques. Most of the methods presented here are either part of specific test frameworks or have been described in scientific papers. To the best of our knowledge, they are not part of a library dedicated to the generation of quality defects.
Data augmentation techniques are typically used in machine learning to enrich the training data set and help train models to achieve a better goodness of fit, generalize better, and become robust to some data quality issues (e.g., noise). They usually consist of specific transformations (like rotations or scaling for images) that, in principle, should not change the semantic of the data. Recent surveys, like <cit.> for images and <cit.> for text, provide an overview of the different techniques used in data augmentation. In section <ref>, we mentioned existing libraries for data augmentation. Although their main goal is not to specifically generate data quality deficits, data augmentation methods provide interesting algorithms that can be reused for our purpose.
When it comes to generating data quality deficits from existing data, very few papers provide overviews of existing methods and implementations. For instance, <cit.> discusses how to generate outliers from existing data. While the authors seem to have implemented a number of these methods to test them empirically, no implementation is actually available.
<cit.> discusses how to generate missing values. Note that the methods discussed in <cit.> have been implemented in R[ <https://cran.r-project.org/web/packages/missMethods/>] but not in Python.
In summary, there exists a variety of methods for generating data quality defects. But very few are available in a dedicated Python library.
§ PROPOSED SOLUTION: BADGERS
§.§ Overview
Badgers is a Python library for generating data quality deficits from existing data. As a basic principle, badgers provides a set of objects called generators that follow a simple API: each generator provides a generate function that takes as arguments X (the input features) and y (the class labels, the regression target, or None) and returns the corresponding transformed X and y. As an example, Figure <ref> shows the generate function implemented in the Gaussian white noise generator, which adds Gaussian white noise to existing data.
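Based on this API description, a minimal sketch of such a generator is given below; the class and attribute names are illustrative assumptions, not necessarily badgers' actual implementation.
[language=Python]
import numpy as np

class GaussianNoiseGenerator:
    """Adds Gaussian white noise to the input features; y passes through."""

    def __init__(self, noise_std=0.1, random_state=0):
        self.noise_std = noise_std
        self.rng = np.random.default_rng(random_state)

    def generate(self, X, y):
        X = np.asarray(X, dtype=float)
        Xt = X + self.rng.normal(loc=0.0, scale=self.noise_std, size=X.shape)
        return Xt, y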
The code is divided into two main modules: core and generators. The core module handles everything that is generic to all generators, such as base classes (in core.base), decorators (in core.decorators), and utilities (in core.utils). The generators themselves are stored under the generators module, which in turn is divided into submodules, each representing a data type (e.g., tabular_data, time_series, text, etc.). Each submodule hosts the generator implementations dedicated to one specific data quality deficit (such as outliers, drift, missingness, etc.) for a specific data type. Figure <ref> shows the details of the current structure.
§.§ Available features
Badgers is currently under development, and the list of features will most probably evolve in the near future. For the moment, the focus has been more on tabular data. As shown in Figure <ref>, the tabular_data module contains five submodules, including drift, imbalanced, noise, and outliers. As their names suggest, each submodule implements generators dedicated to specific data quality deficits. For time series data (time_series), the following submodules are available: noise and outliers. For text data (text), only one submodule (typos) is available at the moment.
§.§.§ Tabular Data
badgers.generators.tabular_data.drift
Drift happens when some statistical properties of the data change over time <cit.>. Two generators are currently available in this module: a global random-shift generator and a class-conditional random-shift generator. Figures <ref> and <ref> illustrate how these two generators work. Simply put, the first randomly shifts the values of each column independently of one another, which amounts to translating the data (see Figure <ref>): the input features are first standardized (mean = 0, var = 1), and a random number is added to each column. The second applies a similar transformation, but per class: all the instances of a given class are translated, and the translation differs from one class to another (see Figure <ref>).
badgers.generators.tabular_data.imbalanced
Whereas imbalanced data is usually understood in the context of classification <cit.>, when some classes are over- or under-represented, we use a broader definition: for us, a dataset is imbalanced when some statistical properties of the data are over- or under-represented in comparison to a ground truth. Currently, three generators have been implemented: one sampling by class, one by regression target, and one by feature values. Simply put, all of these generators sample the original dataset with replacement. The class-based generator samples data points belonging to each class to obtain a specified class distribution (e.g., 10% of class 1, 20% of class 2, and 70% of class 3, see Figure <ref>). The target-based generator samples data points according to the regression target and expects a function that maps the values of y to a sampling probability (see Figure <ref>). Finally, the feature-based generator performs a similar transformation, but the sampling probability now depends upon the input feature values (see Figure <ref>).
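A minimal sketch of the class-based sampling strategy is given below; the function and argument names are illustrative, not badgers' actual API.
[language=Python]
import numpy as np

def resample_classes(X, y, proportions, seed=0):
    """Sample (X, y) with replacement to reach a target class distribution."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    n = len(y)
    idx = []
    for cls, p in proportions.items():
        cls_idx = np.flatnonzero(y == cls)          # indices of this class
        idx.append(rng.choice(cls_idx, size=int(round(p * n)), replace=True))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

# e.g. resample_classes(X, y, proportions={1: 0.1, 2: 0.2, 3: 0.7})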
badgers.generators.tabular_data.noise
Currently only one generator has been implemented: a Gaussian white noise generator, which adds Gaussian white noise to the input features (see Figure <ref>).
badgers.generators.tabular_data.outliers
Two types of generators are currently available: generators that directly generate outliers from the input features, and generators that first reduce the dimensionality of the input features and then apply an outlier generator from the first category.
The first category currently contains four generators, based respectively on z-scores, hyperspheres, histograms, and kernel density estimation. Figures <ref>, <ref>, <ref>, and <ref> illustrate how these four generators create outliers.
The z-score-based generator creates data points where each feature i gets a value outside the range ]μ_i-3σ_i, μ_i+3σ_i[, where μ_i and σ_i are the mean and the standard deviation of feature i (see Figure <ref>). The hypersphere-based generator creates data points on a hypersphere of center μ and of radius larger than 3σ (see Figure <ref>). The histogram-based and kernel-density-based generators both create data points that belong to regions of low density; the difference between the two lies in their density estimation methods: the former approximates regions of low density by computing a histogram of the data (see Figure <ref>), while the latter uses a kernel density estimator (see Figure <ref>).
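A minimal sketch of the z-score strategy described above is given below; the function name and the sampled z-range are illustrative assumptions.
[language=Python]
import numpy as np

def zscore_outliers(X, n_outliers=10, seed=0):
    """Generate points whose features all lie outside ]mu - 3*sigma, mu + 3*sigma[."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    z = rng.uniform(3.0, 5.0, size=(n_outliers, X.shape[1]))   # |z| in [3, 5]
    z *= rng.choice([-1.0, 1.0], size=z.shape)                 # random sign
    return mu + z * sigma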
The decomposition-based generator belongs to the second category. It first standardizes the data and applies a dimensionality reduction technique (so far, badgers supports scikit-learn transformers that provide an inverse_transform function, such as PCA). The outliers are generated using one of the generators mentioned above. Finally, the standardization and the dimensionality reduction are inverted.
§.§.§ Time series data
Time series data is currently supported in badgers in the form of numpy arrays and pandas dataframes.
badgers.generators.time_series.noise
Currently only one generator has been implemented: a Gaussian white noise generator, which adds Gaussian white noise to the input features. The implementation is the same as for tabular data. Figure <ref> illustrates this generator.
badgers.generators.time_series.outliers
Here, some existing instances are replaced with outliers. Currently only one generator is implemented. It creates locally extreme values by changing the values of some randomly selected data points x(t_i) ∈ X (see Figure <ref>). The values are sampled outside the ]μ_j,Δ-3σ_j,Δ, μ_j,Δ+3σ_j,Δ[ range, where μ_j,Δ and σ_j,Δ are the mean and the standard deviation of the j-th feature computed over the local time interval Δ = [t_i - n, t_i + n].
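A minimal sketch of this local-outlier injection for a one-dimensional series is given below; the window size and the number of injected points are illustrative assumptions.
[language=Python]
import numpy as np

def inject_local_extremes(x, n_outliers=5, half_window=10, seed=0):
    """Replace selected points with values outside the local 3-sigma band."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).copy()
    candidates = np.arange(half_window, len(x) - half_window)
    for t in rng.choice(candidates, size=n_outliers, replace=False):
        window = x[t - half_window: t + half_window + 1]   # interval Delta
        mu, sigma = window.mean(), window.std()
        x[t] = mu + rng.choice([-1.0, 1.0]) * rng.uniform(3.0, 5.0) * sigma
    return x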
§.§.§ Text
Text data is currently supported in badgers in the form of lists of strings.
badgers.generators.text.typos
For now, only one generator is implemented. It randomly swaps adjacent letters in words longer than three letters, keeping the first and the last letters unchanged. As an illustration, the sentence "the quick brown fox jumps over the lazy dog" becomes "the qucik brwon fox jupms oevr the lzay dog" after applying this generator.
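A minimal sketch of this swap is given below; it reproduces the example above but is not necessarily badgers' actual implementation.
[language=Python]
import random

def swap_typos(sentence, seed=0):
    """Swap one pair of adjacent inner letters in every word longer than 3 letters."""
    rng = random.Random(seed)
    words = []
    for w in sentence.split():
        if len(w) > 3:
            i = rng.randrange(1, len(w) - 2)           # inner position
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]    # swap letters i and i+1
        words.append(w)
    return " ".join(words)

print(swap_typos("the quick brown fox jumps over the lazy dog"))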
§ EXAMPLES
We implemented several examples in the form of notebooks (accessible at <https://fraunhofer-iese.github.io/badgers/> under the tutorials section). The next two figures illustrate the use of a single generator (Figure <ref>) as well as the pipelining of several generators (Figure <ref>).
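As a minimal sketch of such pipelining, any objects following the generate(X, y) API can be chained; the Pipeline helper below is illustrative, not badgers' actual API.
[language=Python]
class Pipeline:
    """Chain generators that each expose a generate(X, y) method."""

    def __init__(self, generators):
        self.generators = generators

    def generate(self, X, y):
        for g in self.generators:
            X, y = g.generate(X, y)   # each step consumes the previous output
        return X, y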
§ CONCLUSION
This paper gave an overview of badgers, a Python package dedicated to generating data quality deficits.
Badgers is in a relatively early development stage. Until now, our focus has been on developing the library structure, the API, and some relatively simple generators. The goal was first and foremost to demonstrate the potential of such a library.
This library has been used in the context of internal projects, first to conduct robustness tests and to augment data. By open-sourcing it, we hope not only to provide a tool that eases robustness testing of data-driven applications but also to foster discussion on the topic of generating data quality deficits.
Future work will focus both on developing new generators and on testing the applicability of this library in the context of data science projects. Discussions and design decisions will be needed to prioritize the work and to decide how to improve the support of other types of data (for instance, images, graphs, geolocated data).
Finally, badgers can be installed with the Python package installer [<https://pip.pypa.io/en/stable/>]: pip install badgers.
The full documentation is accessible at <https://fraunhofer-iese.github.io/badgers/>.
The source code for badgers is available under the BSD-3 license at <https://github.com/Fraunhofer-IESE/badgers>.
|
http://arxiv.org/abs/2307.05642v1 | 20230711101635 | ConFL: Constraint-guided Fuzzing for Machine Learning Framework | [
"Zhao Liu",
"Quanchen Zou",
"Tian Yu",
"Xuan Wang",
"Guozhu Meng",
"Kai Chen",
"Deyue Zhang"
] | cs.SE | [
"cs.SE",
"cs.CR",
"cs.LG"
] |
360 AI Security Lab
China
[email protected]
360 AI Security Lab
China
Corresponding author
[email protected]
360 AI Security Lab
China
[email protected]
360 AI Security Lab
China
[email protected]
SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences
China
[email protected]
SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences
China
[email protected]
360 AI Security Lab
China
[email protected]
As machine learning gains prominence in various sectors of society for automated decision-making, concerns have risen regarding potential vulnerabilities in machine learning (ML) frameworks. Nevertheless, testing these frameworks is a daunting task due to their intricate implementation. Previous research on fuzzing ML frameworks has struggled to effectively extract input constraints and generate valid inputs, leading to extended fuzzing durations for deep execution or revealing the target crash.
In this paper, we propose ConFL, a constraint-guided fuzzer for ML frameworks. ConFL automatically extracts constraints from kernel code without the need for any prior knowledge. Guided by these constraints, ConFL is able to generate valid inputs that pass the verification and explore deeper paths of the kernel code. In addition, we design a grouping technique to boost the fuzzing efficiency.
To demonstrate the effectiveness of ConFL, we evaluated its performance mainly on Tensorflow. We find that ConFL is able to cover more code lines, and generate more valid inputs than state-of-the-art (SOTA) fuzzers. More importantly, ConFL found 84 previously unknown vulnerabilities in different versions of Tensorflow, all of which were assigned with new CVE ids, of which 3 were critical-severity and 13 were high-severity. We also extended ConFL to test PyTorch and Paddle, 7 vulnerabilities are found to date.
[500]Software and its engineering Software testing and debugging; Software reliability
ConFL: Constraint-guided Fuzzing for Machine Learning Framework
Deyue Zhang
===============================================================
§ INTRODUCTION
Machine learning (ML) has transformed modern technology by offering efficient solutions for tasks such as image classification, speech recognition, and natural language processing. Alongside advanced algorithms, ML frameworks like TensorFlow, PyTorch, and Caffe serve as essential building blocks for machine learning services. These frameworks equip developers with comprehensive APIs for data processing, model training, and inference, thereby simplifying and expediting the creation of ML applications. As the most widely utilized machine learning framework, TensorFlow has been embraced by millions of developers and underpins machine learning systems at thousands of companies. This includes many of the world's largest machine learning users, such as Google, Apple, ByteDance, Netflix, Tencent, Twitter, and numerous others <cit.>.
Despite their popularity, ML frameworks are not immune to common software vulnerabilities such as stack overflow, heap overflow, and memory corruption issues. For example, TensorFlow has had 432 CVE vulnerabilities to date. Such security problems can lead to the leakage of sensitive information, arbitrary code execution, and the potential compromise of ML systems. As the use of ML applications continues to increase, the risks associated with ML frameworks can be significantly amplified. Therefore, it is essential to identify vulnerabilities in ML frameworks to mitigate these risks.
However, finding vulnerabilities in ML frameworks is challenging due to their complex implementation. A typical ML framework consists of a frontend that provides APIs for developers to ease model development and a backend that performs tasks such as matrix computation, model optimization, or hardware adaptation. Operators, which enable communication between the frontend and backend, lie in the backend but can be invoked from the frontend. As the computation unit for ML frameworks, operators are the main target for vulnerability hunting. However, identifying these operators can be a laborious task. Furthermore, operators may have multiple parameters of arbitrary types and unclear constraints, increasing the difficulty of test input generation. Regular fuzzers, such as Peach <cit.>, AFL <cit.>, and libFuzzer <cit.>, either require significant engineering efforts to translate input grammar or lack knowledge of input constraints, which makes them limited in testing ML frameworks.
Recently, a line of work has made progress in fuzzing ML frameworks. For instance, DocTer <cit.> extracts input constraints from API documentation and uses them to guide test input generation for fuzzing machine learning API functions. FreeFuzz <cit.> executes code or models collected from open source with instrumentation to trace dynamic information for each covered operator, then leverages this information to perform fuzz testing. DeepRel <cit.> builds on FreeFuzz to share mined valid inputs between similar functions. However, DocTer, FreeFuzz, and DeepRel partially or entirely depend on API documentation, which may not always be available or well maintained. As a result, these approaches might not cover all functions in a library's APIs. Furthermore, not every function may be invoked in open-source code, highlighting the need for new input constraint inference techniques that do not rely on documentation or high-quality sample usages. Since API documentation can be incomplete, outdated, or inconsistent with the code, the derived constraints may not be comprehensive enough, leading to less efficient testing. For instance, DocTer achieves only a 33% valid input generation rate. IvySyn <cit.> automatically identifies DL kernel code implementations and adds fuzzing hooks to perform mutation-based fuzzing with type-aware mutations. Once a set of crashing kernels is obtained, IvySyn synthesizes high-level code snippets that can propagate the offending inputs through high-level APIs. However, IvySyn's approach of synthesizing code snippets may not always be effective in producing evidence of a vulnerability, especially if the code is complex or the vulnerability is deeply embedded in the system.
In this work, we introduce ConFL, an approach that addresses the limitations of previous methods by automatically extracting operator constraints from source code. We choose the Python frontend as the entry point to test operators in backend C/C++ kernel code. ConFL first traverses all the operators in the source code, collecting information such as operator name, operator call chain, and parameter names. Next, ConFL extracts constraints from the source code using static taint analysis, which can be categorized into four types: environmental constraints, dependency constraints, validation constraints, and logical constraints. ConFL then constructs two types of fuzzing templates using the operator information and constraints: data templates specify the shape, type, and value of an operator's parameters, while control templates determine the control flow of the operator. Guided by these constraints, ConFL generates high-quality, structurally and semantically valid test inputs to examine operators.
To demonstrate the effectiveness of our approach, we primarily evaluate its performance on TensorFlow. ConFL outperforms DocTer, FreeFuzz, DeepRel, and IvySyn in various aspects. ConFL demonstrates a higher code coverage, indicating its effectiveness in generating valid inputs and exploring a broader range of code paths. Furthermore, the success rate of ConFL is consistently higher, as it is able to execute more test cases without parameter errors or exceptions, ultimately leading to better vulnerability detection. Most notably, ConFL discovers 84 previously unknown vulnerabilities in different versions of TensorFlow, all of which have been assigned new CVE IDs, including 3 critical-severity and 13 high-severity vulnerabilities. We have also extended ConFL to test PyTorch and Paddle, uncovering 7 vulnerabilities to date.
Contributions. We make the following contributions.
* Efficient operator collection and constraint extraction: ConFL effectively collects operators from machine learning frameworks, extracting environmental constraints, dependency constraints, validation constraints, and logical constraints to build comprehensive constraint trees.
* Enhanced test data generation: Utilizing the extracted constraints, ConFL generates test input in a more guided and efficient manner, leading to a higher number of successful executions and improved code coverage compared to random generation or state-of-the-art fuzzers.
* Increased vulnerability detection: By efficiently generating valid inputs, ConFL effectively identifies vulnerabilities in ML frameworks, enhancing the security and robustness of machine learning frameworks.
§ BACKGROUND & PROBLEM STATEMENT
§.§ Typical Architecture of ML Framework
An ML framework serves as a platform that simplifies the process of creating, training, and deploying machine learning models by providing pre-built libraries and tools for developers. The core functions of an ML framework, which involve mathematical algorithms for processing data and making predictions, are implemented in the kernels. Kernels are the central components of an ML framework that handle the low-level operations required for the framework, and developers access these functions through frontend interfaces.
For instance, TensorFlow, a popular ML framework, comprises a frontend and a backend. The frontend offers programming interfaces such as Python, Java, and C++, and constructs the computation graph. The backend, on the other hand, provides the runtime environment and executes the computation graph. It comprises four layers, namely the runtime layer, computation layer, network layer, and device layer. The runtime layer receives, constructs, and orchestrates the computation graph, while the computation layer offers kernel implementations of operators. The network layer implements inter-component communication, and the device layer supports various devices such as CPU, GPU, TPU, among others.
§.§ Operators of Machine Learning Libraries
In this paper, our focus is on detecting vulnerabilities in operators used within machine learning frameworks. Operators serve as functions or operations that perform mathematical calculations on tensors or arrays of data, and are utilized to construct machine learning models. These operators form the building blocks of such models, and are responsible for performing tasks such as Conv3D for convolution or MaxPool for pooling. For efficient computation, operators are often developed and implemented in C/C++, while providing a Python interface to users.
We take the operators LoadAndRemapMatrix (LARM) <cit.> and BoostedTreesCalculateBestFeatureSplit (BTCBFS) <cit.> provided in TensorFlow as examples. After analyzing the operator description files in the source code, we found that LARM takes 9 parameters to load a tensor named old_tensor_name from a checkpoint, and BTCBFS takes 9 parameters to calculate gains for each feature and return the best possible split information for the feature.
§.§ Problem Statement
Fuzzing operators poses numerous challenges, making the process significantly more intricate than fuzzing conventional software systems. One key challenge is the complexity of operator functions, which cover various tasks such as file management and computation. Additionally, implementations of operators differ across architectures (e.g., CPU, GPU), which further complicates testing all operators. Another challenge lies in the complexity of the input spaces of operators: they often deal with high-dimensional inputs, like images or text sequences, making it difficult to generate meaningful and diverse test inputs within these spaces.
Motivating Example. Table 1 lists two operators that require multiple parameters, and these parameters have different types. The primary parameter type is called Tensor, which is a multi-dimensional array with a uniform type, such as int, float, or string.
We manually wrote the fuzzing templates to test these operators with random data. We spent a considerable amount of time collecting the data type of the operator parameters, which include float, int, char, string, and bool. We also considered the computing characteristics of the ML framework, including the list type and the Tensor type. An example of how to fill the BTCBFS operator with test data is given below:
[language=Python]
tensorflow.raw_ops.BoostedTreesCalculateBestFeatureSplit(
node_id_range=[1,7],
stats_summary=[[[[2.0]], [[3.]], [[3.]]]],
l1=[0.0],
l2=[0.0],
tree_complexity=[1.0],
min_node_weight=[0.7],
logits_dimension = 2,
split_type = 'equality'
)
Figure: Cases for testing BTCBFS.
However, upon running the BTCBFS test script, we encountered a type error message.
When testing BTCBFS, we narrowed the range of random data generation by analyzing the ranges of the operator parameters in the documentation. For example, stats_summary is four-dimensional, and logits_dimension is an integer larger than 0. Although relatively well-formed test data is generated this way, the operator still cannot be executed because valid data cannot be produced. Once the test input is invalid, the computing process is terminated in the Python frontend, making it difficult to test the operator's deeper code. To make matters worse, the term reported in the error message does not appear in the operator's parameter name list, which makes it even more difficult to adjust the test data.
§.§ System Overview
In this paper, we focus on generating semantically valid test inputs for operators. Since extracting constraints from API documentation is incomplete and requires domain knowledge, we opt to automatically extract operator constraints from the source code. Our goal is to generate valid test inputs by leveraging constraints to pass parameter validation detection successfully in the C++ backend. To achieve this, we have developed a prototype tool called ConFL, which consists of three modules, as shown in Figure <ref>.
Operator Collection. ConFL aims to test operators, so the first step is to collect operator information. This module automatically traverses all the operators of an ML framework and collects information including operator name, operator call chain, and parameter names. Additionally, it can construct call chains to be used in fuzzing. This module is further explained in section 3.1.
Constraints Extraction. In this module, we extract constraints from the source code and categorize them into four categories: environmental constraints, dependency constraints, validation constraints, and logical constraints. Environmental and dependency constraints are used to restrict the execution context of operators to ensure they have access to the necessary resources for execution. Validation and logical constraints are used to generate valid and diverse test inputs to enable execution of the deeper code of the operator. We will provide a detailed description of this module in Section 3.2.
Template Generation. Using operator information and constraints, ConFL generates fuzzing templates, which provide an abstract representation of the operator before specific testing. There are two types of templates: data templates and control templates. Data templates specify the shape, type, and value of parameters, while control templates specify the control flow of the operator. By using the operator information, ConFL builds a skeleton for the template, and then relies on constraint information to build dependencies for different parameters. This module will be described in detail in section 3.3.
§ METHODOLOGY
§.§ Operator Collection
Rationale for collecting operators in Python front-end code.
As previously mentioned, ML frameworks like TensorFlow can be logically divided into two parts: the frontend, which provides interfaces in various programming languages for developers, and the C/C++ backend, which aims to enhance computing efficiency. We have selected the Python frontend as the entry point for testing the backend C/C++ code for several reasons:
* It is consistent with practical usage. Most developers build neural networks for model training and inference with the Python frontend interfaces.
* Python offers excellent language features. Unlike IvySyn, which tests from the C++ side, ConFL chooses the Python side as the operator's input. The Python frontend has rich types, such as int, float, tensor, etc., which can guide the generation of valid parameters. Additionally, writing C/C++ harnesses for each operator is challenging, whereas generating operator templates automatically with Python's reflection mechanism saves a considerable amount of time.
ML frameworks connect Python code with C/C++ code using pybind11, SWIG <cit.>, and other methods, which are loaded into the Python runtime as modules. By selecting the Python frontend as the entry point, ConFL automatically traverses functions, classes, and modules with the help of Python's reflection mechanism, obtaining operator package directories. With operator names and package directories, ConFL analyzes the operator's signature and extracts parameter names, and generates an operator test template by concatenating this information.
Algorithm. Algorithm <ref> shows in detail how ConFL collects operators and generates the test templates. In the process of collecting operators, a tree structure of Module-Operator is constructed in line 1, which represents the call path of an operator from a leaf node to the root. ConFL first collects modules: from line 5, it iteratively traverses all available modules, getting submodules from the parent module in line 6 and adding them to the tree in lines 9-15 to obtain the full call path for each operator. Since there may be multiple call paths for an operator or a module, when a duplicate module is detected, the one with the shortest call path is preserved, as shown in lines 10-12. Finally, the procedure returns the set of all modules in the ML library. After collecting modules, ConFL gathers operators from the modules: it traverses the collected modules from line 18, gets the operators of each module in line 20, and adds them to the tree in lines 22-27. As with modules, duplicated operators are taken into consideration in lines 23-25. Eventually, after collecting all operators, ConFL traverses the Module-Operator tree and generates a harness for each operator.
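A simplified sketch of this traversal is given below; it illustrates the breadth-first collection and shortest-call-path deduplication of Algorithm <ref>, but is not ConFL's actual implementation.
[language=Python]
import inspect
from collections import deque

def collect_operators(root_module):
    """Breadth-first traversal keeping the shortest call path per operator."""
    paths = {}
    queue = deque([(root_module, root_module.__name__)])
    seen = set()
    while queue:
        mod, path = queue.popleft()
        if id(mod) in seen:
            continue
        seen.add(id(mod))
        for name, obj in inspect.getmembers(mod):
            full = f"{path}.{name}"
            if inspect.ismodule(obj):
                queue.append((obj, full))
            elif inspect.isroutine(obj) or inspect.isclass(obj):
                # on duplicates, keep the operator's shortest call path
                if id(obj) not in paths or len(full) < len(paths[id(obj)]):
                    paths[id(obj)] = full
    return paths

# e.g. import tensorflow; candidates = collect_operators(tensorflow)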
As a result, there are 9689 operators in total after the initial collection. Without any omission or manual writing, ConFL can automatically generate test templates for all interfaces in TensorFlow.
Further selection and deduplication. ConFL analyzes the security of the C/C++ backend through the Python frontend interfaces. Therefore, ConFL removes the operators whose computation is accomplished entirely in the frontend: such operators execute no C/C++ backend code and are excluded from later testing.
With Python's built-in id() function, we obtain unique interfaces identified by their memory addresses. However, we have observed that even when different Python interfaces possess distinct implementations, their corresponding C function call chains might be identical. To address this, ConFL further deduplicates operators by considering both the operator parameters and their call chain. As illustrated in Figure <ref>, the three interfaces shown share the same parameters, and the second and third interfaces have identical call chains. Consequently, we only generate test templates for the first two interfaces.
[language=Python]
OP: tensorflow.reshape(tensor=[1,2], shape=[1,2])
Path: Dispatch -> TFE_Py_FastPathExecute -> TFE_Py_Execute
OP: tensorflow.raw_ops.Reshape(tensor=[1,2], shape=[1,2])
Path: TFE_Py_FastPathExecute -> TFE_Py_Execute
OP: tensorflow._compat.readers.array_ops.gen_array_ops.reshape(tensor=[1,2], shape=[1,2])
Path: TFE_Py_FastPathExecute -> TFE_Py_Execute

Figure: Function call chains in C++.
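A sketch of this deduplication step is given below; get_call_chain stands in for the CPython-level hook described in the Implementation section and is an assumed callable, not an actual ConFL API.
[language=Python]
import inspect

def dedup_operators(operators, get_call_chain):
    """Keep one interface per (parameter names, C call chain) pair."""
    kept, seen = [], set()
    for op in operators:
        try:
            params = tuple(inspect.signature(op).parameters)
        except (TypeError, ValueError):
            params = ()        # C-implemented callables may expose no signature
        key = (params, tuple(get_call_chain(op)))
        if key not in seen:
            seen.add(key)
            kept.append(op)
    return kept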
Adaptability. When generating operator test templates, different ML framework frontends may expose operators in different forms. Taking the Python frontend as an example, an operator may be a function or a class. For functions, the calling statement is constructed automatically; for classes, an instance of the class is created first, followed by the calling code.
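As an illustration, a template skeleton for the two cases might be rendered as follows; the helper name and the zero-argument call on the class instance are our assumptions.
[language=Python]
import inspect

def build_call_skeleton(op, params):
    """Render a calling statement for a function, or an
    instantiate-then-call pair for a class."""
    args = ", ".join("{}={}".format(k, v) for k, v in params.items())
    name = getattr(op, "__qualname__", getattr(op, "__name__", "op"))
    if inspect.isclass(op):
        return "obj = {}({})\nobj()".format(name, args)
    return "{}({})".format(name, args)

# build_call_skeleton(print, {"end": "''"}) -> "print(end='')"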
§.§ Operator Constraints Extraction
The runtime behavior of an operator is dependent on the input data and the environment in which it is executed. ConFL meticulously extracts constraints within operators to ensure comprehensive coverage. We define the conditions necessary for an operator's successful execution as constraints and classify them into four types, including environmental constraints, dependency constraints, validation constraints and logical constraints. By thoroughly examining these constraints, ConFL achieves higher code coverage and uncovers vulnerabilities hidden in deep execution paths (described in detail in Section 4).
§.§.§ Environmental Constraints
In ML frameworks, there are various execution options available, and we refer to the constraints that determine the choice of execution mode as environmental constraints. For example, TensorFlow primarily offers two modes of executing operations: eager execution and graph execution. Eager execution is an imperative programming mode in which TensorFlow operations are executed immediately as they are called from Python. This mode is more intuitive and flexible, allowing easier debugging and experimentation; users can work with TensorFlow operations just like any other Python operations, with no need to explicitly build a computational graph before executing it. Graph execution, also known as static computation graph execution, is the traditional mode of execution in TensorFlow. In this mode, users first define a computational graph that represents their model or algorithm, and TensorFlow then executes the graph in an optimized manner using a session. Graph execution offers performance benefits through optimizations such as parallelism, distributed execution, and efficient memory allocation.
Since TensorFlow 2.0, eager execution has been the default mode, but users can still use graph execution through the tf.function decorator, which converts the user's Python code into a static graph. This allows users to leverage the benefits of graph optimizations while keeping the flexibility of eager execution.
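For illustration, the two modes can be exercised on the same computation as follows (requires TensorFlow 2.x):
[language=Python]
import tensorflow as tf

def square_sum(x, y):                        # eager: executed immediately
    return tf.reduce_sum(x * x + y * y)

graph_square_sum = tf.function(square_sum)   # graph: traced, then optimized

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
assert square_sum(x, y).numpy() == graph_square_sum(x, y).numpy()  # 30.0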
In addition, TensorFlow uses Accelerated Linear Algebra (XLA), a domain-specific compiler for linear algebra, to accelerate computation.
§.§.§ Dependency Constraints
Dependency constraints refer to the parameter constraints that an operator must satisfy before actual execution. If the types and data of the parameters do not meet the operator's requirements, the execution of the operator will not commence. We divide dependency constraints into resource-dependent constraints and operation-dependent constraints.
Resource-dependent constraints. Various types of parameters are required during the computation process of ML framework operators. In TensorFlow, besides simple types such as int and float, there are also special types like resource and variant. The resource type represents a handle to a mutable, dynamically allocated resource, while the variant type represents data of an arbitrary type<cit.>. Based on the composition characteristics of operator parameters, we classify the types into two categories according to their complexity. All types in TensorFlow are shown in Table <ref>.
* Basic types: Scalar like int, float, complex, char and string.
* Composite types: Basic tensor, which is a combination of basic types. Resource tensor, such as a file handler or a series of codes.
In the code repositories of ML frameworks like TensorFlow and Paddle <cit.>, operator description files are typically used to dynamically generate code at compile time or to track historical changes in operator code. TensorFlow's operator description file, located in the source code, contains the operator name, parameter names, and types. By parsing this information, ConFL obtains the types of all parameters. For LoadAndRemapMatrix, for example, ckpt_path is a tensor of DT_STRING type, and row_remapping is a tensor of DT_INT64 type. Additionally, the return value types can be extracted, such as the result of LoadAndRemapMatrix being a tensor of DT_FLOAT type.
After parsing the type and value information of each parameter of LoadAndRemapMatrix, ConFL generates the following intermediate description:
[language=Python]
{
    'ckpt_path': ['DT_STRING'],
    'old_tensor_name': ['DT_STRING'],
    'row_remapping': ['DT_INT64'],
    'col_remapping': ['DT_INT64'],
    'initializing_values': ['DT_FLOAT'],
    'num_rows': ['int'],
    'num_cols': ['int'],
    'max_rows_in_memory': ['int']
}

Figure: The parameter types of LoadAndRemapMatrix.
We find that there are dependencies between different operators: the output of one operator is used as the input of another. However, such parameter construction is not reflected in the documentation or source code. We call these resource-dependent constraints, and we save the output types of successfully executed operators; by analyzing these types, ConFL can abstract resource-dependent constraints and construct correct parameters.
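A minimal sketch of this bookkeeping is shown below, assuming outputs expose a dtype attribute; all names here are ours.
[language=Python]
registry = {}          # output dtype -> operators observed to produce it

def record_outputs(op_name, results):
    """After a successful execution, remember which operator can
    produce each output type."""
    outs = results if isinstance(results, (list, tuple)) else [results]
    for r in outs:
        dtype = str(getattr(r, "dtype", type(r).__name__))
        registry.setdefault(dtype, []).append(op_name)

def producer_for(dtype):
    """Pick a saved operator whose output satisfies a required input type."""
    candidates = registry.get(str(dtype), [])
    return candidates[0] if candidates else None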
Different operators may depend on various types of file data, making manual generation a labor-intensive task. For instance, LoadAndRemapMatrix requires loading a model file in ckpt format during execution. Since the corresponding input parameter is a string representing the storage path of the model file, mutating the string leaves the operator unable to read the model file during execution, resulting in execution failure.
To address this issue, we propose to automatically extract the relevant file pre-constraints with the assistance of test cases. TensorFlow contains an extensive collection of test cases; when testing a specific operator, the test case includes code that prepares the environment, such as generating the required file. The test code for LoadAndRemapMatrix is stored in checkpoint_ops_test.py and contains the following:
[language=Python]
class LoadAndRemapMatrixTest(test.TestCase):
    def setUp(self):
        ...
        matrix = variable_scope.get_variable(
            'matrix',
            dtype=dtypes.float32,
            ...)
        save = saver.Saver([matrix])
        save.save(...)

Figure: LoadAndRemapMatrix's test case.
By instrumenting the LoadAndRemapMatrix operator and monitoring the file accesses along its execution path, we can identify the corresponding file generated when the test case is executed.
Operation-dependent constraints. Operators are the smallest computing units in ML frameworks. According to our analysis, most operators have few calling dependencies, allowing them to be tested individually. However, some parameters are of special types that require results produced by other operators as their inputs. ConFL identifies such operators (e.g., StackPush, StackPop, and StackClose) through keyword matching, extracts the operator entity, and tests operators sharing the same entity as a single group. For instance, Stack-related operators all share the Stack entity and construct the relevant data for testing through built-in test sequences.
Different operators may operate on the same entity, such as Stack. If testing is performed on a single operator only, the operation dependencies might not be satisfied. For example, a Push operation first requires an initialized Stack, while a Pop operation needs both the initialized stack as a parameter and data in the stack, which in turn requires a prior Push. We refer to the preceding operations that ensure an operator's smooth execution as operation-dependent constraints.
Operator names typically describe their functions semantically, such as StackClose, StackPush, and StackPop. By conducting part-of-speech analysis, we can identify the operations and entities within an operator name and treat the different operators acting on the same entity as one set. In terms of execution order, if a test case is detected (through the hook method) to call the operators of a set in a specific order, the relevant sequence is saved; if no relevant test exists, operation dependencies are determined through random execution.
During part-of-speech determination, since a word's position affects the judgment, we shift the tokenized sequence to the left and save the verb tokens identified across all operators. After excluding the verb tokens, we assess the remainder of the operator's name and ultimately cluster the operators with the same subject.
[language=Python]
LookupTableFind ['Find']
LookupTableRemove ['Remove']
ReaderReadUpTo ['Read']
ReaderRestoreState ['Restore']
Stack ['Stack']
StackClose ['Stack', 'Close']
StackPush ['Push']
Figure: Operator names and their operations.
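The grouping in the figure above can be reproduced by a simple sketch like the following, where the verb list is assumed to have been harvested from all operator names beforehand:
[language=Python]
import re
from collections import defaultdict

VERBS = {"Close", "Push", "Pop", "Find", "Remove", "Read", "Restore"}

def entity_of(op_name):
    """Split a CamelCase operator name, drop verb tokens, and keep the
    remaining subject as the shared entity (clustering sketch)."""
    tokens = re.findall(r"[A-Z][a-z0-9]*", op_name)
    subject = [t for t in tokens if t not in VERBS]
    return "".join(subject) or op_name

groups = defaultdict(list)
for name in ["Stack", "StackClose", "StackPush", "StackPop",
             "LookupTableFind", "LookupTableRemove"]:
    groups[entity_of(name)].append(name)
# groups: {"Stack": [...4 ops...], "LookupTable": [...2 ops...]}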
§.§.§ Validation Constraints
Environmental constraints and dependency constraints mainly arrange the execution environment of the operator so that it has the resources to execute, but successful execution also depends on constraints over the input parameters. For example, the explicit shape, type, and value constraints of every parameter in BTCBFS (BoostedTreesCalculateBestFeatureSplit) are obtained with the above methods. However, we find that there are also dependencies between parameters: for example, a dimension of stats_summary in BTCBFS must be larger than the value of logits_dimension.
Validation constraints in operators refer to the conditions or rules on parameters that must be satisfied for the code to execute correctly and produce the expected output. These constraints play a crucial role in ensuring data integrity, maintaining API stability, and preventing errors or exceptions during the execution of an operator. If input parameters do not satisfy the semantic rules, test cases often fail the semantic checks and falter in the shallow code of the operator. Consequently, only a small portion of the inputs produced by generic generation-based fuzzing reaches the operator execution stage, where deep bugs typically hide, leaving a large part of the operator code unreached.
In this section, we propose a constraint extraction technique for operators. It analyzes the source code of operators, locates semantic checking statements, extracts the specific values compared against parameters, and ultimately produces validation constraints. Validation constraints can be categorized into type constraints and numerical constraints, which guide the generation of valid parameters in the subsequent testing stages. The process consists of three main steps: first, compiling the source code into LLVM's <cit.> intermediate representation (IR) using clang <cit.>; second, specifying taint sources, propagation rules, and sinks; and finally, extracting constraints at the taint sinks.
In TensorFlow, operators are implemented by extending OpKernel and overriding the compute method. Operator parameters are divided into inputs and attributes, as indicated by marker ① in Figure <ref>: inputs are tensors whose values may change, while attributes remain constant from step to step. The operator receives attribute parameters in its constructor and input parameters in the compute method. As a result, we select the parameter-fetching calls (dotted and solid boxes in Figure <ref>) as sources. The first parameter of such a function is the name of the Python positional parameter, while the second receives the specific parameter value. We have identified seven types of source points:
[language=C]
context->input(INDEX);
context->input("VARNAME", VAR);
context->input_list("VARNAME", VAR);
context->mutable_input(INDEX, _);
context->mutable_input("VARNAME", VAR, _);
context->mutable_input_list("VARNAME", VAR);
context->GetAttr("VARNAME", VAR);

Figure: The seven types of taint source points.
As the operator primarily computes on its input parameters, the return values of these input-fetching functions are designated as taint sources. Data-flow instructions then act as the targets of taint propagation analysis: when a tainted variable appears among the operands of an instruction, the instruction's result variable is marked as tainted. Taint propagation is denoted by ② in Figure <ref>.
Identifying taint sinks is a critical aspect of the taint analysis used in this approach. After extensive analysis, we found that most machine learning frameworks use macros to evaluate the validity of operator parameters in the source code, such as OP_REQUIRES in TensorFlow, TORCH_CHECK in PyTorch, and PADDLE_ENFORCE in Paddle. The operator BTCBFS employs such a macro to check the relationship between its parameters: if the test data fails to meet the constraints, an error is reported and the process is terminated.
The macro's second parameter is an expression, as indicated by the red background in Figure <ref>, while the third parameter is an error output statement: when the expression evaluates to false, an output function is called to print the error. In the IR, the evaluation of such an expression appears as a conditional jump instruction whose jump-target basic block contains an error output function or a check function. As a result, taint sinks are identified by the following three characteristics:
* It is a conditional jump instruction.
* The jump condition contains tainted variables.
* There is either an error output function or a check function in the jump target basic block.
At taint sinks that satisfy the above characteristics, the jump conditions are extracted as operator constraints. Finally, the extracted constraints are simplified and rewritten into a readable form.
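Conceptually, the sink check can be summarized by the following sketch over a simplified IR representation (plain dictionaries instead of real LLVM bindings; the error-function names are illustrative):
[language=Python]
ERROR_FUNCS = {"CheckFailed", "OpRequiresError"}   # illustrative sink markers

def extract_constraints(ir_func, tainted_names):
    """Taint-sink sketch: a conditional branch whose condition uses a
    tainted value and whose target block calls an error/check routine
    contributes its condition as a validation constraint."""
    constraints = []
    for inst in ir_func["instructions"]:
        if inst["opcode"] != "br_cond":
            continue                           # only conditional jumps
        if not set(inst["cond_uses"]) & tainted_names:
            continue                           # condition is not tainted
        block = ir_func["blocks"][inst["target"]]
        if set(block["calls"]) & ERROR_FUNCS:
            constraints.append(inst["cond_expr"])
    return constraints

# e.g. extract_constraints(ir, {"num_rows", "row_remapping"})
# -> ["len(row_remapping) == num_rows", ...]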
Validation constraints, indicated by the red background in Figure <ref>, relate to parameter validity checking. As demonstrated in the example in Figure <ref>, marker ③ represents a validity check; if the check fails, the subsequent calculation functions cannot proceed as expected.
For validation constraints, ConFL not only extracts the topmost linear sequence but also analyzes loop structures, as demonstrated in Algorithm <ref>: when a loop body is found to contain only validity-detection statements, those statements are extracted as constraints.
§.§.§ Logical Constraints
We refer to the constraints derived from if-else branch statements in operators as logical constraints (brown background in Figure <ref>). Logical constraints appear mostly in branch judgments tied to the specific logic of the operator code.
ConFL supports logical constraints by constructing a constraint tree. For such a branch, ConFL first determines whether the if condition involves a tainted value; if so, it adds the constraint to the constraint tree and then analyzes the validity-judgment statements within the if block.
Using the constraint tree, ConFL can choose one branch to generate fuzzing templates. However, the extracted constraints are at the IR level, corresponding to the backend C/C++ code, whereas ConFL directly calls the Python frontend interface and therefore needs Python-level constraints to guide data generation. In other words, there is a gap between IR constraints and Python parameters, so it is crucial to lift the IR constraints into their Python frontend form, making them directly usable during fuzzing.
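One simple way to realize this lifting is a table of rewrite rules from backend expressions to the frontend vocabulary; the C++-side patterns below are assumptions about how such checks appear after extraction, not ConFL's actual rule set.
[language=Python]
import re

# Illustrative rewrite table; left-hand patterns are assumed IR-level forms.
LIFT_RULES = [
    (r"(\w+)_t->dim_size\((\d+)\)", r"\1.shape()[\2]"),
    (r"(\w+)_t->NumElements\(\)",   r"len(\1)"),
    (r"(\w+)_t->dims\(\)",          r"len(\1.shape())"),
]

def lift(ir_constraint):
    """Rewrite one backend-level constraint into the Python-level
    form used by the data templates."""
    out = ir_constraint
    for pattern, repl in LIFT_RULES:
        out = re.sub(pattern, repl, out)
    return out

print(lift("row_remapping_t->NumElements() == num_rows"))
# -> len(row_remapping) == num_rows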
Consider the LARM (LoadAndRemapMatrix) operator as an example; the constraints generated by ConFL are illustrated in Figure <ref>.
[language=Python]
len(ckpt_path_t) == 1
len(row_remapping.shape()) == 1
len(row_remapping) == num_rows
len(col_remapping) == num_cols
Figure: Constraint information of the operator LARM.
The constraints above include both shape requirements on individual parameters and dependencies between parameters. For instance, row_remapping must be one-dimensional, and its length must equal the value of num_rows.
§.§ Fuzzing Template Generation
Based on operator information and operator constraints, ConFL generates operator fuzzing templates. These templates do not contain specific fuzzing data for the operator parameters but instead build a test skeleton. In the actual test process, ConFL selects corresponding test data according to the template. The templates are divided into control templates and data templates based on their functions.
Control Template. The control template primarily sets the operator's executable environment, parameter position, parameter type, and other information. By performing topological sorting according to the constraint tree, ConFL first generates a single-parameter template and then creates other parameter templates that depend on this parameter.
Furthermore, when generating control templates, we propose grouped testing for data multiplexing, because different Python interfaces may share the same C/C++ backend in TensorFlow. For example, both tensorflow.reshape and tensorflow.raw_ops.Reshape in Python correspond to the same Reshape kernel in C++. This stems from the registration mechanism in ML frameworks, which registers operators through macros such as REGISTER_OP in TensorFlow (PaddlePaddle and MindSpore provide analogous macros). With this registration information, ConFL establishes the correspondence between operators in different languages, enabling parameter data reuse. As previously mentioned, tensorflow.reshape and tensorflow.raw_ops.Reshape share the same parameters, such as tensor and shape.
Data Template. First, ConFL generates a data template and fills it with parameter information in the form of name-value pairs. ConFL replaces the placeholder with a symbol of the corresponding shape or type according to the explicit information extracted and generates specific values based on the symbol.
By saving the shape and type symbols that represent the data, we categorize the generated data to avoid creating too many duplicate parameters. Given the numerous computational steps in ML frameworks, ConFL selects values from a special-value set (e.g., boundary values, zero, big integers) when generating concrete values. This narrows the range of generated parameters and avoids having different data repeatedly execute the same path, while preserving vulnerability-detection capability. ConFL then verifies whether the parameters satisfy the constraints; if not, it takes targeted repair measures, making simple modifications to the shape or value while retaining the original data characteristics. This lightweight approach saves effort compared to regenerating from scratch. Since parameters are checked and repaired against both explicit and implicit constraints, the operator execution success rate improves significantly.
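The sketch below illustrates this generate-check-repair loop; the symbol names and the repair strategy are ours.
[language=Python]
import random

SPECIAL_INTS = [0, 1, -1, 2**31 - 1, 2**31, 2**63 - 1]   # boundary value set

def fill_template(template, constraints):
    """Instantiate symbols with special values, then repair parameters
    that violate inter-parameter constraints instead of regenerating."""
    params = {}
    for name, symbol in template.items():
        if symbol == "DI":                    # scalar integer placeholder
            params[name] = random.choice(SPECIAL_INTS)
        elif symbol == "[DI]":                # 1-D integer tensor placeholder
            params[name] = [random.choice(SPECIAL_INTS)]
    for check, repair in constraints:         # (predicate, targeted fix) pairs
        if not check(params):
            repair(params)
    return params

# e.g. enforcing len(col_remapping) == num_cols by re-tiling rather than
# regenerating (assumes a non-negative num_cols was drawn):
constraints = [
    (lambda p: len(p["col_remapping"]) == p["num_cols"],
     lambda p: p.update(col_remapping=p["col_remapping"][:1] * p["num_cols"])),
]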
Based on the constraints in Figure <ref>, ConFL generates a template containing "'col_remapping': [DI]*num_cols", where "DI" is a data-template symbol matching the col_remapping parameter type and denotes the use of integer values in concrete tests; the length of this parameter must equal num_cols. Applying this template yields the test data in Figure <ref>.
[language=Python]
para = {
    'ckpt_path': 'bundle_checkpoint',
    'old_tensor_name': 'some_scope/matrix',
    'row_remapping': [1],
    'col_remapping': [2147483649]*1073741824,
    'initializing_values': [],
    'num_rows': 1,
    'num_cols': 1073741824,
    'max_rows_in_memory': -1,
}

Figure: Parameters generated for the operator LARM based on the constraints.
§ EVALUATION
§.§ Implementation
The operator collection is implemented in about 1K lines of Python code, which parse the operator descriptions and analyze the source code. In the process of obtaining interfaces, we modified the Python interpreter, CPython, to monitor the function call chain and determine whether an interface calls C functions. We chose to modify CPython rather than analyze pybind11 because some interfaces use SWIG, and different function names can be routed to the same C interface.
In the constraint extraction part, we use 500 lines of Python code to extract environmental constraints and dependency constraints. To extract validation constraints and logical constraints, we use 1K lines of C++ code to implement path-insensitive taint analysis based on LLVM.
Additionally, we use 2K lines of Python code to implement operator test template generation and operator test input generation.
This section evaluates TensorFlow 2.8 using the method introduced in Section 3, primarily focusing on the following three aspects:
* How effective is ConFL in collecting operators?
* Are operator constraints helpful for parameter generation?
* Can ConFL find vulnerabilities in real-world applications?
The machine used for running the experiments is equipped with Intel Xeon E5-2630 2.20 GHz CPU, Tesla P4 GPU, 128GB RAM, Ubuntu 20.04 LTS, and Python3.8.
§.§ Effectiveness of Operator Collection
Unlike approaches that collect operator information from documents, ConFL collects operators by analyzing the code of the ML framework itself, so each operator can be directly called for fuzzing. When adapting to a new version, ConFL automatically extracts that version's operators without re-collecting public code segments or re-analyzing operator documents.
ConFL primarily tests the raw_ops module, which consists of 1,355 operators. Of these, 24 are deprecated or meaningless, leaving 1,331 valid operators in the raw_ops module for testing.
Of the remaining 1,331 operators, 65 depend on TPU hardware. Although ConFL is theoretically capable of testing these operators, hardware limitations prevented their inclusion in our experiment. Consequently, we selected the 1,266 non-TPU-dependent operators as test targets.
Additionally, other modules can be tested by ConFL, such as the IO module, where CVE-2020-26269 was found.
Figure <ref> displays the distribution of operator parameter counts, with 976 operators having 2 to 6 parameters. The most common count is 3 parameters, found in 232 operators. Four operators have over 20 parameters, and on average, each operator has 5 parameters.
Answer to RQ1: ConFL collects 1,355 operators in total, of which 1,331 are valid. After excluding operators that rely on special hardware, the remaining 1,266 operators serve as test targets; a common subset of 400 operators is additionally used when comparing against other fuzzers.
§.§ Effectiveness of Operator Constraints Extraction
We collect 6 environmental constraints through expert experience. Although the number of environmental constraints is small, they are very effective: compared to tests that do not apply them, more new code is executed and more vulnerabilities are found. ConFL extracts 23 dependency constraints after analyzing the operators' information; with their help, some operators can be executed successfully at all. Whether an operator executes successfully is directly related to whether its parameters pass validation checks; due to the complexity of operator types and counts, 1,519 validation constraints are extracted. Similarly, the introduction of logical constraints allows us to build a complete constraint tree that covers a sufficient amount of code.
We classify the validation constraints into two types: constraints on a single parameter and constraints among parameters. We describe a parameter from several perspectives: ndim represents the number of dimensions, shape refers to each dimension of a tensor, len is the number of elements, value describes the concrete value, and type indicates the parameter's type. As shown in Table <ref>, ndim-type constraints account for the largest proportion of single-parameter constraints; for example, we obtain the constraint "dimension.ndim == 0" to restrict the ndim of the parameter dimension. In Table <ref>, the rows and columns specify the attribute constraints among parameters; for instance, the entry "10" indicates that there are 10 constraints between shape and ndim, such as "input.ndim > block_shape.shape[0]".
§.§ Effectiveness of Constraint-guided operator input Generation
In this experiment, we set up two code-coverage comparisons: one against random generation (Atheris) and one against state-of-the-art (SOTA) fuzzers. Since the various SOTA fuzzers cannot cover all operators, we first compare against Atheris on all 1,266 operators, and then select 400 operators that all SOTA fuzzers can cover in common and conduct a second comparison on them.
Compared with random generation. We separately employ Atheris and ConFL to generate 10,000 test inputs for each operator and record the number of successful executions. We define a successful execution as one that triggers either a crash or a normal exit, while an unsuccessful execution is one that fails due to a parameter error, such as a Python exception. The test results show that Atheris achieves a total of 669,249 successful executions across all tested operators, while ConFL reaches 3,534,170, an increase of 428.08%, demonstrating that ConFL significantly improves the validity of the generated inputs.
Furthermore, we examined the relationship between the increase rate and the number of parameters. We organized the operators based on their parameter count and assigned an ID to each. Figure <ref> illustrates the increase rate of ConFL compared to Atheris. The average increase rate for all operators is 625.94%; operators with 5 parameters experience the highest growth rate at 1,417.94%. Although the increase rate declines as the number of parameters grows, ConFL still outperforms Atheris significantly. This suggests that when an operator has few parameters, it can be successfully executed using random data generation. However, as the number of parameters rises, the limitations of random generation become more evident, and the benefits of constraint-based generation grow more pronounced.
Concerning code coverage, Figure <ref> displays the results. We fuzzed 1,266 operators for 20 minutes. Through the code coverage analysis, we discovered that the coverage state stabilizes at 20 minutes, with Atheris covering only 4,929 lines of code. This suggests that the majority of inputs are invalid, causing stagnation in the validation checking. In contrast, using constraints, ConFL's code coverage not only increased rapidly in the first 5 minutes but also sustained steady growth during the subsequent testing. Within the limited time, ConFL increased the coverage by 228.83% compared to Atheris, demonstrating the efficiency of ConFL in generating valid inputs.
Compared with state-of-the-art fuzzers. We compare ConFL with state-of-the-art (SOTA) fuzzers, including DocTer, FreeFuzz, DeepRel, and IvySyn. Since some SOTA fuzzers cannot cover all 1,266 operators, for fairness we select 400 operators that all the SOTA fuzzers can commonly cover as the benchmark, using Atheris as the baseline. Regarding the time setting, each fuzzer tests every operator in the benchmark for 20 minutes, after which we record the total code coverage over all benchmark operators.
As depicted in Figure <ref>, the results show that ConFL consistently outperforms DocTer, FreeFuzz, DeepRel, and IvySyn on the code coverage metric, achieving significantly higher coverage than its counterparts. This indicates that ConFL is more effective at generating valid inputs and exploring a broader range of code paths.
Answer to RQ2: Constraints are helpful for generating valid inputs of operators, and largely improve the success rate of execution. Additionally, our constraint-based approach significantly increases code coverage of operators compared with state-of-the-art fuzzers.
§.§ Effectiveness of Vulnerability Detection
The vulnerability detection results of ConFL when applied to the TensorFlow framework are presented in Table <ref>. ConFL successfully identified a total of 84 vulnerabilities within the TensorFlow framework, all of which have been confirmed and assigned CVE numbers. A selection of representative vulnerabilities is detailed in the table, while a comprehensive list can be found in <cit.>.
Vulnerability Type Analysis. As depicted in Table <ref>, the 84 discovered vulnerabilities are classified according to their types, with the top five types being Out of Bound (OOB), Null Pointer Exception (NPE), Floating Point Exception (FPE), Integer Overflow (IOF), and Use After Free (UAF). Here, we provide a brief overview of each vulnerability type along with corresponding examples:
* Out of Bound (OOB): OOB vulnerabilities occur when an operation accesses memory outside its intended bounds, potentially leading to data corruption, crashes, or security breaches. One such example is CVE-2021-41226 in TensorFlow.
* Null Pointer Exception (NPE): NPE vulnerabilities arise when a program attempts to access or manipulate an object via a null pointer reference, potentially causing unexpected behavior, crashes, or security issues. A notable instance is CVE-2021-41209 in TensorFlow.
* Floating Point Exception (FPE): FPE vulnerabilities involve errors in floating-point operations, such as division by zero or overflow, which can result in crashes or incorrect calculations, impacting the system's reliability. An example is CVE-2022-21725 in TensorFlow.
* Integer Overflow (IOF): IOF vulnerabilities occur when an integer operation produces a value too large or too small to be represented by the integer type, potentially leading to data corruption, crashes, or other unintended consequences. An instance is CVE-2022-21733 in TensorFlow.
* Use After Free (UAF): UAF vulnerabilities happen when a program continues to use a memory object after it has been freed, potentially resulting in crashes, data corruption, or security exploits. An example is CVE-2021-37652 in TensorFlow.
In summary, the application of ConFL to the TensorFlow framework led to the identification of 84 vulnerabilities, spanning a range of types. By understanding and addressing these vulnerabilities, developers can work towards enhancing the security, stability, and reliability of the TensorFlow framework.
Causality Analysis. By analyzing 84 vulnerabilities detected by ConFL in TensorFlow, we have identified the causality of these vulnerabilities and classified them into three categories: shape, type, and value, as displayed in Table <ref>.
Regarding shape, a zero-dimensional vector may lead to NPE, OOB, and FPE vulnerabilities, such as CVE-2021-37672. Alternatively, a large value may cause OOB and NPE vulnerabilities, exemplified by CVE-2021-37655. In terms of type, an incorrect tensor type value can trigger Denial of Service (DoS) vulnerabilities, as seen in CVE-2020-26268. Concerning value, tensor data or parameter values of zero can result in FPE and OOB vulnerabilities, such as CVE-2022-21725; large integer values can lead to OOB, IOF, and Type Confusion (TC) vulnerabilities, as in the case of CVE-2022-21727; and negative values can cause OOB and IOF vulnerabilities, as demonstrated by CVE-2022-21733.
We find that current ML frameworks prioritize performance and functionality over security, lacking comprehensive user input validation, particularly for empty arrays and empty handlers. Additionally, the computational nature of machine learning algorithms results in frequent floating-point and integer overflow issues. Lastly, the interdependent relationships between ML framework operator parameters may generate valid parameters that still impact other parameters and cause computational problems.
Case Study. In the following, we discuss a vulnerability example to illustrate how ConFL can effectively generate valid inputs, enabling efficient detection of vulnerabilities in real-world ML frameworks.
ConFL identified an out-of-bound read vulnerability in the BTCBFS operator. The vulnerability Proof of Concept (PoC) shows that the split_type parameter is a string with only two valid values: inequality or equality. At the same time, there is a validation constraint between the stats_summary and logits_dimension parameters: a dimension of stats_summary is related to the value of logits_dimension. ConFL continually generates valid parameters based on the operator constraints to probe vulnerabilities deeper in the code. In this example, considering the shape, type, and value constraints, the range of each parameter is limited, and ConFL generates valid data for the remaining parameters using these constraints. This avoids wasting time and computational resources on shallow code.
[language=Python]
tensorflow.raw_ops.BoostedTreesCalculateBestFeatureSplit(
node_id_range=[0x400000,0x400001],
stats_summary=[
[[[2.0, 3.0]], [[3., 3.]], [[3., 3.]]],
[[[3., 4.]], [[5., 6.]], [[6., 6.]]]
],
l1=[0.0],
l2=[0.0],
tree_complexity=[1.0],
min_node_weight=[0.7],
logits_dimension = 1,
split_type = 'equality'
)
Figure: PoC for BTCBFS.
Finally, ConFL discovers that this vulnerability in the BTCBFS operator is triggered by a boundary value of the parameters, which leads to an out-of-bound access. As shown in the source code below, logits_dim takes the value of the input parameter logits_dimension, which may exceed the valid range of stats_summary; the pointer stats_mat will then point to an out-of-control address.
[language=C]
ConstMatrixMap stats_mat( stats_summary(node_id, 0, 0, 0), ...);
const Eigen::VectorXf total_grad =
stats_mat.leftCols(logits_dim).colwise().sum();
Figure: Source code that causes the vulnerability in BTCBFS.
Vulnerabilities in Other ML Frameworks. Despite being at an early prototype stage, we have extended ConFL to test other ML frameworks, including PyTorch and PaddlePaddle. To date, ConFL has discovered 7 vulnerabilities across these platforms: 4 in PaddlePaddle and 3 out-of-bound (OOB) vulnerabilities in PyTorch. These results demonstrate that ConFL adapts well to other ML frameworks.
Answer to RQ3: ConFL can extract the constraints between multiple parameters, and generate valid parameters, which is effective for discovering vulnerabilities of ML frameworks in the real world.
§ DISCUSSION
In this section, we present the limitations and possible solutions of improvement in future.
Adaptability to other ML frameworks.
The design of ConFL can be effortlessly adapted to other ML frameworks with minimal modifications. For constraint extraction, ML frameworks like PyTorch and PaddlePaddle employ macros for parameter validation in the source code, similar to TensorFlow. For instance, PyTorch defines operators in native_functions.yaml and Declarations.cwrap, and verifies parameter validity in the source code using TORCH_CHECK. These files can be parsed to extract constraints as well.
Since the reflection mechanism is compatible with all frameworks featuring a Python frontend, ConFL can automatically generate templates for such frameworks. Moreover, ConFL can produce parameters tailored to the specific characteristics of each ML framework, such as custom types. This adaptability allows ConFL to be a versatile solution for various machine learning frameworks.
Optimization of constraint solving. With the extracted constraints, efficient constraint solving techniques can be employed to generate valid test inputs more effectively. This can potentially lead to a higher coverage of the operator code and an increased likelihood of finding deep bugs.
Integration with other fuzzing techniques. The constraint extraction technique can be combined with other fuzzing techniques, such as grammar-based fuzzing or coverage-guided fuzzing, to achieve a more comprehensive and effective fuzz testing process.
Operator Optimization. ML frameworks also focus on the optimization such as operator fusion <cit.> in the practical computation process, which aims to reduce the occupation of memory and improve the efficiency. At present, ConFL tests operators separately and the fused operator should be considered in the future.
File mutation. Some operators require specific file formats as parameters, such as image or audio files. To support complex operators with composite types, we plan to add file-format mutation in future work.
§ RELATED WORK
§.§ Fuzzing System and Application Interfaces
Some previous work focuses on fuzzing various systems and interfaces, including cloud service APIs <cit.>, OS kernel interfaces <cit.>, and native library interfaces <cit.>. For example, NTFuzz <cit.> is a type-aware Windows kernel fuzzing framework that automatically infers system call types on Windows at scale. APICRAFT <cit.> utilizes static and dynamic information to gather the control and data dependencies of API functions, and employs a multi-objective genetic algorithm to combine the collected dependencies and build high-quality fuzz drivers.
Although there are similarities between system interfaces and machine learning APIs, previous fuzzing tools are not directly applicable to fuzzing machine learning APIs for two main reasons. First, machine learning APIs utilize domain-specific data types, such as tensors, which necessitate specialized fuzzing techniques. Second, machine learning APIs exhibit unique constraints and interdependencies between parameters, which general system interface fuzzing tools may not effectively handle.
§.§ Fuzzing ML Framework
In recent years, researchers have made major strides in fuzzing ML frameworks. Q. Xiao et al. <cit.> and X. Tan et al. <cit.> studied the security issues in the third-party dependency libraries of ML frameworks, but did not examine the security of the ML frameworks' own source code in depth.
Xie et al. <cit.> proposed DeepHunter, a general-purpose fuzz testing tool for deep learning frameworks, using scalable coverage criteria and a seed selection strategy. However, their random mutation at the operator level lacks constraint or verification, reducing the legitimacy of samples and automation efficiency. Luo et al. <cit.> proposed operator-level automated testing for deep learning frameworks using the Monte Carlo tree search algorithm and combining model-level and source-level mutation. Wang et al. <cit.> studied the effectiveness of unit test generation techniques for machine learning libraries, finding that most existing libraries lack high-quality unit test suites. The uncovered code is primarily due to insufficient valid parameters for tests, leading them to propose a future direction combining test generation and parameter analysis.
DocTer <cit.> analyzes API documentation to extract input constraints for machine learning API functions. While this approach can provide constraints for some functions, its effectiveness is limited by the completeness and accuracy of the documentation.
FreeFuzz <cit.> fuzzes DL libraries by mining open-source code/models, automatically running them with instrumentation, and using the traced dynamic information for fuzz testing. However, it lacks systematic testing procedures for operators.
DeepRel <cit.> extends FreeFuzz by leveraging function similarity to transfer inputs between test cases. It uses function signatures and documentation to generate valid inputs for some functions, but may be limited when documentation is lacking.
IvySyn <cit.> is a specialized tool for detecting vulnerabilities in DL kernel code. It identifies DL kernel implementations and performs mutation-based fuzzing with type-aware mutations. IvySyn uses developer test suites as initial test cases, sharing similar limitations with FreeFuzz and DeepRel.
Our approach not only focuses on achieving higher code coverage but also ensures that the generated test inputs are valid and conform to the constraints of the target ML frameworks. By automatically extracting input constraints from the source code of operators, ConFL can generate a more comprehensive and accurate set of test inputs. This, in turn, improves the efficiency and effectiveness of the fuzzing process in identifying vulnerabilities.
§ CONCLUSION
In this paper, we introduce ConFL, an innovative tool designed to generate valid operator parameters for uncovering hidden security vulnerabilities in ML frameworks. ConFL first analyzes the source code to collect operators, then extracts constraints from the operators' source code, and finally constructs operator test templates automatically and generates test inputs guided by the extracted constraints. Through our evaluation, ConFL demonstrates remarkable proficiency in generating valid parameters. Furthermore, our approach has successfully identified 84 vulnerabilities in TensorFlow and 7 in PyTorch and PaddlePaddle.
This work is supported by the National Key R&D Program of China with No.2020AAA0104300.
|
http://arxiv.org/abs/2307.03890v1 | 20230708034628 | Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots | ["Jie Yin", "Hao Yin", "Conghui Liang", "Zhengyou Zhang"] | cs.RO | ["cs.RO"] |
Ground-Challenge: A Multi-sensor SLAM Dataset Focusing on Corner Cases for Ground Robots
Jie Yin†, Hao Yin†, Conghui Liang*‡ and Zhengyou Zhang‡ (IEEE Fellow & ACM Fellow)
† Independent researchers. ‡ Tencent Robotics X Lab, Shenzhen, China.
* Corresponding author: Conghui Liang ([email protected])
August 12, 2023
High-quality datasets can speed up breakthroughs and reveal potential developing directions in SLAM research.
To support the research on corner cases of visual SLAM systems,
this paper presents Ground-Challenge: a challenging dataset comprising 36 trajectories with diverse corner cases such as aggressive motion, severe occlusion, changing illumination, few textures, pure rotation, motion blur, wheel suspension, etc. The dataset was
collected by a ground robot with multiple sensors including an RGB-D camera, an inertial measurement unit (IMU), a wheel odometer and a 3D LiDAR. All of these sensors were well-calibrated and synchronized, and their data were recorded simultaneously.
To evaluate the performance of cutting-edge SLAM systems, we tested them on our dataset and demonstrated that these systems are prone to drift and fail on specific sequences.
We will release the full dataset and relevant materials upon paper publication to benefit the research community. For more information, visit our project website at https://github.com/sjtuyinjie/Ground-Challenge.
Data Sets for SLAM, Data Sets for Robotic Vision
§ INTRODUCTION
Intelligent ground robots are widely used in industrial production and daily life, for tasks such as logistics, cleaning, warehousing, security, and food delivery. Navigation is the fundamental capability these robots need to execute such diverse tasks. To achieve reliable navigation, the visual SLAM (Simultaneous Localization and Mapping) problem has been researched for decades, and quite a few classical methods have been proposed <cit.>.
A recent developing trend in visual SLAM is low-cost multi-sensor fusion, which has been verified to be a practical approach <cit.>
to enhance robustness across diverse scenarios. Different sensors can complement each other, maximizing the perceptual awareness of environments. One of the best examples is that visual-inertial odometry (VIO) algorithms can significantly improve tracking stability and accuracy under aggressive motion and in textureless scenarios.
While VIO systems have performed well in most cases, <cit.> has proven that this does not apply to ground vehicles.
For generic movement patterns, a VIO system has only four unobservable directions (three for global translation and one for global yaw). However, ground vehicles are restricted to moving in a 2D plane, mostly along straight lines or circular arcs, so the IMU is not sufficiently excited.
Therefore, the VIO system on the ground robot will suffer from additional DoF unobservability, such as the scale. To address this issue, <cit.> extends VINS-Mono <cit.> to
incorporate low-frequency wheel-encoder data and keep the scale observable. Similarly, <cit.> proposes a RGB-D Encoder SLAM system for differential-drive robots. Most recently, <cit.> proposes an optimization-based visual-inertial-wheel tightly coupled odometry, which claims to work robustly in dark or overexposed conditions. Nonetheless, its performance has not been tested on any public dataset with ground truth trajectories.
We believe that progress in SLAM, like in the AI field, is highly data-driven <cit.>.
Although extensive public datasets are available for evaluating different SLAM algorithms, most of them are outdated and no longer challenge cutting-edge SLAM systems. In our opinion, datasets focusing on challenging cases reveal the defects and limitations of existing algorithms more efficiently. We notice that corner case detection in autonomous driving receives extensive attention from researchers <cit.> <cit.> because such cases can easily cause the navigation system to drift. Similarly, once the localization module of a robot fails, it might cause industrial accidents and even pose threats to human safety. Nonetheless, to our knowledge, there is currently little literature discussing the corner cases of robot navigation, which hinders the safety of real-world robot applications.
To fill this gap, we present a novel SLAM dataset for ground robots, which aims to challenge existing cutting-edge SLAM systems with corner cases and thus promotes the progress of the multi-sensor fusion SLAM algorithm.
The challenges of our datasets lie in two areas: specific movement patterns and sensor failures, which will be elaborated in subsequent sections. Some scenarios covered in our datasets are visualized in Figure <ref>. Our major contributions are summarized as follows:
* We collect a novel visual SLAM dataset for ground robots with a rich pool of sensors in diverse environments both indoors and outdoors. Particularly, the dataset covers a series of challenging sequences including sensor failures and specific movement patterns.
* State-of-the-art SLAM algorithms of different settings are tested on our benchmark. And the results indicate these systems are not robust enough for situations such as sensor failures.
* To facilitate the research on corner cases of robot navigation, we will release the full dataset with ground truth trajectories and the configuration file of each tested algorithm upon paper publication.
§ RELATED WORKS
§.§ SLAM Datasets for Ground Robots
Most existing SLAM datasets are collected by UAVs <cit.> or cars <cit.>, and only a few target ground robots. For instance, Rawseeds <cit.> and UTIAS <cit.> provide RGB images only, making them unsuitable for evaluating multi-sensor fusion systems. The Rosario dataset <cit.> is rich in sensor variety, yet is specifically designed for agricultural environments. M2DGR <cit.> captures diverse indoor and outdoor scenarios, including challenging scenes like elevators and darkrooms, but does not contain wheel odometer data, which is essential for multi-sensor fusion SLAM given its low cost and high precision. OpenLORIS <cit.> offers rich sensor types in visually challenging scenarios such as highly dynamic markets and poorly exposed corridors, but wheel challenges and motion challenges are not included.
§.§ Corner Cases
Corner cases, i.e., extreme and non-predictable situations, are a popular research topic in autonomous driving <cit.>. Although infrequent, these cases can potentially threaten the security and reliability of autonomous navigation systems. Corner cases exist in robot navigation tasks as well. To address such challenging scenarios, researchers have proposed various methods, such as RGB-D SLAM <cit.> and DS-SLAM <cit.>, to handle dynamic environments, and GVINS <cit.> to deal with degenerate cases including low-speed movement, less than four visible satellites, and GNSS-denial environments. Additionally, <cit.> proves that their method is robust in aggressive motions and a visual texture-less white wall. Nonetheless, we note that there are still plenty of corner cases that tend to be overlooked, such as wheel slippage, motion blur, and complete visual occlusion. There is a lack of SLAM datasets specifically designed for studying these corner cases, which is a gap yet to be filled. To sum up, it is urgent and critical to collect a novel SLAM dataset with rich sensor types, precise calibration, and sufficient challenge to support studies on corner cases, particularly sensor failures.
§ THE GROUND-CHALLENGE DATASET
§.§ Sensor setup
We construct a ground robot for data collection and the sensor locations on the robot are shown in Figure <ref>. The chassis is equipped with a front-view VI-Sensor (Visual-Inertial Sensor) that captures RGB and depth images along with 6-axis IMU's measurements. Driven by two driving wheels providing odometer information and four assisting wheels, the robot also has a high-precision 9-axis Xsens IMU and a 16-beam 3D LiDAR.
The ground truth trajectories and point clouds are generated from the Velodyne LiDAR and the Xsens IMU using Fast-LIO2 <cit.>, a state-of-the-art LiDAR-based SLAM system. To evaluate its accuracy, we compared the high-precision trajectories produced by a motion capture system with 16 infrared cameras to those generated by Fast-LIO2. The experiment revealed that Fast-LIO2 reaches a positioning accuracy of 3 cm in a small-scale (15 m x 15 m) indoor room. Additionally, as reported in <cit.>, Fast-LIO2 achieves less than 0.1 m end-to-end error on an outdoor trajectory spanning 1,000 meters. Since it is difficult for vision-based SLAM algorithms to reach similar accuracy in challenging scenarios, we use the result of Fast-LIO2 as the pseudo-ground-truth trajectory.
§.§ Synchronization and Calibration
We capture all the data using the ROSbag tool in the Robot Operating System (ROS). The RGB camera and 6-axis IMU embedded in the Realsense D435I are hard-synchronized, while the depth images are pixel-by-pixel aligned to the RGB images. The 3D LiDAR and 9-axis IMU are software-synchronized by triggering data capture at the same instance. To calculate the camera intrinsics of pinhole cameras, we use the MATLAB Camera Calibration Toolbox. To calibrate the internal parameters of the IMU, we use the toolbox from <cit.>, which includes the white noise and random walk of both the gyroscopic and accelerometer measurements. We choose the IMU frame as the reference to calibrate the extrinsic parameters (relative poses) between sensors, and employ the toolbox from <cit.> for calibrating the extrinsic parameters between cameras and IMU.
§.§ Data collection
We provide an overview of our dataset in Table <ref>. The recording process is as follows: we first recorded the Office and Room sequences, in which the robot moves slowly in a well-lit and textured office or room, to test the performance of different algorithms in normal situations. Subsequently, we designed a series of corner-case experiments along three axes: visual challenges, wheel odometer challenges, and particular movement patterns, presented as follows:
§.§.§ Visual Challenge
In our experiments, we manipulate the robot to move in a room with poor illumination (Darkroom sequences), back and forth in front of walls lacking texture (Wall sequences), and through scenarios of varying degrees of occlusion (Occlusion sequences). Figure <ref> (a) shows sequences Occlusion1∼2, which involves a person walking in front of the robot and causing intermittent partial occlusion. Figure <ref> (b) displays sequence Occlusion3, in which the camera is covered with the palm repeatedly. In sequence Occlusion4 (Figure <ref> (c)), a piece of black tape is attached to the camera's lens to completely block its view, disabling feature extraction and matching for visual SLAM. Furthermore, Motionblur sequences are generated by rapidly translating and rotating the robot, creating motion blur for cameras (Figure <ref> (d)).
§.§.§ Wheel Odometer Challenge
The Hall and Loop sequences are collected in a hall with smooth ground and a heavily carpeted aisle loop, respectively, where the wheels slip significantly. Moreover, we record Roughroad sequences to test the performance of the localization algorithm on rough roads.
§.§.§ Particular Moving Patterns
In the sequences Corridor1 and Corridor2, the robot moves forward in a zigzag and in a straight line, respectively. On the zigzag route, motion blur and the small overlap between adjacent image frames lead to errors in feature matching.
In the Rotation sequence, the robot only rotates and hardly translates, which makes it difficult for vision-based algorithms to estimate the depth of feature points by triangulation. In the Static sequences, the robot stands still on a bracket while we command its wheels to move in different directions through the handle; this experiment tests whether SLAM systems coupled with the wheel odometer still work when the robot's wheels are suspended.
Finally, we operate the robot from one flat surface to another, passing over a slope. In this experiment, since the wheel odometer only provides two-dimensional velocity observations, it can mislead the estimation of three-dimensional trajectories.
§ EVALUATION
The features of all sequences are described on our project website. We evaluated several SLAM systems with different sensor configurations on twelve representative sequences from our dataset. The tested algorithms are ORB-SLAM3 <cit.>, an optimization-based SLAM system; VINS-Mono <cit.>, one of the state-of-the-art monocular visual-inertial systems; VINS-RGBD <cit.>, which fuses RGB-D and IMU information on the VINS-Mono <cit.> framework; and VIW-Fusion <cit.>, a tightly-coupled visual-inertial-wheel system featuring online extrinsic calibration and wheel-aided initialization. We also use an EKF algorithm <cit.> to fuse the IMU and wheel odometer.
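For reference, a minimal planar EKF of the kind used for this wheel-IMU fusion can be sketched as follows; the state layout and noise values are our assumptions, not those of the cited implementation.
[language=Python]
import numpy as np

x = np.zeros(3)                      # state [px, py, yaw]
P = np.eye(3) * 1e-3
Q = np.diag([1e-4, 1e-4, 1e-3])      # process noise (slippage inflates this)
R = np.array([[1e-4]])               # IMU yaw measurement noise

def predict(v, w, dt):
    """Propagate the pose with wheel linear/angular velocity (v, w)."""
    global x, P
    th = x[2]
    x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1, 0, -v * np.sin(th) * dt],
                  [0, 1,  v * np.cos(th) * dt],
                  [0, 0, 1]])
    P = F @ P @ F.T + Q

def update_yaw(yaw_imu):
    """Correct the heading with the IMU yaw estimate."""
    global x, P
    H = np.array([[0.0, 0.0, 1.0]])
    y = np.array([yaw_imu - x[2]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x += (K @ y).ravel()
    P = (np.eye(3) - K @ H) @ P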
The EVO tool <cit.> was used to align all the estimated trajectories with ground truth trajectories to obtain the ATE RMSE <cit.>.
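Concretely, the ATE RMSE after alignment amounts to the following computation: a sketch of SE(3) Umeyama alignment on N x 3 position arrays, analogous to what EVO performs.
[language=Python]
import numpy as np

def ate_rmse(est, gt):
    """Rigidly align estimated positions (N x 3) to ground truth with the
    Umeyama/Horn closed form, then take the RMSE of the residuals."""
    mu_e, mu_g = est.mean(0), gt.mean(0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(G.T @ E / len(est))   # cross-covariance SVD
    S = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:
        S[2, 2] = -1                               # keep a proper rotation
    Rmat = U @ S @ Vt
    t = mu_g - Rmat @ mu_e
    aligned = est @ Rmat.T + t
    return float(np.sqrt(((aligned - gt) ** 2).sum(1).mean()))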
The quantitative results are shown in Table <ref>, with the estimated 2D trajectories plotted in Figure <ref>. Since most of the selected sequences are highly challenging (some even contain sharp turns), ORB-SLAM3 (both the monocular-inertial and RGB-D-inertial versions) performed poorly on most of our test sequences, with frequent tracking failures (fewer than 50% of frames successfully tracked), initialization failures, or scale drift.
In contrast, SLAM algorithms with multi-sensor fusion (like VIW-Fusion <cit.>) achieved better localization results but failed in some specific scenarios as well. We discuss the experiment results in detail as follows:
Normal Situation
The ATE RMSE results on Sequence Office3 indicate that existing localization methods can perform well when the motion mode matches the assumptions of these algorithms and all the sensors work well.
Vision Challenge
In Sequence Darkroom2 and Motionblur3, VINS-Mono <cit.> and VINS-RGBD <cit.> drift considerably due to visual failures, while wheel odometer-based algorithms work more robustly in this case.
In Sequence Occlusion4, all the vision-based methods including VIW-Fusion <cit.> fail to initialize because of poor feature extraction. This finding indicates that VIW-Fusion <cit.> has not been adequately designed to handle adverse conditions. A more prudent strategy may be to combine the wheel odometer and IMU to output a trajectory when a visual sensor failure is detected.
Wheel Odometer Challenge
In the sequences Roughroad3 and Slope1, vision-based systems perform worse than wheel odometer-based algorithms due to inaccurate scale estimation under aggressive motion. In Sequence Hall1, VINS-Mono <cit.> and VINS-RGBD <cit.> drift significantly due to ground reflection and faraway feature points. Here, VIW-Fusion <cit.> maintains satisfactory positioning performance even with slight wheel slippage, demonstrating the advantages and necessity of multi-sensor fusion in complex scenarios. However, when the wheels slip more severely in Sequence Loop2, the significant deviation caused by the wheel odometer increases the localization error of the estimated trajectories. This can be attributed to two main reasons: current algorithms lack the ability to detect wheel slippage, and the angular velocity provided by the wheel odometer is not accurate, leading to long-term divergence of the estimated trajectory. To reduce the accumulation of errors, we suggest using the IMU's angular velocity measurements instead of the wheel odometer's, as sketched below.
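A minimal planar dead-reckoning step illustrating this suggestion might look as follows. This is our own sketch rather than code from any of the tested systems; the variable names and the midpoint integration scheme are illustrative assumptions.

```python
import numpy as np

def dead_reckon_step(x, y, yaw, v_wheel, w_gyro, dt):
    """One planar dead-reckoning update.

    v_wheel: linear speed from the wheel odometer (m/s)
    w_gyro:  yaw rate from the IMU gyroscope (rad/s), used in place
             of the wheel odometer's less accurate angular velocity
    """
    # Midpoint heading reduces discretization error on curved paths.
    yaw_mid = yaw + 0.5 * w_gyro * dt
    x += v_wheel * np.cos(yaw_mid) * dt
    y += v_wheel * np.sin(yaw_mid) * dt
    return x, y, yaw + w_gyro * dt
```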
Particular Movement Patterns
In Sequence Corridor1, the zigzag movement of the robot not only causes feature extraction to fail but also leads to severe wheel slippage. Therefore, none of the tested algorithms can accurately estimate the trajectory. In Sequence Rotation1, pure rotation causes severe errors in the depth estimated by VINS-Mono's triangulation, while the remaining tested systems perform well thanks to measurements from other sensors. Finally, in Sequence Static1, VIO systems cannot be initialized successfully due to the lack of IMU excitation. Since the wheels keep moving after suspension, the wheel odometer-based methods mistakenly infer that the robot is in motion.
In summary, VINS-Mono <cit.> is most likely to generate catastrophic localization results in corner cases, and VINS-RGBD <cit.> can also inevitably fail when severe camera failures occur.
We have noticed that the wheel odometer alone can achieve good results in most situations, except under severe wheel slippage. Integrating the IMU and the wheel odometer through the EKF <cit.> can achieve higher accuracy than the raw odometer. Nonetheless, the trajectory of the EKF can shake violently in the initialization phase due to inaccuracy in the initial covariance estimate (this part was manually eliminated in our experiment). VIW-Fusion <cit.> achieves satisfactory accuracy and robustness on most sequences, but its initialization under visual failure needs improvement. Furthermore, it lacks consideration for wheel slippage, and its dead reckoning model diverges over long trajectories due to inaccurate angular velocity estimates.
The experiments conducted demonstrate the validity and value of our dataset as a benchmark for existing SLAM systems. The results further suggest that there is still much room for improvement in current cutting-edge multi-sensor fusion algorithms for real-world applications. Sensor failures, such as complete occlusion and wheel suspension, can be fatal for single-sensor-based methods; however, multi-sensor fusion systems should be designed to be more robust in these cases. For instance, we posit that a reliable visual-IMU-wheel system should be able to explicitly identify scenarios where visual observations are inaccurate and respond accordingly (e.g., disable visual information and rely only on the wheel odometer and IMU). Nevertheless, to our knowledge, corner case identification and troubleshooting have been scarcely addressed in prior work. Therefore, we provide this dataset to support research in this area.
§ CONCLUSION
We present Ground-Challenge, a novel ground robot dataset to encourage breakthroughs in multi-sensor fusion SLAM algorithms. Specifically, we have crafted a series of corner case experiments, including sensor failures in diverse environments, to challenge current cutting-edge SLAM systems. We have tested these systems on our dataset and analyzed their limitations in various scenarios, thus providing potential developing directions for SLAM. We are committed to continually updating our benchmark dataset. Specifically, we will mount 2D and 3D LiDAR on the robot, design experiments to invoke corner cases, and utilize higher-precision equipment such as motion capture systems to ensure accurate ground truth for LiDAR SLAM in our future work.
Acknowledgement
We thank Tencent Robotics X Lab for its support of this work.
|
http://arxiv.org/abs/2307.04808v1 | 20230710180113 | Autonomous feedback stabilization of a cavity-coupled spin oscillator | [
"Julian Wolf",
"Olive H. Eilbott",
"Josh A. Isaacs",
"Kevin P. Mours",
"Dan M. Stamper-Kurn"
] | physics.atom-ph | [
"physics.atom-ph",
"quant-ph"
] |
[email protected]
[Present address: ]Eikon Therapeutics, Hayward, CA 94545, USA
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
[Present address: ]Max-Planck-Institut für Quantenoptik, Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), 80799 Munich, Germany
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
Technische Universität Kaiserslautern, 67663 Kaiserslautern, Germany
Department of Physics, University of California, Berkeley, California 94720, USA
Challenge Institute for Quantum Computation, University of California, Berkeley, California 94720, USA
Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
We report out-of-equilibrium stabilization of the collective spin of an atomic ensemble through autonomous feedback by an optical cavity.
For a magnetic field applied at an angle to the cavity axis, dispersive coupling to the cavity provides sensitivity to a combination of the longitudinal and transverse spin.
Coherent backaction from this measurement, conditioned by the optical cavity susceptibility, stabilizes the collective spin state at an arbitrary energy.
The set point tracking and closed-loop gain spectrum of the feedback system are characterized and found to agree closely with analytic predictions.
Autonomous feedback stabilization of a cavity-coupled spin oscillator
Dan M. Stamper-Kurn
August 12, 2023
=====================================================================
As in the case of classical systems, the state and evolution of quantum systems can be tailored by feedback control <cit.>.
At a scientific level, the development of a quantum control theory, one that integrates entanglement and non-classical effects of dissipation and measurement, opens a new line of inquiry into non-equilibrium and open quantum systems.
At an applied level, feedback control allows quantum devices to operate robustly, mitigating errors in system preparation and calibration as well as decoherence.
Feedback control underpins important tasks such as error correction in quantum computation <cit.> and sensing <cit.>, entanglement purification <cit.>, and adaptive measurement <cit.>.
Quantum feedback control can be divided broadly into the two categories of measurement-based and autonomous feedback.
In the measurement-based approach, properties of a quantum system are read out on a classical sensor, the measurement record of which is used by an extrinsic classical control device to alter the ensuing coherence, dissipation, and measurement operations on the system <cit.>.
The feedback system's design must account for noise and backaction that are intrinsic to quantum measurement.
By comparison, in autonomous (or coherent) quantum feedback the corrective response that steers a quantum subsystem is built into the quantum system itself.
Control is achieved by structuring the drive and dissipation of an open quantum system so that entropy is reliably extracted as the quantum system is steered to the desired final state.
Examples of such schemes include autonomous error correction in bosonic code spaces <cit.>, autonomous generation of entanglement between quantum bits <cit.>, quantum state preparation <cit.>, optical noise cancellation <cit.>, and generating spin squeezing of atoms in a driven optical resonator <cit.>.
In this work, we develop a coherent feedback scheme to stabilize the energy of an ensemble of quantum spins.
Our scheme employs optical backaction in a driven cavity to realize closed-loop autonomous feedback.
Under negative-feedback conditions, we observe that cavity spin optodynamics <cit.> deterministically steer the collective spin toward a steady-state energy that is set by the frequency of the driving optical field.
By examining both the light that drives the system and the atomic spins that respond to this drive, we quantify the tuning sensitivity as well as the closed-loop gain spectrum of the autonomous feedback system and find close agreement with a theoretical model.
The feedback system comprises the collective spin of an ultracold atomic gas interacting with an optical cavity mode.
In particular, an ensemble of ≈ 1400 non-degenerate atoms, cooled to around 3 μK, is trapped predominantly in a single antinode of a standing-wave optical dipole trap (ODT, wavelength 842 nm) resonant with a TEM00 mode of an optical Fabry–Pérot cavity <cit.>.
The atoms are initially prepared in the |f = 2,m_f = 2⟩ electronic ground state.
The atomic ensemble interacts strongly with a second TEM00 cavity mode (the “pump” mode) whose frequency ω_p is detuned slightly (Δ_ca ≡ ω_p - ω_a = 2 π×-35 GHz) from the atomic transition (frequency ω_a, wavelength 780 nm).
The half-linewidth of the cavity at this wavelength is κ / 2 π = 1.82 MHz.
We minimize the effects of the spatial dependence of the atom–cavity coupling by trapping atoms at a location where the trapping field and pump field antinodes coincide (schematica) <cit.>.
The symmetric coupling of the atoms to the cavity field allows the ensemble to be addressed in terms of a total (dimensionless) spin F = 2N ≈ 2800 and a mean dispersive cavity–atom coupling Ω̄ = g_0^2 / Δ_ca, where g_0 = 2 π×13 MHz is the vacuum Rabi coupling of a single atom to the pump mode, averaged over the atom's motion in the ODT <cit.>.
For a cavity pumped with circularly polarized (right-handed, or σ_- relative to the cavity axis) light, the dynamics of the system are governed by the Hamiltonian <cit.> (Appendix A)
Ĥ =
- ħΔ_pcĉ^†ĉ
+ ħω_LF̂_z
+ ħΩ̄[ α_0 N̂ - α_1 F̂_k ] ĉ^†ĉ,
written in a frame rotating at the pump frequency ω_p.
Here, Δ_pc ≡ ω_p - ω_c is the pump–cavity detuning, ĉ is the cavity pump mode annihilation operator, and ω_L = g_F μ_B |B⃗| / ħ is the Larmor frequency (where g_F is the Landé g-factor, |B⃗| the applied magnetic field strength, and μ_B is the Bohr magneton).
The constants α_0 = 2/3 and α_1 = 1/6 describe the scalar and vector interactions, respectively, between the atoms and the cavity pump field (Appendix A).
We have treated atom–cavity interactions as purely dispersive, accounting for |Δ_ca| being large compared to the atomic half-linewidth (γ = 2 π×3 MHz) and to the collective atom–cavity coupling strength √(N) g_0.
The collective atom–cavity interaction is the sum of two terms.
A scalar (spin-independent) dispersive atom–light interaction shifts the cavity resonance frequency proportional to the atom number N.
With N being constant during the few-millisecond duration of the spin feedback experiments, it is useful to absorb this static frequency shift into an effective constant pump–cavity detuning Δ̃ ≡ Δ_pc - α_0 Ω̄ N.
In addition, a vector (spin-dependent) atom–cavity interaction shifts the resonance frequency of the σ_- cavity mode by an amount proportional to the projection F̂_k of the collective spin onto the cavity axis (Appendix A).
We now outline how this quantum system autonomously includes the essential elements of a feedback control system.
In such a control system, a control variable, which represents the state of the plant (the subsystem to be controlled), is measured by a sensor.
A comparator generates an error signal as the difference of the sensor output and an externally determined set point.
A controller conditions the error signal and acts on the plant.
Under proper negative-feedback conditions, the control variable is stabilized unconditionally.
Identifying F̂_k = F̂_x sinθ_B + F̂_z cosθ_B, with θ_B being the angle between the applied magnetic field and the cavity axis k̂ (schematica), we observe that the cavity is sensitive both to the longitudinal spin F̂_z (equivalently, the bare spin energy) and the transverse spin F̂_x.
The longitudinal spin plays the role of the control variable, and the cavity shift proportional to F̂_z acts as a coherent sensor.
Accounting for this shift, the net detuning of the pump light from the cavity, averaged over the fast effects of Larmor precession, is given by Δ̄ ≡ Δ̃ + Ω_v F̂_z cosθ_B, with Ω_v ≡ α_1 Ω̄.
The system Hamiltonian hamiltonian-general can now be rewritten as
Ĥ =
- ħΔ̄ĉ^†ĉ
+ ħω_LF̂_z
- ħΩ_v F̂_xĉ^†ĉ sinθ_B.
The net detuning represents the control system error signal, proportional to the difference between the instantaneous value of F̂_z and the externally determined longitudinal spin set point
F_z^sp = -Δ̃/( Ω_v cosθ_B ).
The final term in hamiltonian-feedback completes the autonomous control system, serving as the feedback actuator.
As described in Refs. <cit.>, the Larmor precessing transverse spin modulates the cavity field intensity through the spin-dependent dispersive interaction.
In turn, this modulation, conditioned by cavity dynamics, acts resonantly on the precessing spin and alters its energy.
As in the case of cavity optomechanics, the resulting energy dynamics of the spin ensemble can be described in terms of cavity-induced sideband asymmetry.
Modulation of the cavity resonance by the precessing spin shifts optical power from the pump light into first-order frequency sidebands with optical frequencies ω_p ± ω_L.
While a free-space modulator would generate sidebands with equal power, here, the cavity spectrum induces a sideband asymmetry (schematicb).
For a pump blue-detuned from cavity resonance (Δ̄ > 0), the cavity induces stronger emission on the Stokes (red) sideband, reducing the net energy of the optical pump and, in turn, increasing the energy of the spin ensemble.
Similarly, a pump red-detuned from cavity resonance (Δ̄ < 0) reduces the energy of the spin ensemble.
With the correct sign of cosθ_B in Fz-final, the response of the spin ensemble in either case brings Δ̄ closer to zero.
The system arrives at a stable steady state, with ⟨F̂_z⟩ = F_z^sp, that is determined by the pump frequency (through Δ̃) and that, notably, is independent of the initial state of the collective spin and insensitive to many perturbations.
We first confirm experimentally that the spin ensemble is autonomously stabilized to a state determined by the external set point.
To this end, the collective spin is initiated to ⟨F̂_z(t = 0)⟩ = 0 using a coherent rf π/2-pulse at drive frequency ω_L, such that Δ̄(t = 0) = Δ̃.
The cavity is then pumped with light at a constant Δ̃ and allowed to evolve.
The light emitted by the cavity is measured on a balanced heterodyne detector <cit.> (total detection efficiency ε = 2.2%, schematica), allowing the power in the Stokes and anti-Stokes sidebands to be detected as independent time traces (dc-pullinga).
The difference in the power of the two sidebands directly measures the instantaneous energy transfer from the pump light to the collective spin.
The cumulative sum of this difference measures the total energy δ E(t) added to the collective spin, leading up to time t.
As shown in dc-pullinga and b, an initial Δ̃ < 0 leads to an enhancement of the anti-Stokes sideband and a net energy transfer δ E < 0, driving the spin to a low energy state, while Δ̃ > 0 has the opposite effect.
The spin state achieved after long evolution times under autonomous feedback, at a given Δ̃, is shown in dc-pullingc.
Here, we measure the longitudinal spin by terminating the feedback, reorienting the magnetic field along the cavity axis, and measuring the spin-dependent cavity shift (Appendix B).
For each θ_B, the measured response shows a tuning range, centered about Δ̃ = 0, within which the steady-state spin energy varies linearly with Δ̃.
The sideband-based energy transfer measurements show the same trend as the spin measurements, through the relation ħω_L ⟨F̂_z⟩(t) = δ E(t), but are found to be less precise.
By fitting the response curves to an analytical model (discussed below and in Appendix C), we determine the linear sensitivity of the steady-state spin to Δ̃ near Δ̃ = 0.
This linear sensitivity (dc-pullingd) matches well to the prediction of the set point equation (Fz-final) for a range of field angles.
Outside the linear tuning range |Δ̃| > |2N Ω_v cosθ_B|, one would expect the feedback system to rail, driving the spin ensemble to one of its extremal energy states.
Such a saturated response is observed for θ_B ≳ 55°.
Here, the cavity field modulation amplitude, proportional to sinθ_B, is large, driving the spin quickly to its steady state.
In contrast, for shallower angles (θ_B ≲ 55°), the collective spin undergoes dephasing during feedback, reducing its total magnitude to || F⃗ || < 2N before the system can reach its steady state.
Next, we investigate the dynamical response of the autonomous feedback system.
Considering the Hamiltonian of hamiltonian-feedback and adding terms accounting for pumping into and (non-Hermitian) leakage out of the cavity mode, the cavity field evolves according to
d/dt ĉ
= i( Δ̄ + Ω_v F̂_x sinθ_B ) ĉ
- κĉ
+ κη,
where η is the coherent-state amplitude of the field pumping the cavity.
The energy of the collective spin, meanwhile, evolves according to
d/dt F̂_z =
-Ω_v F̂_yĉ^†ĉ sinθ_B.
For ω_L ≫ Ω_v n̄, as in our experiment, the optodynamical Larmor frequency shift (analogue of the optomechanical spring shift) <cit.> is small and the transverse spin can be approximated as F̂_y = F_⊥ sinω_L t; in practice, this relies on terms with nontrivial commutation relations only entering in at a higher order than is being considered, and amounts to treating F̂_y as equal to its expectation value.
We expect a cavity field comprising a carrier at frequency ω_p and sidebands at frequencies ω_p ± ω_L: ĉ = ĉ_0 + ĉ_+ e^-iω_L t + ĉ_- e^+iω_L t.
In the limit of small modulation depth (Ω_v / 2 κ) F_⊥ |sinθ_B| ≪ 1, the amplitudes and phases of the sidebands ĉ_± can be calculated directly from cavity-field.
Inserting this solution for the cavity field into F-pulling, we find that the cavity resonance frequency is pulled toward its set point at a damping rate given as
β ≡ (1/Δ̄) dΔ̄/dt
= -2 Ω_v^3 F_⊥^2 n̄ ω_L sin^2θ_B cosθ_B/κ^3.
Here, n̄ ≡ n̄_max κ^2 / (κ^2 + Δ̄^2) with n̄_max = |η|^2; this can be measured directly from the spectrum of the cavity output.
This simple model allows for straightforward simulation of how the system will act under a variety of conditions.
We probe the dynamics of our feedback system in two experiments.
First, we characterize the system's closed-loop transfer function by pumping the cavity with a time-varying tone Δ̃(t) = Δ̃_m sinω t and measuring the response δ E(t) (ac-responsea).
At each modulation frequency ω, the closed-loop gain is calculated as the ratio between the response and the perturbation:
G[ω] =
2 Ω_v cosθ_B/( ħω_L Δ̃_m T ) ∫_0^T dt δ E(t) exp(-iω t),
where T = 2 π s /ω, for integer s (ac-responseb, black circles).
For a pure integrator system such as ours with damping rate β, we expect a closed-loop gain of
G[ω] =
(β / iω)/( 1 + β / iω ),
which should describe the system well for ω ≪ κ, 2N Ω_v cosθ_B (ac-responseb, gray line).
Our measurements match this expectation qualitatively, but the data quality is limited by a signal-to-noise ratio of approximately 1 (dominated by shot noise on the detection of the sideband photons, which is exacerbated by the low ε) as well as by saturation at the large set point modulation depth used for this measurement.
Second, and more quantitatively, we characterize the impulse response function of the feedback system.
Here, we initialize the collective spin near ⟨f̂_z⟩ = +1, and then suddenly impose feedback with a set point of F_z^sp/N = -1 (ac-responsec).
Time-resolved direct spin measurements track the system evolution toward the set point (Appendix B).
For regions over which β is approximately constant (namely, | ⟨f̂_z⟩ | ≤ 1), damping-rate states that ⟨F̂_z⟩ should approach F_z^sp exponentially.
This allows the damping rate of the system to be directly measured, giving a value of β = 450 ± 60 s^-1 (ac-responsec).
For the same experimental parameters (θ_B = 60°, ω_L = 2 π×300 kHz, n̄ = 2.4, F_⊥ = 1100), damping-rate predicts β = 1600 s^-1.
This disagreement is due, in part, to the large modulation depth used for this experiment: here, (Ω_v / 2 κ) F_⊥ sinθ_B = 0.4, which warrants the inclusion of higher-order terms.
Accounting for these corrections reduces the expected gain to β = 790 s^-1 (Appendix C).
The analytical model still does not account for the dephasing of the spin ensemble.
Constructing an accurate model for dephasing in our system is not straightforward, but any form of dephasing will have the effect of decreasing F_⊥, and thus β, which may explain the remaining discrepancy.
In this work, we have shown that autonomous feedback generated by optical backaction of a driven cavity onto a spin ensemble stabilizes the ensemble energy at an energy determined by the cavity pump frequency.
The optical cavity emission provides a real-time record of the feedback dynamics.
In future work, information from this real-time optical signal may also be used to enhance the feedback stabilization through additional measurement-based feedback <cit.>.
Our system can equivalently be described as autonomous feedback stabilization of the optical cavity's resonance frequency.
From this viewpoint, the control variable is Δ̄.
The spin ensemble now plays the part of the controller by which the cavity is autonomously tuned to be in resonance with the light with which it is driven.
Our feedback setup stabilizes the spin ensemble to a specific value of the longitudinal spin, but does not control the phase at which this spin undergoes Larmor precession because of the time translation symmetry of our scheme.
In future work, it will be interesting to investigate methods for fuller control of the quantum spin state, e.g., applying phase coherent modulation at the Larmor frequency, either to the optical pump field or to an applied magnetic field, so as to stabilize the Larmor precession phase.
Another target for future investigation is the fluctuation of the spin ensemble under steady-state feedback.
In steady state, the ensemble should respond to the quantum noise of the cavity field, generating fluctuations in the longitudinal spin as well as the Larmor precession phase.
At the same time, coherent feedback suppresses longitudinal spin fluctuations.
The balance between quantum-optical fluctuations and coherent dissipation, achieved in the steady state and away from thermal equilibrium, may be revealed in the spectrum of the cavity output.
However, in our current setup, technical noise on ω_L, the pump light spectrum, and optical detectors obscures this quantum noise signature.
We acknowledge support from the National Science Foundation Quantum Leap Challenge Institutes program (Grant No. OMA-2016245), from the National Science Foundation (Grant No. PHY-1707756), from the Air Force Office of Scientific Research (Grant No. FA9550-19-1-0328), and from Army Research Office through the Multidisciplinary University Research Initiative program (Grant No. W911NF-20-1-0136).
The contributions of J. A. I. are funded by the Heising-Simons Foundation (Grant No. 2020-2479).
The contributions of O. H. E. are supported by the National Science Foundation Graduate Research Fellowship Program (Grant No. DGE-175281).
§ A. DERIVATION OF THE AUTONOMOUS SPIN STABILIZATION HAMILTONIAN
The Hamiltonian for the autonomous spin stabilization system can be written, generically, as a sum of cavity, spin, and interaction terms: Ĥ = Ĥ_cav + Ĥ_spin + Ĥ_int.
In the lab frame, the cavity Hamiltonian is simply given by Ĥ_cav^lab = ħω_cĉ^†ĉ.
We find it helpful to move to a frame rotating at the frequency of the cavity pump laser, such that
Ĥ_cav = -ħΔ_pcĉ^†ĉ
= -ħΔ_pcn̂,
where Δ_pc ≡ ω_p - ω_c is the pump–cavity detuning.
Here, the cavity annihilation operator ĉ and the photon occupation operator n̂ include light of both right- and left-handed circular polarizations.
Although the left- and right-handed cavity modes interact differently with the atomic ensemble, their bare energies are approximately degenerate, and here they can be considered together as n̂ = n̂_+ + n̂_-.
For an ensemble of noninteracting atoms indexed i at positions r⃗_i and spin projection f_z^(i) along the direction of the magnetic field, the spin Hamiltonian is given by Ĥ_spin = ∑_i ħω_L(r⃗_i) f̂_z^(i).
Here, the local spin precession frequency is given by ħω_L(r⃗) = g_F μ_B |B⃗(r⃗)|, where g_F is the Landé g-factor and μ_B is the Bohr magneton.
For a localized ensemble of atoms, the magnetic field is approximately constant, such that this can be rewritten in terms of an average spin precession frequency ω_L and a total spin projection F̂_z = ∑_i f̂_z^(i):
Ĥ_spin = ħω_LF̂_z.
Generically, the interaction between the cavity and atom i is described by
Ĥ_int^(i)
= ħ∑_g,e
g_g;e^+(r⃗_i) ĉ^†_+ σ̂_e;g^(i) δ_m+1, m'
+
g_g;e^-(r⃗_i) ĉ^†_- σ̂_e;g^(i) δ_m-1, m' +
h.c.
Here, the summation runs over all possible transitions from the ground-state
manifold g ≡ | f = 2, m ⟩ to the excited states e ≡ | f' = 3, m' ⟩, with polarization-dependent coupling strengths g_g;e^±.
When the cavity–atom detuning Δ_ca is large compared to the hyperfine splittings Δ_f' in the excited (f' = 3) manifold that is being addressed, the excited states can be eliminated.
This approximation results in a spin-dependent dispersive interaction Hamiltonian, describing dynamics within the ground-state manifold:
Ĥ_int^(i)
= ħΩ̄ |U(r⃗_i)|^2 {α_0 ( n̂_+ + n̂_- ) +
α_1 ( n̂_+ - n̂_- ) f̂_k^(i)
+
α_2 [
( n̂_+ + n̂_- ) ( f̂_k^(i))^2 -
ĉ^†_- ĉ_+ ( f̂_+^(i))^2 -
ĉ^†_+ ĉ_- ( f̂_-^(i))^2
]
},
where |U(r⃗_i)|^2 is the local relative intensity of the cavity pump mode, where ĉ_± are the annihilation operators for left- and right-handed cavity modes, which are approximately degenerate in our system, and where f̂_k and f̂_± are the spin operators relative to a quantization axis along the cavity axis k̂.
Here, the scalar, vector, and tensor interactions between the spin and the cavity field are described by coupling coefficients (α_0, α_1, α_2) → (2/3, 1/6, 0) in the limit of large |Δ_ca| (coupling-coefficients).
In our system, the atomic ensemble is primarily localized within a single antinode of the cavity pump field, which allows the local cavity field U(r⃗_i) to be treated as approximately constant.
This leaves
Ĥ_int = ħΩ̄{α_0 N̂ n̂ +
α_1 ( n̂_+ - n̂_- ) F̂_k },
such that the total system Hamiltonian, in the limit |Δ_ca| ≫ |Δ_f'|, is given by
Ĥ
= -ħΔ_pcn̂ +
ħω_LF̂_z
+
ħΩ̄{α_0 N̂ n̂ +
α_1 ( n̂_+ - n̂_- ) F̂_k }.
When the cavity is pumped with only right-handed (σ_-) light, this reduces to hamiltonian-general.
When the cavity is pumped with only left-handed light, the sign of the cavity–spin interaction flips, and with it the sign of the gain of the feedback system.
§ B. NONDESTRUCTIVE MEASUREMENT OF THE COLLECTIVE ATOMIC SPIN STATE
When the externally applied magnetic field is parallel to the cavity axis (θ_B = 0), the system Hamiltonian (hamiltonian-general) corresponding to pumping the cavity with right-handed (σ_-) light commutes with the total spin energy, since this becomes equivalent to the projection of the spins along the cavity axis:
Ĥ^- =
- ħ( Δ_pc - 2/3 Ω̄N̂ + 1/6 Ω̄F̂_z ) ĉ^†ĉ
+ ħω_LF̂_z,
where the superscript on Ĥ^- indicates that this Hamiltonian only considers the right-handed cavity mode.
If the dispersive shift to the cavity resonance condition is measured by comparing the resonance frequencies with and without the presence of atoms, it will be given by
δω^-
= -2/3 Ω̄ N + 1/6 Ω̄ ⟨F̂_z⟩,
where the superscript on δω^- indicates that this is the dispersive shift to the right-handed cavity mode.
If the atom number were known exactly, this measurement would be sufficient to determine the collective spin energy of the ensemble; however, variable atom loss between state preparation and readout make this impractical.
By pumping the cavity with left-handed light, different information can be acquired.
Considering the case of an external field parallel to the cavity axis, the Hamiltonian can be derived which describes the left-handed (σ_+) cavity mode:
Ĥ^+ =
- ħ( Δ_pc - 2/3 Ω̄N̂ - 1/6 Ω̄F̂_z ) ĉ^†ĉ
+ ħω_LF̂_z,
which corresponds to a dispersive shift
δω^+
= -2/3 Ω̄ N - 1/6 Ω̄ ⟨F̂_z⟩.
Using a pair of liquid crystal variable retarders (LCVRs) at the input and the output of the cavity, the polarization of the light pumping the cavity can be switched rapidly between left- and right-handed without otherwise affecting the detection chain.
By measuring the resonance frequencies of each of the polarizations with the atomic ensemble present in the cavity, and then repeating both measurements with the atoms absent, the total atom number and collective spin can be recovered (final-sweeps):
N = -(3/4) ( δω^+ + δω^- )/Ω̄;
⟨F̂_z⟩ = -3 ( δω^+ - δω^- )/Ω̄.
The same effect can be achieved by changing the orientation of the magnetic field to θ_B = 180° between the first and second measurements of δω, such that F̂_k = -F̂_z, but this takes too long to be practical due to the self-inductance of the coils used to generate the field.
The effect can also be achieved by using an external rf field to drive a π-pulse on the collective spin, taking F̂_z → -F̂_z between measurements; this approach has been successfully used in the past, but its dependence on the calibration of the rf drive makes it less appealing than switching the polarization of the pump light.
§ C. QUANTUM MODEL OF A COLLECTIVE SPIN COUPLED TO AN OPTICAL CAVITY
The damping rate of the autonomous feedback system can be derived by considering how the system evolves in time.
Considering the Hamiltonian hamiltonian-feedback, and including terms accounting for pumping into and (non-Hermitian) leakage out of the cavity mode, the cavity field evolves according to
d/dt ĉ
= i( Δ̄ + Ω_v F̂_x sinθ_B ) ĉ
- κĉ
+ κη,
where η is the coherent-state amplitude of the field pumping the cavity.
Here, the field operator ĉ corresponds to the cavity field at frequency ω_p.
Without any coupling to the cavity, Ω_v = 0, and field-evolution can be solved directly, giving
ĉ_0
= ηκ/( κ - iΔ̄ ).
If the effects of the coupling between the cavity and the collective spin are small, the perturbation to the field can be approximated by
ĉ(t)
= ĉ_0 + ĉ'(t).
The collective spin, meanwhile, evolves according to
d/dt F̂_x = -ω_LF̂_y
+ Ω_v ĉ^†ĉ F̂_y cosθ_B;
d/dt F̂_y
= ω_LF̂_x
+ Ω_v ĉ^†ĉ( F̂_z sinθ_B - F̂_x cosθ_B );
d/dt F̂_z = -Ω_v ĉ^†ĉ F̂_y sinθ_B.
For ω_L ≫ Ω_v n̄, as in our experiment, the transverse components admit solutions F̂_y ∝ F_⊥ sinω_L t.
For spins precessing at frequency ω_L at polar angle χ and azimuthal angle ϕ = ω_L t, the projection of the spin along the cavity axis looks like F̂_k = F_⊥ sinθ_B cosϕ + F_z cosθ_B, where F_⊥ ≡ F sinχ and F_z ≡ F cosχ.
Substituting this into field-evolution gives
d/dt ĉ'
= i( Δ̃
+ Ω_v F_⊥ sinθ_B cosω_L t
+ Ω_v F_z cosθ_B ) ( ĉ_0 + ĉ' )
- κ( ĉ_0 + ĉ' )
+ κη.
To lowest order, it seems reasonable to expect a solution that looks like effective cavity drives at frequencies ±ω_L due to the modulation of the bare pump field ĉ_0 by the precessing spins.
This leads to the ansatz
ĉ'(t)
= ĉ_+ e^-iω_L t + ĉ_- e^+iω_L t.
Plugging this into field-equation-of-motion and ignoring quickly rotating terms ∼ e^±2iω_L t as well as terms of order [(Ω_v / 2 κ) F_⊥ sinθ_B]^2 gives
ĉ_0 = ηℒ( Δ̃ + Ω_v F_z cosθ_B );
ĉ_+ = (i/2)(Ω_v/κ) F_⊥ sinθ_B ℒ( Δ̃ + Ω_v F_z cosθ_B + ω_L )
ĉ_0;
ĉ_- = (i/2)(Ω_v/κ) F_⊥ sinθ_B ℒ( Δ̃ + Ω_v F_z cosθ_B - ω_L )
ĉ_0.
Here, ℒ(δ) ≡ κ / (κ - iδ) refers to a Lorentzian line centered at δ = 0 with width κ.
It is desirable to find the effect of the cavity field on the spin energy F_z.
This is given by spin-evolution, and depends on the instantaneous occupation number n̂≡ĉ^†ĉ of the cavity mode:
n̂ = n̂_0 +
( ĉ^†_+ ĉ_0 + ĉ_0^†ĉ_- ) e^+iω_L t +
( ĉ^†_- ĉ_0 + ĉ_0^†ĉ_+ ) e^-iω_L t
= n̂_0 -
(i/2)(Ω_v/κ) F_⊥ sinθ_B n̂_0
×[
ℒ( -Δ̃ - Ω_v F_z cosθ_B + ω_L )
e^-iω_L t
-
ℒ( Δ̃ + Ω_v F_z cosθ_B + ω_L )
e^-iω_L t
+
ℒ( -Δ̃ - Ω_v F_z cosθ_B - ω_L )
e^+iω_L t
-
ℒ( Δ̃ + Ω_v F_z cosθ_B - ω_L )
e^+iω_L t]
Again, terms of order (Ω_v / κ)^2, corresponding to second-order sidebands, have been ignored.
Using the cycle-averages e^±iϕ sinϕ = ±i / 2, this gives the mean change in energy of the collective spin to be
d/dt ⟨F̂_z⟩
= -1/2 F_⊥^2 sin^2θ_B ( Ω_v^2/κ ) ⟨ĉ_0^†ĉ_0⟩
×[
ℒ( Δ̃ + Ω_v F_z cosθ_B + ω_L )
-
ℒ( Δ̃ + Ω_v F_z cosθ_B - ω_L )
],
where the overline indicates time averaging over a Larmor precession cycle.
As expected, for Δ̃ < -Ω_v F_z cosθ_B, the first Lorentzian term is larger, and F_z decreases; conversely, F_z increases for Δ̃ > -Ω_v F_z cosθ_B.
The damping rate of the feedback system can be calculated as the exponential rate at which it approaches resonance in the limit Ω_v F_z cosθ_B → -Δ̃.
In the unresolved sideband regime ω_L ≪ κ, the asymmetry between the sidebands reduces to
ℒ( Δ̃ + Ω_v F_z cosθ_B + ω_L )
- ℒ( Δ̃ + Ω_v F_z cosθ_B - ω_L )
≈ -4 ω_L ( Δ̃ + Ω_v F_z cosθ_B )/κ^2.
Using this approximation along with change-in-Fz and carrier-amplitude, and noting that η^2 = n̄_max corresponds to the mean on-resonance photon occupation of the cavity, the damping rate of the system looks like
β = - Ω_v cosθ_B/( Δ̃ + Ω_v F_z cosθ_B ) d/dt ⟨F̂_z⟩
= - 2 Ω_v^3 F_⊥^2 n̄ ω_L sin^2θ_B cosθ_B/κ^3,
where n̄ = n̄_max κ^2 / (κ^2 + [ Δ̃ + Ω_v F_z cosθ_B ]^2) is the true cavity-filtered photon occupation of the cavity.
For the parameters used in our experiment, this amounts to a damping rate of β = 1600 s^-1.
Notably, however, these parameters do not fall well within the low-modulation regime used to approximate carrier-amplitude.
The inclusion of higher-order terms [(Ω_v / 2 κ) F_⊥ sinθ_B]^2 has the effect of reducing the carrier amplitude found in carrier-amplitude.
In particular, the full expression for the amplitude looks like
ĉ_0
= ηℒ̃(0)
×[
1 +
( Ω_v F_⊥ sinθ_B/2 κ)^2
ℒ̃(0)
[ ℒ̃(ω_L) + ℒ̃(-ω_L) ]
]^-1,
where ℒ̃(ν) ≡ ℒ( Δ̃ + Ω_v F_z cosθ_B + ν) has been written for brevity.
For the parameters used in our experiment, this amounts to a correction factor of 0.7, resulting in a correction factor of 0.5 to n̄ and to the final damping rate: β = 790 s^-1.
The solutions given by ansatz-a and ansatz-b still ignore quickly rotating terms corresponding to higher-order sidebands; however, these effects are confirmed experimentally to be small.
In order to simulate the dynamics of the system, F_z ≡⟨F̂_z ⟩ can be treated as a c-number and gain-final can be used to propagate F_z forward in time.
In the absence of any dephasing, this treatment can be made complete by requiring that the total spin is conserved, F_z^2 + F_⊥^2 = 4 N^2.
Dephasing can be included heuristically by decreasing F_⊥ over time.
In practice, this decrease can take many functional forms, but a simple exponential decay captures much of the system dynamics.
Simulating the feedback process, then, amounts to propagating two coupled differential equations:
d/dt F_z
= β(F_⊥) F_z;
d/dt F_⊥
= -β(F_⊥) F_z^2/F_⊥ - Γ F_⊥.
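For concreteness, a minimal forward-Euler propagation of these two equations might look like the following sketch. It is our own illustration: the explicit form of β(F_⊥) follows the damping-rate expression reconstructed above, and the parameter names (Omega_v, n_bar, omega_L, theta_B, kappa, Gamma) and the integration scheme are assumptions rather than the authors' code.

```python
import numpy as np

def beta(F_perp, Omega_v, n_bar, omega_L, theta_B, kappa):
    # Feedback damping rate; assumed form of the small-modulation result.
    return (-2.0 * Omega_v**3 * F_perp**2 * n_bar * omega_L
            * np.sin(theta_B)**2 * np.cos(theta_B) / kappa**3)

def propagate(F_z, F_perp, Gamma, dt, n_steps, **pars):
    """Forward-Euler integration of the coupled (F_z, F_perp) model."""
    traj = [F_z]
    for _ in range(n_steps):
        b = beta(F_perp, **pars)
        dFz = b * F_z
        # Spin-length constraint plus heuristic exponential dephasing.
        dFp = -b * F_z**2 / F_perp - Gamma * F_perp
        F_z, F_perp = F_z + dFz * dt, F_perp + dFp * dt
        traj.append(F_z)
    return np.array(traj)
```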
The resulting values of F_z can be used as a model function for least-squares fitting, where Γ, as well as an overall offset to F_z which accounts for systematic measurement errors, are allowed to vary (s-curve-fits).
These fits are used to extract the sensitivities reported in dc-pullingd.
|
http://arxiv.org/abs/2307.05832v2 | 20230711225655 | Bag of Views: An Appearance-based Approach to Next-Best-View Planning for 3D Reconstruction | [
"Sara Hatami Gazani",
"Matthew Tucsok",
"Iraj Mantegh",
"Homayoun Najjaran"
] | cs.CV | [
"cs.CV",
"cs.AI",
"68T45",
"I.2.10; I.2.9; I.4.10; I.5.3"
] |
PREPRINT VERSION. JULY, 2023.
Bag of Views: An Appearance-based Approach to Next-Best-View Planning for 3D Reconstruction
Sara Hatami Gazani,
Matthew Tucsok,
Iraj Mantegh,
and Homayoun Najjaran
The authors would like to acknowledge the funding from the National Research Council (NRC) Canada under the grant agreement DHGA AI4L-129-2 (CDB #6835).
S. Hatami and H. Najjaran are with the Department of Mechanical Engineering, University of Victoria, Victoria, BC, CA, V8P 5C2 (e-mail: [email protected]; [email protected]).
M. Tucsok is with the Okanagan School of Engineering, University of British Columbia, Kelowna, BC, CA, V1V 1V7 (e-mail: [email protected]).
I. Mantegh is with the National Research Council (NRC) Canada, QC, CA (e-mail: [email protected]).
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
UAV-based intelligent data acquisition for 3D reconstruction and monitoring of infrastructure has been experiencing an increasing surge of interest due to the recent advancements in image processing and deep learning-based techniques. View planning is an essential part of this task that dictates the information capture strategy and heavily impacts the quality of the 3D model generated from the captured data. Recent methods have used prior knowledge or partial reconstruction of the target to accomplish view planning for active reconstruction; the former approach poses a challenge for complex or newly identified targets while the latter is computationally expensive. In this work, we present Bag-of-Views (BoV), a fully appearance-based model used to assign utility to the captured views for both offline dataset refinement and online next-best-view (NBV) planning applications targeting the task of 3D reconstruction. With this contribution, we also developed the View Planning Toolbox (VPT), a lightweight package for training and testing machine learning-based view planning frameworks, custom view dataset generation of arbitrary 3D scenes, and 3D reconstruction. Through experiments which pair a BoV-based reinforcement learning model with VPT, we demonstrate the efficacy of our model in reducing the number of required views for high-quality reconstructions in dataset refinement and NBV planning[Authors have provided supplementary material including the scripts for the View Planning Toolbox available at https://github.com/ACIS2021/ViewPlanningToolboxhttps://github.com/ACIS2021/ViewPlanningToolbox.].
View planning, 3D reconstruction, intelligent data acquisition, aerial robotics autonomy.
§ INTRODUCTION
Active vision is characterized by the ability of a robot to make decisions about placing or reconfiguring its sensors to complement its perception of the environment <cit.>. This ability leads to meaningful actions of the robot based on interpretations of its surrounding environment and grants the robot a planning strategy, namely view planning, for actively repositioning its sensors to uncover the greatest amount of information about the target. In the context of active 3D reconstruction, view planning is used to optimize the robot's path until the task requirements are satisfied. For the application of 3D reconstruction of infrastructure using UAV-based imaging, which is the concern of this work, the view planning problem dictates the data acquisition process and significantly impacts the reconstruction results. Previous work in this domain either relies on a given proxy of the target to build upon while planning the views <cit.> or generates a partial reconstruction using the knowledge of the agent about the target so the camera is navigated to complete the model <cit.>. In this setting, the agent iteratively calculates the next waypoint to attend where it can capture the next-best-view (NBV) with the highest predicted information gain. However, when the purpose is to capture views from newly recognized targets, or in the case of targeting complex structures, a geometric proxy of the target might not be accessible, and online 3D reconstruction to achieve guidance can be computationally expensive and time-inefficient. On the other hand, under the assumption that adequate computation resources exist onboard the drone, algorithms that use an external model for guidance purposes mostly focus on the coverage completeness of the area, paying less attention to the relative visual information contained in consecutively captured views.
In this work, we propose a novel approach to fully appearance-based view planning for online NBV planning and offline dataset refinement applications. The key uniqueness of our work lies in its model-free nature, which makes it independent of the true state of the environment, namely the actual 3D model, both during training and at inference time. This allows the method to be applied to a wide range of settings and customized to fit different applications. To introduce the concept, we draw parallels between the computational representation of views and how humans perceive objects by recognizing and interpreting distinct visual cues <cit.>. We employ the Bag-of-Visual-Words <cit.> as a simplified vision model to represent an agent's knowledge of the target and introduce spatial information to the visual vocabularies to form a Bag-of-Views (BoV), a model to record and retrieve the visual features of the target from different viewpoints. First, through experimental results, we demonstrate how selecting views using our appearance-based heuristics affects the 3D reconstruction process, and we use these observations to refine already acquired datasets. Next, we bring this approach to an active level and utilize a Soft Actor-Critic (SAC) method <cit.> to train an agent that seeks to capture the target from views that cause the most drastic change in how it remembers the environment so far. The core idea of this method is to use the local visual features of the scene and their positioning to guide the agent to predict the NBV that would reveal the highest number of unseen visual features, as the agent remembers them. In addition, we present the View Planning Toolbox (VPT) as a comprehensive solution for simulating the view capturing process as part of developing view planning algorithms. The introduction of this toolbox fills a significant gap in the area of simulating machine learning-based view planning frameworks and provides developers with an open-source, user-friendly solution to streamline their training and evaluation process.
The main contributions of this work can be summarized as follows:
* Bag-of-Views, a novel fully appearance-based view selecting model for offline dataset refinement. This model needs no pre-training and is modular and customizable for different applications.
* A reinforcement learning approach to appearance-based NBV planning with no complete or partial reconstructions required at training or inference time.
* View Planning Toolbox (VPT, available https://github.com/ACIS2021/ViewPlanningToolboxhere), which provides an environment for training and testing of machine learning-based view planning and 3D reconstruction models.
§ RELATED WORK
§.§ Solutions to the View Planning Problem
Based on the amount of available information to the system about the environment and the complexity of the shape of the target, view planning approaches are often categorized as model-based or model-free (a.k.a. non-model-based) approaches <cit.>.
In model-based view planning problems, a viewing plan is obtained using a previously built or given model of the target <cit.>.
In more general applications and in cases where the target is unknown or introduced to the system in runtime, the viewing strategy should be generated without prior information of the target <cit.>. In such cases, the goal is to manipulate the camera position and orientation in a manner that most unrevealed information about the target is exposed to the agent in each step. Most of the model-free methods follow the NBV approach.
Typically, these systems build an interpretation of the environment using acquired information and base the planning of the next view(s) on it. That interpretation of the scene can be in different forms based on the specific task and application. Among methods that follow this approach are frontier-based methods and volumetric-based methods.
In frontier-based view planning first introduced in <cit.>, the core idea was to visit unexplored regions of an initial map, i.e. frontiers, to update the map based on the newly collected information in those regions. Methods belonging to this category usually represent the target zone of the environment using a 2D <cit.> or 3D <cit.> occupancy grid or directly use a point cloud to map the boundary between explored and unexplored regions <cit.>. As opposed to frontier-based methods that mostly focus on exploring an environment, volumetric-based approaches are usually concerned with modelling a single target and focus more on the completeness of the coverage. In this regard, to guarantee a successful registration, overlaps between the views are also considered while planning the views <cit.>. In addition to 3D modelling, such algorithms are also used for scene inspection <cit.>.
A subcategory of the model-free view planning algorithms, namely appearance-based planning methods, carries out the decision making process based solely on the visual input such as RGB or gray-scale images <cit.>. These methods either rely on an a priori model of the target or a partial reconstruction of the scene, at least during the training stage of their models. For example, a method was proposed in <cit.> which, based on the information gain from different camera poses, computes a candidate sequence of viewpoints for a micro aerial vehicle to attend. More recently, with the advancements of deep reinforcement learning, appearance-based view planning has been visited more often. Accordingly, <cit.> utilized only the captured images to plan the views without tracking a partial reconstruction of the true 3D model. They used the surface coverage percentage to guide the agent. Similarly, <cit.> used the surface coverage as well as reconstruction error as part of the reward for guiding their reinforcement learning agent towards complete reconstruction. Unlike these methods, our proposed model omits the need for the true state of the environment or any partial reconstructions for guiding the agent towards capturing high-utility views for the task of reconstruction. We treat 3D reconstruction as a downstream task as opposed to a parallel task in the view planning framework and pay more attention to the conditions that should be met by the views in order to have a satisfactory reconstruction.
§.§ Simulation Environments for The View Planning Problem
View planning research has explored various simulation environments, each exhibiting unique advantages and constraints. For instance, the PredRecon framework <cit.> utilized AirSim <cit.> for its simulation environment, taking advantage of its high-fidelity simulation capabilities. However, their work also relied on Unreal Engine 4 (UE4) <cit.> for data generation of other 3D models and Blender <cit.> for partial pointcloud reconstructions. This use of multiple software environments can be resource-intensive and complex to set up. In a similar manner, <cit.> used Gazebo <cit.>. While Gazebo is a feature-rich simulator providing a versatile environment for robotic simulations, using it for view planning in photo-realistic environments introduces computational overhead and performance degradation.
Previous work addressed the need for flexibility and customization by employing environments inspired by OpenAI Gym <cit.> to train their reinforcement learning agent for NBV planning <cit.>. Their approach offered greater flexibility in defining the simulation environment and incorporating different perceptual elements. Additionally, the gym-collision-avoidance package <cit.> used in <cit.> provided a simulation environment for informative trajectory planning. With a greater focus on collision avoidance, this package allowed researchers to simulate and evaluate view planning algorithms within a controlled environment.
It is worth noting that simulation environments often require trade-offs between fidelity, computational resources, and the level of control provided over environmental factors. Striking the right balance between these aspects is dependent on the specific task being studied. For view planning, factors affecting high-level control of the simulation are of most importance. These factors include camera controls (pose, focal length, resolution), scene manipulation (lighting, object placement and scaling), and scripting automation of the aforementioned functionalities. To address these trade-offs, we have developed the View Planning Toolbox (VPT), a comprehensive solution outlined in Sec. <ref>.
§ BAG OF VIEWS: APPEARANCE-BASED APPROACH FOR SELECTING BEST VIEWS
Following the work in <cit.>, we introduce a computational representation of the views in terms of the visual features of the scene captured from the respective viewpoints. As discussed in <cit.>, for a successful multi-view 3D reconstruction, two key conditions must be satisfied:
* The views in the set must present features of the target that are distinct from the ones presented by other views in the same set,
* Each view by itself must be rich in the number of visual features it is revealing of the target.
We denote the i^th view in the set as χ_i. Each view χ_i is encoded to and is represented by its extracted features using a feature extracting algorithm such as Scale-Invariant Feature Transform (SIFT) <cit.>. Thus, χ_i will be a 2D matrix with dimensions m× n where m is the number of extracted features and n is the number of values in the feature descriptors. In the case of SIFT, each row of this view matrix represents a 128-dimensional feature descriptor where each element represents a certain attribute of the local feature detected in the image patch. We can denote each view by iterating over its resulting feature descriptors χ_i(j, :) = {f(j,k): k∈{1,2,...,n}} where f(j,k) is the k^th value in the j^th descriptor of the image. These conditions for a set of views result in a greater distance between corresponding descriptors from two view representations denoted as dist(χ_i(j,:), χ_i+1(j',:)) for consecutively selected random views χ_i and χ_i+1. A greater distance ensures a better reconstruction quality for a limited-length trajectory. In our case, the cosine distance metric is used to measure the dissimilarity between the descriptors and the visual words since its consistent range of outputs allows for easy interpretation and score comparison. Thus,
dist(χ_i(j,:), χ_i+1(j',:)) = 1 - 2×cos(χ_i(j,:), χ_i+1(j',:))
where j ∈{1,2,...,m} and j'∈{1,2,...,m'} with m and m' being the number of representative descriptors of χ_i and χ_i+1:
cos(χ_i(j,:), χ_i+1(j',:)) = ⟨χ_i(j,:) , χ_i+1(j',:)⟩/|χ_i(j,:)|.|χ_i+1(j',:)|
Since the extracted features belonging to a target are prone to self-similarity, the feature descriptors of different regions of the target can be clustered based on their similarity and, instead of pair-wise comparison of all feature descriptors belonging to all views in the set, they can be compared to the cluster cores. In addition, the necessity of there being a model-free view planning with no pre-training requires learning these cluster cores iteratively from the incoming information. This clustering of the visual features is inspired by <cit.>, where cluster centers of the quantized feature descriptors were used to form Bag-of-Words.
In the context of view planning for a single target with one or more symmetry axes, learning a global visual vocabulary for the entire model can be prone to overconfidence in recognizing certain visual words. Therefore, we propose a method to track visual features of the target captured from different viewpoints via distinguished visual vocabularies for different regions included in what we call a Bag-of-Views (BoV). As well as mitigating the symmetry challenge, the computation cost will be significantly less with the viewpoint-based vocabularies; a smaller number of visual words is required to describe a portion of the target rather than all parts of it and the new view will only update its corresponding vocabulary among the BoV. Below are the steps involved:
1) Feature Extraction: Using a feature detection algorithm, in our case SIFT, local features of the captured image at position T are extracted in the form of 128-dimensional vectors. T is the position of the camera in the Cartesian coordinate system with its origin located at the center of the scene.
2) Utility Assignment: Depending on the application at hand, assigning utilities to the views can be divided into two different cases:
2.I) In the case of dataset refinement where the views have already been captured, a decision should be made about
including each view in the input set of the reconstruction algorithm based on its utility. The question simply is "does this new view help the reconstruction process?". To answer that, we look at the extent to which this view satisfies the two conditions mentioned before. Each of the feature descriptors from the previous step is compared with its closest visual word through vector quantization <cit.>. Then, a distance metric is used to measure the dissimilarity between the feature descriptors and the visual words that would represent them in the corresponding vocabulary in the BoV, denoted as ν_T. This process is repeated for all of the descriptors, and the sum of the dissimilarity scores is used to decide whether to ignore or utilize the view in the reconstruction process (see the sketch below). If the final score is positive, the view is included in the set and proceeds to the third step; otherwise it is ignored.
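A minimal sketch of this utility score might look as follows. This is our own illustration, assuming L2-normalizable SIFT descriptors and using the scaled cosine dissimilarity defined above; function and variable names are not taken from any released implementation.

```python
import numpy as np

def view_utility(descriptors, vocabulary):
    """Score a candidate view against its regional vocabulary nu_T.

    descriptors: (m, 128) SIFT descriptors of the new view
    vocabulary:  (K, 128) visual words (cluster centers)
    Returns the summed dissimilarity 1 - 2*cos between each descriptor
    and its closest visual word; the view is kept if the score is positive.
    """
    d = descriptors / np.linalg.norm(descriptors, axis=1, keepdims=True)
    w = vocabulary / np.linalg.norm(vocabulary, axis=1, keepdims=True)
    sims = d @ w.T                # (m, K) cosine similarities
    nearest = sims.max(axis=1)    # vector quantization: closest word
    return float(np.sum(1.0 - 2.0 * nearest))
```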
2.II) In the second case, we use the BoV model to train a reinforcement learning agent to propose NBVs based on the appearance of consecutively captured views.
Further elaboration on the learning process will be provided in Sec. <ref>. In this section, our focus lies primarily on the utilization of this model to shape the reward function within the specified context.
While training the reinforcement learning agent, we use the change in the corresponding vocabulary of the BoV after capturing a new view at location T to shape the reward function. A larger difference between the new and previous BoV implies that the new view contains more unseen features and results in a higher reward.
3) View Representation: This step is a continuation of case 2.I. Depending on the position and orientation of the captured viewpoint, the extracted features update the specific vocabulary assigned to the range of views that the new view belongs to. This updating includes clustering of the descriptors belonging to that region using a clustering algorithm such as K-means <cit.>. Thus, every group of the cluster centers in the BoV, namely every regional vocabulary, describes the appearance of the target from viewpoints that are close in position and orientation.
Algorithm <ref> showcases the pseudo-code for creating a BoV model.
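A compact sketch of such a model, pairing OpenCV's SIFT with scikit-learn's K-means, is shown below. It is our own illustration of the three steps above; the azimuth-based binning of viewpoints into regions and all class and parameter names are assumptions, not the released implementation.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

class BagOfViews:
    """Regional visual vocabularies indexed by a viewpoint bin."""

    def __init__(self, n_regions=8, n_words=64):
        self.n_regions, self.n_words = n_regions, n_words
        self.sift = cv2.SIFT_create()
        self.features = {r: [] for r in range(n_regions)}  # raw descriptors
        self.vocab = {}                                    # cluster centers

    def region(self, T):
        # Bin the camera position T = (x, y, z) by azimuth around the
        # scene center; elevation could be added for finer regions.
        phi = np.arctan2(T[1], T[0]) % (2.0 * np.pi)
        return int(phi / (2.0 * np.pi / self.n_regions))

    def update(self, image_gray, T):
        # Step 1: feature extraction for the view captured at T.
        _, desc = self.sift.detectAndCompute(image_gray, None)
        if desc is None:
            return
        # Step 3: re-cluster the region's descriptors into visual words.
        r = self.region(T)
        self.features[r].append(desc)
        stacked = np.vstack(self.features[r])
        k = min(self.n_words, len(stacked))
        self.vocab[r] = KMeans(n_clusters=k, n_init=4).fit(stacked).cluster_centers_
```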
§ A REINFORCEMENT LEARNING APPROACH TO APPEARANCE-BASED NBV PLANNING
The problem of NBV planning is a sequential decision making process that can be defined as a Partially Observable Markov Decision Process (POMDP) and be solved through reinforcement learning algorithms. Our goal is to achieve this without any need for a priori knowledge of the target and without any full or partial reconstruction of the target during training or inference time. We begin by formulating different components of this process. The goal of the agent in our system is to iteratively propose next views for a limited number of steps to reach regions with a high number of features unfamiliar to the BoV. Seeking such views leads to drastic changes in the vocabularies of the BoV through each relocation of the camera.
The state space should provide the agent with enough information about the environment to enable meaningful actions towards the goal. We use the concatenation of down-sampled gray-scale images captured through the last τ consecutive frames as well as the concatenation of the normalized camera locations associated with each view. Thus, the state s_t at time t is defined as {T_t-τ:t, obs_t-τ:t}. The camera location T_t is presented using the spherical coordinate system in the form of {R, ϕ, θ} with three values for radial distance from the center, the azimuth angle, and the elevation angle. Also, given a deterministic state transition, the action a_t determines the next camera location T_t+1 after being re-scaled to the specified ranges for its three components.
Based on the introduction in case I of Sec. <ref>, the reward received for this action represents the change in the part of the BoV that has been influenced by the new action, namely ν_T_t+1 which is the vocabulary associated with the region that the new location belongs to. This change is measured through comparing the same regional vocabulary before and after taking the action; the closest visual words in the two vocabularies are identified and their distance is measured through vector quantization with the cosine distance metric.
The sum of these distances is used to represent the change in the vocabulary after taking the action. We also add a negative constant reward at each time step. Thus, we define the reward to be r_t+1 = dist(ν_T_t+1, ν_T_t) - 1, with the distance between vocabularies being defined as:
dist(ν_T_t+1, ν_T_t) = ∑_i ( 1 - 2 cos( ν_T_t+1(i), ν_T_t(k) ) )
where k = argmax_j cos(ν_T_t+1(i), ν_T_t(j)) and i iterates over the visual words in ν_T. The number of vocabularies in the BoV and the size of each vocabulary are dependent on the resources available during training and runtime.
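As a concrete illustration, the reward above can be computed as follows, assuming each vocabulary is stored as an array of visual-word vectors (the helper names are ours, not part of any released code):

import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def vocab_distance(vocab_new, vocab_old):
    # For every visual word i in the new vocabulary, find its closest word
    # k in the old vocabulary (k = argmax_j cos) and accumulate 1 - 2*cos.
    total = 0.0
    for w_new in vocab_new:
        best = max(cosine(w_new, w_old) for w_old in vocab_old)
        total += 1.0 - 2.0 * best
    return total

def reward(vocab_after, vocab_before):
    # The constant -1 per step penalizes trajectories that add little information.
    return vocab_distance(vocab_after, vocab_before) - 1.0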
§ SIMULATION ENVIRONMENT
To address the need for a lightweight, flexible, and easy-to-integrate simulation environment, we introduce the View Planning Toolbox (VPT). VPT contains tools for simulating the components required to train, visualize, and test view planning algorithms entirely within Python <cit.> and Blender <cit.>. At the core of VPT is the UAV Camera, which simplifies the visual data acquisition process carried out by an Uncrewed Aerial Vehicle (UAV) down to a camera floating in 3D space, where the specifications and pose of the camera can be determined either manually, programmatically, or through the use of a view planning algorithm.
§.§ UAV Camera
The UAV Camera simulates the base functionality of a drone traversing a 3D scene to gather visual information including both RGB and depth images from the environment. It is a simple pinhole model where the developer specifies the resolution of the captured image in pixels, the focal length in millimeters, and the depth range in meters. The UAV Camera implementation encapsulates BlendTorch <cit.>, a framework designed to integrate PyTorch <cit.> deep learning models with Blender <cit.>. The BlendTorch framework provides a convenient, multiprocessing-safe means of communication between Blender environments and native Python applications. In addition, the UAV Camera can be instantiated in an OpenAI gym <cit.> environment for use with reinforcement learning models.
§.§ Data Generation
VPT can serve a dual purpose as shown in Fig. <ref>:
* Generate state-action pair experiences for training reinforcement learning models. This requires accessing the UAV Camera directly which can be instantiated in a Gym environment.
* Generate offline datasets for training supervised learning models. This functionality is contained within Scanning Plans and adheres to traditional photogrammetric principles <cit.>, ensuring proper determination of end and side overlap for aerial scanning.
As a way of evaluating our appearance-based view planning model in Sec. <ref>, we compare the resultant reconstructions from the captured views with those generated by views that fully cover the targets. These views are generated using a hemispherical scan in which the end and side overlap between images is maintained at a distance R from the center of the scene. For calculating the spacing of views around the hemisphere, we make a flat plane approximation to determine the overlap between image pairs. Due to the normalization of the structure dimensions in our environment, the surface of the target structure appears far from the camera. As a result, the flat plane approximation does not lead to a lower overlap than intended.
First, the overlap between two adjacent views is determined by the width of the view frame when projected onto the surface plane, W = 2D tan(FOV/2), where D is the distance to the surface plane relative to the camera and FOV is either the vertical or horizontal field of view of the camera. Given a desired separation S between two viewpoints perpendicular to the surface plane, the overlap becomes:
OL = (2Dtan(FOV/2) - S)/(2Dtan(FOV/2))
We can relate the separation between the viewpoints with the desired overlap using Eq. <ref>:
S = 2Dtan(FOV/2)[1 - OL]
The latitudinal separation can be calculated using Eq. <ref> with the vertical FOV of the camera and a specified side overlap for OL. The latitudinal separation angle is then θ_s = arctan(S/R) ≈ S/R, where the small-angle approximation is justified by the high overlap between adjacent views. However, the number of views n_v on a circular path depends on the height z of the path at each level of the hemispherical scan. This dependency is shown in Eq. <ref>:
n_v = 2π√(R^2-z^2)/S
Also, the longitudinal separation angle ϕ_s between views in the circular path is calculated using a small-angle approximation as:
ϕ_s = 2π/n_v = S/√(R^2-z^2)
Iterating θ_s over the range [0, π/2] and ϕ_s over the range [0, 2π] in a nested loop, we generate the trajectory of views as seen in Fig. <ref>.
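The nested loop can be sketched as follows; the field-of-view values are in radians, the default overlap is an illustrative choice, and the camera-to-surface distance is taken to be R under the flat plane approximation discussed above:

import numpy as np

def hemisphere_scan(R, fov_v, fov_h, overlap=0.8):
    D = R                                               # flat plane approximation
    S_lat = 2 * D * np.tan(fov_v / 2) * (1 - overlap)   # latitudinal spacing (side overlap)
    S_lon = 2 * D * np.tan(fov_h / 2) * (1 - overlap)   # longitudinal spacing (end overlap)
    theta_s = np.arctan(S_lat / R)                      # latitudinal separation angle

    views = []
    for theta in np.arange(0.0, np.pi / 2, theta_s):    # elevation sweep
        z = R * np.sin(theta)
        ring = np.sqrt(R**2 - z**2)                     # radius of the circular path at height z
        n_v = max(1, int(np.ceil(2 * np.pi * ring / S_lon)))
        phi_s = 2 * np.pi / n_v                         # longitudinal separation angle
        for phi in np.arange(0.0, 2 * np.pi, phi_s):
            views.append((ring * np.cos(phi), ring * np.sin(phi), z))
    views.append((0.0, 0.0, R))                         # single top-down view at the pole
    return np.array(views)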
Each scene contains a photo-realistic structure which has been centered about the z-axis and lies on the XY-plane. The structure is also normalized by its largest dimension. This ensures structures of all sizes can be fully visible from all viewpoints.
§.§ TSDF-Fusion 3D Reconstruction
To evaluate the performance of our view planning algorithms, VPT integrates a Truncated Signed-Distance Function (TSDF) Fusion 3D reconstruction module from <cit.>. A wrapper has been implemented to isolate the CUDA environment of the TSDF-Fusion module to prevent conflicts with PyTorch. The TSDF-Fusion module creates either a 3D point cloud or a 3D mesh using the marching cubes method <cit.>. TSDF-Fusion reconstruction requires RGB views, the corresponding depth maps, and the computed 4×4 rigid transformation matrices based on the poses for the corresponding views. This module acts as a lightweight photogrammetric stand-in to quickly evaluate the reconstruction quality from hundreds of views.
§ IMPLEMENTATION DETAILS AND EXPERIMENTS
§.§ Network Architecture and Training
The policy network is composed of two different sub-networks for processing the two components of the state; the observation network and the location network. Details of these two sub-networks are shown in Fig. <ref>.
We use the Soft Actor-Critic (SAC) algorithm by <cit.> to train the policy network. SAC is powered by the advantages of actor-critic methods and maximum entropy reinforcement learning. It is a model-free, off-policy algorithm that optimizes both the policy and the value function simultaneously through a combination of policy gradient and Q-value updates. It maximizes the expected return while incorporating a soft entropy regularization term <cit.>.
The critic network, which is responsible for mapping state-action pairs to their quality values, encodes the concatenation of the observation component of the state through a sequence of 3D convolutional layers followed by 3D batch normalization layers and ReLU activation functions. Then, the output is flattened and goes through two fully connected layers with an output size of 128. A sub-network also processes the concatenation of the location component of the state and the action through fully connected layers of output size 64 and 128, respectively. The resulting feature vector is then concatenated with the encoded observations and goes through two fully connected layers of output size 128 and 3 as the output quality value vector.
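A compact PyTorch sketch of such a critic is given below. The kernel sizes, channel widths, and frame resolution are not specified in the text and are assumptions made for illustration; only the layer widths (64/128) and the overall topology follow the description above.

import torch
import torch.nn as nn

class Critic(nn.Module):
    def __init__(self, tau=4, img=64, loc_dim=3, act_dim=3):
        super().__init__()
        self.obs_net = nn.Sequential(                   # input: (B, 1, tau, H, W) gray frames
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.Flatten())
        with torch.no_grad():                           # infer the flattened feature size
            n = self.obs_net(torch.zeros(2, 1, tau, img, img)).shape[1]
        self.obs_fc = nn.Sequential(nn.Linear(n, 128), nn.ReLU(),
                                    nn.Linear(128, 128), nn.ReLU())
        self.loc_act = nn.Sequential(nn.Linear(tau * loc_dim + act_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),
                                  nn.Linear(128, 3))    # output quality-value vector

    def forward(self, obs, loc, act):
        h_obs = self.obs_fc(self.obs_net(obs))
        h_la = self.loc_act(torch.cat([loc, act], dim=-1))
        return self.head(torch.cat([h_obs, h_la], dim=-1))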
§.§ Evaluation of The View Planning Results
To evaluate the utility of the suggested views using the BoV model, we use the reconstruction method explained in Sec. <ref> and compare the resulting 3D point clouds and meshes from the captured views with those from a complete coverage scan of the target discussed in Sec. <ref>. We analyze the reconstruction results of the offline dataset refinement and online NBV planning using the Chamfer discrepancy, Hausdorff distance, and the mesh-to-mesh comparison carried out in CloudCompare <cit.>. Chamfer discrepancy gives an overall measure of similarity or dissimilarity between two point sets, while Hausdorff distance focuses on the largest observed distance, highlighting extreme differences. Comparing scanning results using these metrics, a higher Chamfer discrepancy indicates poor coverage of the target, while a higher Hausdorff distance suggests missed areas in scanning, resulting in reconstruction with holes.
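Both metrics can be computed directly from the reconstructed point clouds; a sketch using SciPy k-d trees is shown below. The symmetric definitions used here are one common convention and may differ in detail from the CloudCompare implementation:

import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_hausdorff(P, Q):
    # P, Q: (n, 3) and (m, 3) arrays of reconstructed points.
    d_pq, _ = cKDTree(Q).query(P)            # nearest-neighbor distances P -> Q
    d_qp, _ = cKDTree(P).query(Q)            # nearest-neighbor distances Q -> P
    chamfer = d_pq.mean() + d_qp.mean()      # symmetric Chamfer discrepancy
    hausdorff = max(d_pq.max(), d_qp.max())  # symmetric Hausdorff distance
    return chamfer, hausdorff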
§.§ Bag-of-Views for Dataset Refinement
To study the effect of the size of BoV, we test the model performance in filtering two of the generated datasets consisting of 288 views uniformly covering two of the buildings in our dataset as explained in Sec. <ref>. We used the method explained in Sec. <ref> and Alg. <ref> to reduce the dataset size and evaluate reconstruction performance resulting from the remaining views.
The two effective variables in this study are the number of view ranges defining the size of BoV and the number of visual words in each vocabulary of the BoV. The reconstruction results are compared based on their Hausdorff distance and Chamfer discrepancy with the 3D point cloud reconstructed using the original dataset of 288 views. The results are listed in Table <ref>. In addition to this quantitative comparison, we visualize the distance between the mesh reconstructions and those produced using the original dataset in Fig. <ref>.
This analysis explores the question of achieving optimal performance by balancing model efficiency (number of selected views) and reconstruction quality. Shown in Fig. <ref>, a larger BoV, achieved by increasing the number of vocabularies, results in a reconstruction that exhibits reduced spatial sparsity in the error surrounding the model.
This can be interpreted as a more uniform scan of the target when the visual vocabularies are defined over smaller view ranges. For a fixed number of words in each vocabulary, each view is then compared against fewer competitors, and the resulting visual words are more local to that region. While each view has a higher chance of being represented in the vocabulary of its view range, views similar to it have a lower chance of being accepted into the set, as the dominant local features have already been identified. This effect is reinforced when the number of visual words per vocabulary in the BoV is increased: the more visual words attributed to each view range, the higher the chance that a newly captured view, and the details exposed to it, are already familiar.
§.§ Appearance-based Next-Best-View Planning: Baseline Comparison and Generalization
Based on the problem formulation in Sec. <ref> and the training algorithm in Sec <ref>, we trained an agent in a reinforcement learning cycle depicted in Fig. <ref> to scan multiple structures from the dataset we generated using VPT and publicly available 3D models.
We set the termination condition as the completion of one pass around the target to demonstrate the ability of the model to iteratively find the optimal views, and compare the resulting reconstructions with the results of a full coverage scan of the target as the baseline. The resultant trajectory for a sample from the dataset is shown in Fig. <ref>.
In addition, we applied the policy from the first part, trained on a single structure, to two unknown buildings to compare the results and test the generalizability of the learned policy. Shown in Fig. <ref>, the results demonstrate the efficacy of the model in finding the optimal views which result in high-quality reconstructions.
§ CONCLUSIONS AND FUTURE WORK
Our study tackled the challenge of model-free view planning by introducing a novel appearance-based computational representation of reconstruction targets that can be of utmost utility for UAV-based aerial photogrammetry. Our model enables utility assignment to the views without tracking a full or partial reconstruction of the target through tracking unfamiliar visual features with its vocabularies contained within what we called a Bag-of-Views (BoV). We also developed the View Planning Toolbox (VPT), which offers a comprehensive solution for training, evaluation, and custom dataset generation in the context of view planning and 3D reconstruction. Our first set of experiments focused on exploring the size of BoV and the vocabularies within it on reconstruction quality. Using the reconstruction results from a complete coverage scan of the target as a baseline, we found that the BoV model achieved a remarkable reduction of views used for reconstruction (70.6% decrease) while simultaneously reducing the reconstruction error (33.5% decrease). These outcomes showcased the efficacy of our model in identifying optimal views for reconstruction. Building upon this proof of concept, we extended the application of the BoV model to shape the reward of a reinforcement learning (RL) agent trained using the Soft Actor-Critic (SAC) algorithm for online NBV planning. Once again, our model yielded high-quality reconstructions with a significantly low number of views (down to 5% of the number of baseline views). Furthermore, the RL model exhibited substantial generalizability to unseen targets. Notably, we discovered that the degree of generalizability depended on the relative visual complexity of the training and testing environments, further validating the effectiveness of our appearance-based view selection approach. While our work primarily focused on 3D reconstruction, the modular nature of our proposed method lends itself well to customization for various other applications. Promising future research can include using custom feature extractors and pre-training the visual vocabularies of the BoV for tracking certain visual features associated with structural defects in infrastructure.
|
http://arxiv.org/abs/2307.08494v1 | 20230714100130 | Visual Explanations with Attributions and Counterfactuals on Time Series Classification | [
"Udo Schlegel",
"Daniela Oelke",
"Daniel A. Keim",
"Mennatallah El-Assady"
] | cs.HC | [
"cs.HC",
"cs.LG"
] |
|
http://arxiv.org/abs/2307.04086v1 | 20230709024221 | Age of FGK Dwarfs Observed with LAMOST and GALAH: Considering the Oxygen Enhancement | [
"Tiancheng Sun",
"Zhishuai Ge",
"Xunzhou Chen",
"Shaolan Bi",
"Tanda Li",
"Xianfei Zhang",
"Yaguang Li",
"Yaqian Wu",
"Sarah A. Bird",
"Ferguson J. W.",
"Jianzhao Zhou",
"Lifei Ye",
"Liu Long",
"Jinghua Zhang"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
Beijing Planetarium, Beijing Academy of Science and Technology, Beijing, 100044, China
[email protected]
Research Center for Intelligent Computing Platforms, Zhejiang Laboratory, Hangzhou 311100, China
[email protected]
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
[email protected]
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
School of Physics and Astronomy, University of Birmingham, Birmingham, B15 2TT, United Kingdom
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
Sydney Institute for Astronomy (SIfA), School of Physics, University of Sydney, NSW 2006, Australia
Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20
Datun Rd., Chaoyang District, Beijing 100101,
People’s Republic of China
Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, People's Republic of China
Department of Physics, Wichita State University, Wichita, KS 67260-0032, USA
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
Institute for Frontiers in Astronomy and Astrophysics, Beijing Normal University, Beijing 102206, China
Department of Astronomy, Beijing Normal University, Beijing 100875, People’s Republic of China
Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, A20
Datun Rd., Chaoyang District, Beijing 100101,
People’s Republic of China
Varying oxygen abundance could impact the modeling-inferred ages. This work aims to estimate the ages of dwarfs considering observed oxygen abundance. To characterize 67,503 LAMOST and 4,006 GALAH FGK-type dwarf stars, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. Compared with ages determined with commonly-used α-enhanced models, we find a difference of ∼9% on average when the observed oxygen abundance is considered. The age differences between the two types of models are correlated to [Fe/H] and [O/α], and they are relatively significant on stars with [Fe/H] ≲ -0.6 dex. Generally, varying 0.2 dex in [O/α] will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The fractional age difference of high-O stars with [O/α] ∼ 0.4 dex reaches up to -33% to -42% at [Fe/H] ≲ -0.6 dex. We also analyze the chemical properties of these stars. We find a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr for the stars from the LAMOST and GALAH. The [O/Fe] of these stars increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, indicating that the younger population is more O-rich.
§ INTRODUCTION
Galactic archaeology uses the chemical abundances, kinematics, and derived ages of resolved stellar populations as fossils to investigate the formation and evolution history of the Milky Way <cit.>. However, in comparison to chemical abundance and kinematics estimation, estimating the ages of field stars is a challenging task due to the inherent uncertainties present in both observational data and the stellar models employed for dating stars <cit.>.
The chemical composition of a star is a fundamental input parameter in the construction of its theoretical model, which is critical in the determination of its age. Notably, at fixed [Fe/H], the abundance variations of individual elements exert a consequential impact on the overall metallicity Z, which subsequently determines the opacity of the stellar models. This, in turn, influences the efficiency of energy transfer and the thermal structure, thereby altering the evolution tracks on the HR diagram and the main-sequence lifetime <cit.>. Consequently, in the context of stellar modeling, it is essential to consider the proper metal mixture in order to accurately characterize stars and determine their ages.
The solar-scaled ([α/Fe] = 0) and α-enhanced mixtures have been commonly used in theoretical model grids like Y2 isochrones <cit.>, Dartmouth Stellar Evolution Database <cit.>, and Padova stellar models <cit.>. These models treated all the α-elements, that are O, Ne, Mg, Si, S, Ca, Ti, by the same factor.
Observations from high-resolution spectroscopic data have presented very different O-enhancement values from other α-elements on many stars <cit.>.
The observed discrepancies in the abundances of oxygen and other α-elements can be attributed to the diverse origins of these elements. Specifically, O and Mg are believed to be primarily synthesized during the hydrostatic burning phase of massive stars and subsequently ejected during the core-collapse supernovae (CCSNe) <cit.>. Nevertheless, some works have provided evidence that Mg might also be partially released into the interstellar medium by SNe Ia <cit.>, while O appears to be solely enriched by CCSNe <cit.>. The other α-elements, namely Si, Ca, and Ti, primarily originate from the explosive burning of CCSNe and are partially contributed by SNe Ia <cit.>.
For instance, 22% of Si and 39% of Ca come from SNe Ia according to the chemical evolution models in <cit.>.
Therefore, not all α-elements vary in lockstep, the abundance of oxygen may not necessarily correlate with the abundance of other α-elements.
Many works have also discussed the effects of varying individual element abundances on the stellar evolution models <cit.>. Theoretical models showed that the oxygen abundance influences the stellar evolution differently from the other α-elements <cit.>.
Furthermore, <cit.> proposed the CO-extreme models, which treat oxygen abundance differently from the other α-elements and add carbon abundance in the stellar evolution models. The models have been employed to determine the ages of thousands of metal-poor halo stars, disk stars, and main sequence turn-off stars <cit.>. These results showed that increasing oxygen abundance leads to smaller age determination for the stars with [Fe/H] < -0.2. For the stars with [Fe/H] < -0.2 and [O/α] > 0.2 dex, the age difference would be about 1 Gyr. Due to the limited sample sizes of previous studies (<cit.>, with 70 stars, and <cit.>, with 148 stars) or the restricted range of [Fe/H] values <cit.>, there is a pressing need for a large and self-consistent sample to conduct a quantitative analysis regarding the impact of O-enhancement on age determination.
Recently, millions of stars' individual element abundances have been measured by spectroscopic surveys like LAMOST <cit.>, APOGEE <cit.>, and GALAH <cit.>. These large sky surveys provide an excellent opportunity to study the effects of oxygen abundance variations on age determinations across a wide range of stellar parameters. To investigate the systematic effects of O-enhancement on age determination, we study the dwarf stars with available oxygen abundance measurements from LAMOST and GALAH. This paper is organized as follows: Section <ref> mentions the data selection; Section <ref> describes computations of stellar model grids; Section <ref> demonstrates ages differences between the O-enhanced models and α-enhanced models; the resulting age-abundance trends are presented in Section <ref>; and the conclusions of this work are drawn in Section <ref>.
§ TARGET SELECTION
In this work, we make use of spectroscopic data from LAMOST DR5 Value Added Catalogue <cit.> and Third Data Release of GALAH <cit.>,
together with astrometric data from Gaia Data Release 3 <cit.>.
§.§ Spectroscopic Data
LAMOST (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope) DR5 Value Added Catalog <cit.> contains more than 6 million stars with atmosphere parameters (T_ eff, log g, V_mic) and chemical abundances of 16 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni, Cu, and Ba). Measurements of element abundances are based on the DD–Payne tool <cit.>, which is a data-driven method that incorporates constraints from theoretical spectral models.
It is noteworthy that, as discussed by <cit.>, the direct derivation of oxygen abundances from atomic oxygen lines or oxygen-bearing molecular lines in low-resolution (R ∼ 1800) LAMOST spectra is unfeasible. Alternatively, CH and CN molecular lines can be utilized for indirect estimation of oxygen abundances, as their strengths are sensitive to the amount of carbon locked up in CO molecules. As a result, the LAMOST oxygen abundances are only available in the cooler stars (T_ eff ≲ 5700 K), where the CH and CN lines have sufficient strength to allow a reasonably precise (±0.10 dex) estimate of [O/Fe] <cit.>.
Due to the wide age range and the preservation of initial chemical abundances, the main-sequence star could be a good tracer of stellar populations. Therefore, we select the main-sequence stars with available measurements for [Fe/H], [α/Fe], and [O/Fe] from the catalog. Firstly, we use some recommended labels (T_ eff_flag = 1, log g_flag = 1, [Fe/H]_flag = 1, [X/Fe]_flag[[X/Fe]_flag = 1 for 14 elements (C, N, O, Na, Mg, Al, Si, Ca, Ti, Cr, Mn, Fe, Co, Ni).] = 1, qflag_chi2 = good) to select stars with reliable measurements. Afterward, we remove stars with T_ eff smaller than 5000 K or signal-to-noise ratio (S/N) less than 50 because their [O/Fe] determinations are not robust. <cit.> also provided a tag named “qflag_singlestar” to infer whether a star is single or belongs to a binary system. The tag is determined by the deviation significance of the spectroscopic parallax from the Gaia astrometric parallax. When the deviation is less than 3σ, it suggests an object is a single star. We use this tag to remove all candidate binaries from our sample.
Finally, we choose stars with log g > 4.1, which leaves a total of 187,455 unique stars.
GALAH (Galactic Archaeology with HERMES) DR3 <cit.> presents stellar parameters (T_ eff, log g, [Fe/H], V_mic, V_broad, V_rad) and up to 30 elemental abundances for 588,571 stars, derived from optical spectra at a typical resolution of R ∼ 28,000.
The oxygen abundance from GALAH DR3 was calculated using the O_ I 777 nm triplet <cit.>, based on a non-LTE method (LTE: local thermodynamic equilibrium)<cit.>.
This NLTE method has also been employed for the measurement of [Fe/H] in GALAH.
Following the recommendations in GALAH DR3, we require a SNR > 30, and a quality flag = 0 for reliable stellar parameter determination including iron, α-elements, and oxygen abundances (flag_sp = 0, flag_fe_h = 0, flag_alpha_fe = 0, and flag_o_fe = 0). Additionally, the sample is limited to the stars with e_alpha_fe < 0.1 and e_o_fe < 0.1. We exclude the binary systems identified by <cit.> (which is a catalog of FGK binary stars in GALAH). These cuts give us a sample of 19,512 dwarf stars (log g> 4.1).
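For illustration, these cuts can be written as a simple table query on the GALAH DR3 main catalog; the file name and the S/N and surface-gravity column names below are placeholders, and binary_ids stands for the identifiers taken from the FGK binary catalog:

import pandas as pd

galah = pd.read_csv('galah_dr3_main.csv')          # placeholder file name
binary_ids = set()                                 # sobject_id values of known binaries

mask = ((galah['snr'] > 30)                        # recommended signal-to-noise cut
        & (galah['flag_sp'] == 0)                  # reliable stellar parameters
        & (galah['flag_fe_h'] == 0)
        & (galah['flag_alpha_fe'] == 0)
        & (galah['flag_o_fe'] == 0)                # reliable [Fe/H], [alpha/Fe], [O/Fe]
        & (galah['e_alpha_fe'] < 0.1)
        & (galah['e_o_fe'] < 0.1)                  # precise abundance uncertainties
        & (galah['logg'] > 4.1)                    # dwarfs only
        & ~galah['sobject_id'].isin(binary_ids))   # exclude binary systems

dwarfs = galah[mask]                               # 19,512 stars for our selection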
§.§ Astrometric Data
We cross-match our selected LAMOST and GALAH samples with Gaia DR3 <cit.> catalog to obtain the luminosity for each star. Given that luminosity is utilized as a key observational constraint for estimating stellar age, we select stars with luminosity uncertainty less than 10%. Additionally, we select single stars by making a cut based on the Gaia re-normalized unit weight error (RUWE) being less than 1.2 (RUWE values are from the Gaia DR3). Our final sample consists of 149,906 stars from LAMOST (5000 K < T_ eff < 5725 K, -1 < [Fe/H] < 0.5, log g> 4.1) and 15,591 stars from GALAH (4500 K < T_ eff < 7000 K, -1 < [Fe/H] < 0.5, log g> 4.1).
We calculate the Galactic Cartesian coordinates (X, Y, Z) and velocities (U, V, W) for the LAMOST sample using the Python package Galpy <cit.>. The distances are estimated by <cit.>. The Sun is located at (X, Y, Z) = (-8.3, 0, 0) kpc, and the solar motion with respect to the local standard of rest is (U_⊙, V_⊙, W_⊙) = (11.1, 12.24, 7.25) km s^-1 <cit.>. We use the Galactic Cartesian coordinates and velocities from the GALAH DR3 value-added catalog (VAC), which is based on astrometry from Gaia EDR3 and radial velocities determined from the GALAH spectra <cit.>.
In Figure <ref>, we demonstrate dwarfs from LAMOST and GALAH in the Kiel diagram, and the [α/Fe][The [α/Fe] from both the LAMOST and GALAH catalog are defined as an error-weighted mean of [Mg/Fe], [Si/Fe], [Ca/Fe] and [Ti/Fe].]-[O/Fe] space to inspect their general distributions.
The Kiel diagram in Figure <ref>(a) shows that most of the LAMOST dwarfs are cooler than 5700 K, while the GALAH dwarfs in Figure <ref>(b) cover a wider range of T_ eff (4500 - 7000 K). It should be noted that we do not apply any explicit cut-off at the high-temperature side for the LAMOST sample; the effective upper limit simply reflects where reliable oxygen abundances can be measured by <cit.>.
The [α/Fe]-[O/Fe] diagrams in Figure <ref>(c-d) show that [O/Fe] generally increases with increasing [α/Fe]; however, [O/Fe] spreads widely at a given [α/Fe] value. The spread is relatively large for low-α stars (especially for the GALAH sample), ranging from -0.4 to +0.6.
Metal Mixtures for the GS98 Solar Mixture, the α-Enhanced Mixture, and the O-Enhanced Mixture.
Element log N_⊙ log N_α EM log N_ OEM
C 8.52 8.52 8.52
N 7.92 7.92 7.92
O 8.83 8.83+[α/Fe] 8.83+[O/Fe]
F 4.56 4.56 4.56
Ne 8.08 8.08+[α/Fe] 8.08+[α/Fe]
Na 6.33 6.33 6.33
Mg 7.58 7.58+[α/Fe] 7.58+[α/Fe]
Al 6.47 6.47 6.47
Si 7.55 7.55+[α/Fe] 7.55+[α/Fe]
P 5.45 5.45 5.45
S 7.33 7.33+[α/Fe] 7.33+[α/Fe]
Cl 5.50 5.50 5.50
Ar 6.40 6.40 6.40
K 5.12 5.12 5.12
Ca 6.36 6.36+[α/Fe] 6.36+[α/Fe]
Sc 3.17 3.17 3.17
Ti 5.02 5.02+[α/Fe] 5.02+[α/Fe]
V 4.00 4.00 4.00
Cr 5.67 5.67 5.67
Mn 5.39 5.39 5.39
Fe 7.50 7.50 7.50
Co 4.92 4.92 4.92
Ni 6.25 6.25 6.25
Grid of Evolutionary Models with Two Metal Mixture Patterns.
Metal-mixture [O/Fe] [α/Fe]
(dex) (dex)
O-enhanced mixture -0.2 0
0.2 0
0.4 0
-0.1 0.1
0.3 0.1
0.5 0.1
0 0.2
0.4 0.2
0.2 0.3
0.4 0.3
0.5 0.3
0.6 0.3
α-enhanced mixture 0 0
0.1 0.1
0.2 0.2
0.3 0.3
Z Values of Fixed [Fe/H] with Two Metal Mixture Patterns.
[Fe/H] [α/Fe] [O/Fe] Z
(dex) (dex) (dex) (dex)
-1.0 0.1 0.1 0.0020
-1.0 0.1 0.5 0.0036
-0.8 0.1 0.1 0.0032
-0.8 0.1 0.5 0.0056
-0.6 0.1 0.1 0.0051
-0.6 0.1 0.5 0.0089
-0.4 0.1 0.1 0.0080
-0.4 0.1 0.5 0.0139
-0.2 0.1 0.1 0.0126
-0.2 0.1 0.5 0.0217
0 0.1 0.1 0.0197
0 0.1 0.5 0.0337
Atmosphere Parameters and Chemical Abundance for the Example Stars from LAMOST
Star T_ eff [Fe/H] Luminosity [α/Fe] [O/Fe]
sobject_id (K) (dex) (L_⊙) (dex) (dex)
20140313-HD145243N315530B-01-084 5619±22 -0.30±0.04 0.74±0.02 0.06±0.02 0.46±0.09
20141112-HD083415N451147V01-03-165 5652±24 -0.15±0.04 1.57±0.03 0.15±0.02 -0.02±0.08
§ STELLAR MODELS
§.§ Input Physics
We construct a stellar model grid using the Modules for Experiments in Stellar Astrophysics (MESA) code <cit.>. The versions of MESA and MESA SDK we used are Revision 12115 and Version 20.3.1, respectively.
The EOS (Equation of State) tables in MESA are a blend of OPAL <cit.>, SCVH <cit.>, PTEH <cit.>, HELM <cit.>, and PC <cit.> EOS tables. Nuclear reaction rates are a combination of rates from NACRE <cit.>, JINA REACLIB <cit.>, plus additional tabulated weak reaction rates <cit.>. Screening is included via the prescription of <cit.>. Thermal neutrino loss rates are from <cit.>. The helium enrichment law is calibrated with initial abundances of helium and heavy elements of the solar model given by <cit.>, and it results in Y = 0.248 + 1.3324 Z. The mixing-length parameter α_ MLT is fixed to 1.82. Microscopic diffusion and gravitational settling of elements are necessary for stellar models of low-mass stars, which will lead to a modification to the surface abundances and main-sequence (MS) lifetimes <cit.>. Therefore, we include diffusion and gravitational settling using the formulation of <cit.>. We use the solar mixture GS98 from <cit.>. The opacity tables are OPAL high-temperature opacities [<http://opalopacity.llnl.gov/new.html>] supplemented by the low-temperature opacities <cit.>.
We customize metal mixtures by introducing two enhancement factors, one for oxygen and one for all other α-elements (i.e., Ne, Mg, Si, S, Ca, and Ti). The two factors are applied in the same way as <cit.> to vary the number densities of the elements (log N) based on the GS98 solar mixture, as presented in Table <ref>.
We make a number of opacity tables by varying two enhancement factors according to the ranges of [α/Fe] and [O/Fe] values of the star sample. The enhancement values are shown in Table <ref>. For the mixtures with the same oxygen and α-elements enhancement factors, we refer to them as α-enhanced mixture (αEM), otherwise, as O-enhanced mixture (OEM).
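The construction of a customized mixture then amounts to shifting the GS98 log N values of Table <ref> by the two factors; a minimal sketch is given below (only the enhanced elements are listed, since all others keep their GS98 values):

# GS98 solar log N for the enhanced elements (Table 1).
GS98 = {'O': 8.83, 'Ne': 8.08, 'Mg': 7.58, 'Si': 7.55,
        'S': 7.33, 'Ca': 6.36, 'Ti': 5.02}
ALPHA = ('Ne', 'Mg', 'Si', 'S', 'Ca', 'Ti')

def enhanced_mixture(alpha_fe, o_fe=None):
    """Return log N with oxygen shifted by [O/Fe] and the remaining
    alpha elements by [alpha/Fe]; o_fe=None gives the alpha-enhanced case."""
    o_fe = alpha_fe if o_fe is None else o_fe
    mix = dict(GS98)
    mix['O'] += o_fe
    for el in ALPHA:
        mix[el] += alpha_fe
    return mix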
c c c c c c c c c c[ht!]
Fundamental Parameters and Chemical Abundance for the Example Stars from GALAH
Star T_ eff [Fe/H] Luminosity [α/Fe] [O/Fe] Mass_α EM Mass_ Buder2021 Age_α EM Age_ Buder2021
sobject_id (K) (dex) (L_⊙) (dex) (dex) (M_⊙) (M_⊙) (Gyr) (Gyr)
171230005802396 6096±76 -0.23±0.06 2.26±0.07 0±0.02 0.02±0.08 1.06±0.03 1.03±0.04 6.08±1.01 6.46±1.17
160529003401378 5846±76 -0.42±0.06 1.67±0.03 0.31±0.03 0.34±0.09 0.97±0.03 0.96±0.03 9.53±1.26 10.04±1.39
* The masses (Mass_ Buder2021) and ages (Age_ Buder2021) of the two example stars from the GALAH value-added catalog <cit.> are calculated based on PARSEC stellar isochrones (the PAdova and TRieste Stellar Evolution Code) <cit.>.
§.§ Grid Computations
We establish stellar model grids that include various metal-mixture patterns as indicated in Table <ref>. The mass range is from 0.6 to 1.2 M_⊙ with a grid step of 0.02 M_⊙. Input [Fe/H] values range from -1.20 to +0.46 dex with a grid step of 0.02 dex. The computation starts at the Hayashi line and terminates at the end of the main sequence, when the central hydrogen is exhausted (mass fraction of center hydrogen goes below 10^-12).
The inlist file (for MESA) utilized in the computation of our stellar models is available on Zenodo: [doi:10.5281/zenodo.7866625]https://doi.org/10.5281/zenodo.7866625
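For reference, the size of the grid implied by these choices can be enumerated as follows; whether every mixture of Table <ref> is computed at every mass and [Fe/H] node is an assumption of this sketch:

import numpy as np
from itertools import product

masses = np.round(np.arange(0.60, 1.20 + 1e-9, 0.02), 2)   # 31 mass values
fe_h = np.round(np.arange(-1.20, 0.46 + 1e-9, 0.02), 2)    # 84 [Fe/H] values
# ([O/Fe], [alpha/Fe]) pairs of Table 2: 12 OEM and 4 alpha-enhanced mixtures.
mixtures = [(-0.2, 0.0), (0.2, 0.0), (0.4, 0.0), (-0.1, 0.1),
            (0.3, 0.1), (0.5, 0.1), (0.0, 0.2), (0.4, 0.2),
            (0.2, 0.3), (0.4, 0.3), (0.5, 0.3), (0.6, 0.3),
            (0.0, 0.0), (0.1, 0.1), (0.2, 0.2), (0.3, 0.3)]

grid = list(product(masses, fe_h, mixtures))
print(len(grid))   # 31 * 84 * 16 = 41664 evolutionary tracks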
To explicate the effect of oxygen enhancement on the evolutionary tracks, we provide an exposition of representative evolutionary tracks in Figure <ref>. The corresponding values of Z are listed in Table <ref>. At fixed [Fe/H], the variation of [O/Fe] would influence opacity, which could influence the energy transfer efficiency and the thermal structure.
We find that the larger [O/Fe] leads to higher opacity at input [Fe/H] ≤ -0.2, and shifts the evolutionary tracks to lower T_ eff.
As seen in Figure <ref>, at [Fe/H] ≤ -0.2, O-rich models are generally cooler than the α-enhanced models at given input [Fe/H], leading to higher modeling-determined masses (smaller ages) for a given position on the HR diagram (left panel of Figure <ref>). However, at input [Fe/H] = 0,
larger [O/Fe] leads to lower opacity, and shifts the evolutionary tracks to higher T_ eff.
The O-rich models are slightly hotter than the α-enhanced models.
Overall, at fixed mass, the T_ eff difference between the two models becomes significant with smaller [Fe/H].
In addition, we note that the 1.1 M_⊙ and 1.2 M_⊙ tracks of O-rich models show different behavior compared with the tracks of 0.7 ∼ 1.0 M_⊙. The O-rich models with 1.1 M_⊙ show a blue hook morphology at [Fe/H] ≤ -0.8, which enlarges the T_ eff difference between two models at this evolutionary phase. At 1.2 M_⊙, both models show a blue hook morphology at the end of main-sequence, and the T_ eff difference keeps approximately constant at [Fe/H] ≤ -0.6.
Figure <ref> presents the stellar evolution tracks of two example stars calculated with αEM and OEM models. Figure <ref>(a) presents the tracks of a star with observed [α/Fe] ∼ 0.1, [O/Fe] ∼ 0.5. Based on the αEM models (input [α/Fe] = 0.1, [O/Fe] = 0.1), we obtain the best-fit values of fundamental parameters for this star: mass = 0.87 ± 0.02 M_⊙, age = 8.69 ± 1.49 Gyr (the fitting method is described in detail in Section <ref>). Using the OEM models (input [α/Fe] = 0.1, [O/Fe] = 0.5), we estimate it to be a young star with mass = 0.90 ± 0.02 M_⊙, age = 5.68 ± 1.44 Gyr. The mean value of masses of OEM models ([O/Fe] = 0.5) inside the observational error box is larger than that of αEM models ([O/Fe] = 0.1), leading to smaller modeling-determined age for this star. Figure <ref>(b) shows the tracks of a star with observed [α/Fe] ∼ 0.2, [O/Fe] ∼ 0. We obtain a mass of 0.99 ± 0.01 M_⊙ and an age of 10.51 ± 0.60 Gyr for this star with αEM models (input [α/Fe] = 0.2, [O/Fe] = 0.2), and a mass of 0.98 ± 0.02 M_⊙ and an age of 11.34 ± 0.51 Gyr with OEM models (input [α/Fe] = 0.2, [O/Fe] = 0). As seen, the OEM models with input [O/Fe] = 0 are generally hotter than the αEM models ([O/Fe] = 0.2) at fixed mass and [Fe/H], leading to smaller modeling-determined mass and larger age for this star.
§.§ Fitting Method
We constrain stellar masses and ages using five observed quantities, i.e., T_ eff, luminosity, [Fe/H], [α/Fe], and [O/Fe]. Note that [O/Fe] is not used when estimating parameters with αEM models.
We follow the fitting method raised by <cit.>. According to the Bayes theorem, we compare model predictions with the corresponding observational properties D to calculate the posterior probability of each model M_i given the data D and prior information I,
p(M_i | D, I) = p(M_i | I) p(D | M_i, I) / p(D | I)
where p(M_i | I) represents the uniform prior probability for a specific model, and p(D | M_i, I) is the likelihood function:
p(D | M_i, I) = L(T_eff, [Fe/H], lum) = L_T_eff L_[Fe/H] L_lum
The p(D | I) in Equation <ref> is a normalization factor for the specific model probability:
p(D | I)=∑_j=1^N_m p(M_j| I) p(D | M_j, I)
where N_m is the total number of selected models. The uniform priors p(M_i | I) can be canceled, giving the simplified Equation (1) as:
p(M_i| D, I)=p(D | M_i, I)/∑_j=1^N_m p(D | M_j, I).
Then Equation <ref> is the probability distribution for the selected models with the most probable fundamental parameters.
As demonstrated in Figure <ref>, we fit a Gaussian function to the likelihood distribution of mass and age for each star.
The mean and standard deviation of the resulting Gaussian profile are then adopted as the central value and uncertainty of each fundamental parameter (mass and age) for each star.
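A schematic implementation of Equations (<ref>)-(<ref>) is shown below; it assumes independent Gaussian likelihoods for the three observables and uses posterior-weighted moments as a simple stand-in for the Gaussian fit described above:

import numpy as np

def gaussian_like(obs, err, model):
    return np.exp(-0.5 * ((obs - model) / err) ** 2)

def estimate_age(models, teff, e_teff, feh, e_feh, lum, e_lum):
    """models: record array with fields 'teff', 'feh', 'lum', and 'age'
    holding the grid points selected inside the observational error box."""
    like = (gaussian_like(teff, e_teff, models['teff'])
            * gaussian_like(feh, e_feh, models['feh'])
            * gaussian_like(lum, e_lum, models['lum']))
    p = like / like.sum()                        # normalized posterior probability
    age = np.sum(p * models['age'])              # central value of the age distribution
    err = np.sqrt(np.sum(p * (models['age'] - age) ** 2))
    return age, err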
To find the stars that locate near the edge of the model grid, we consider a 3-sigma error box (i.e., three times the observational error, depicted as a blue square in Figure <ref>) on the HR diagram and divide the error box into 100 bins. For a certain star, when there are more than 5 bins that do not contain any theoretical model (sampling rate < 95%), we flag the star with “edge effect”.
To assess the accuracy of our models and investigate potential model dependency in age and mass determination, we present a comparison of results obtained from our αEM models, OEM models, and the GALAH DR3 value-added catalog <cit.>. Figure <ref> shows the comparison of age and mass estimations for ∼4,000 GALAH stars, with age uncertainty of less than 30%, based on
αEM models, OEM models, and GALAH DR3 VAC <cit.>. The ages and masses of stars from GALAH DR3 VAC are calculated using the PARSEC (the PAdova and TRieste Stellar Evolution Code) release v1.2S + COLIBRI stellar isochrone <cit.>, which adopt a solar-scaled metal mixture, i.e., input [α/Fe] = 0. Figure <ref> illustrates that the one-to-one relation of the results is quite good for most stars. It is noteworthy that the adopted approach encompasses a flat prior on age with an age cap of 13.2 Gyr <cit.>. Consequently, the ages of the majority of stars from GALAH DR3 VAC are found to be younger than 12 Gyr (with masses larger than 0.8 M_⊙), which results in a relatively large dispersion of age differences, amounting to 12.4% for αEM models and 13.0% for OEM models.
Significant systematic differences are apparent between the PARSEC and the αEM models in Figure <ref>(a-b), with the former indicating 2.3% older age and 1.5% smaller mass than the latter.
These discrepancies could be attributed to differences in the input physics employed by the two models, such as the input [α/Fe] value, helium abundance, and mixing-length parameter.
In Figure <ref>(c-d), the PARSEC yields 5.5% older age and 1.9% smaller mass than the OEM models.
Compared with the αEM models, the OEM models demonstrate more pronounced systemic differences from PAESEC. These distinctions primarily arise from the consideration of O-enhancement in OEM models, leading to younger ages and higher masses.
In addition, a comparison of results obtained from our αEM models and the Yonsi–Yale <cit.> stellar isochrones have been shown in Figure <ref> in Appendix.
§ RESULTS
This work aims to determine the ages of dwarfs considering oxygen abundance and to study the chemical and kinematic properties of high-α and low-α populations in the Galactic disk. We give the masses and ages of 149,906 LAMOST dwarfs and 15,591 GALAH dwarfs with αEM models and OEM models. We remove ∼30% of the stars with sampling rate < 95%, located near the edge of the model grid. In addition, we remove ∼3% of the stars whose inferred ages are 2-sigma[For a certain star, age - 2*age_uncertainty > 13.8 Gyr.] larger than the universe age <cit.> due to their significant model systematic bias. Finally, we remove ∼35% of the stars whose relative age uncertainty is larger than 30 percent. After these cuts, we obtain the ages of 67,503 dwarfs from LAMOST with a median age uncertainty of ∼16%, and 4,006 dwarfs from GALAH with a median age uncertainty of ∼18%.
The age estimation of dwarf stars is inherently accompanied by considerable uncertainty, which can reach up to 30% within our sample. Furthermore, uncertainties (especially the systematic error) in atmosphere parameters can introduce biases in the age estimation. Consequently, a minority of stars in our sample exhibits ages that exceed the age of the universe. This occurrence is not uncommon, as even samples of subgiants with more precise age determinations have encountered analogous occurrences <cit.>.
§.§ Oxygen Effect on Age Determinations
§.§.§ Mock Data Test
Most of the stars in both the LAMOST and GALAH samples are distributed in a relatively narrow range of [Fe/H] (-0.5 dex - +0.5 dex).
To systematically investigate the effect of O-enhancement on age determinations in a wide range of T_ eff and [Fe/H], we apply a mock data test based on our grid of stellar models. For each set of stellar mode grids with fixed [Fe/H], [α/Fe], and [O/Fe] values, we draw random samples from the distributions of stellar evolution tracks in the H-R diagram.
We adopt observational errors of 0.05 dex for [Fe/H] and 30 K for T_ eff, and a fractional error of 2% for luminosity. Finally, we generate mock data of 0.15 million stars with age uncertainties of less than 30 percent.
Figure <ref>(a) shows the distribution of mock stars on the HR diagram. Figure <ref>(b-c) presents a comparison between mock data and observational data for T_ eff and [Fe/H] distributions. Comparing mock data with LAMOST or GALAH dwarfs, mock stars cover wider ranges of T_ eff (5000 K - 7000 K), and [Fe/H] (-1.0 dex - +0.4 dex). Therefore, the mock data is useful for statistical studies of oxygen effect on age determinations.
Figure <ref> shows a comparison between ages determined with αEM models (τ_α EM) and OEM models (τ_ OEM). The mock stars are grouped by their [Fe/H] and [O/α] values. The stars with [O/α] > 0 are hereafter referred to as high-O stars and the stars with [O/α] < 0 as low-O stars. Generally, high-O stars have younger ages based on OEM models, while low-O stars become older. The effect of oxygen enhancement on age determination is relatively significant for stars with [Fe/H] < -0.2. At [O/α] = -0.2, the mean fractional age difference ( (τ_ OEM - τ_α EM)/τ_α EM ) is 10.5% for metal-rich stars (-0.2 < [Fe/H] < 0.2), and 15.5% for relatively metal-poor stars (-1 < [Fe/H] < -0.2). The mean fractional age difference at [O/α] = 0.2 is -9.2% for metal-rich stars, and -16.5% for relatively metal-poor stars.
The largest fractional age difference comes from high-O stars with [O/α] = 0.4, which have a mean fractional age difference of -20.2% at -0.2 < [Fe/H] < 0.2, and -30.6% at -1 < [Fe/H] < -0.2.
We find clear age offsets that correlate to the [Fe/H] and [O/α] values. Increasing 0.2 dex in [O/α] will reduce the age estimates of metal-rich stars by ∼10%, and metal-poor stars by ∼15%.
Compared with the observational data, the mock data provide a sufficient number of stars at the metal-poor edge to clearly present the age differences at different [O/α] and [Fe/H] values.
§.§.§ Observational Data
Figure <ref> presents the fractional age differences between αEM and OEM models for observational (LAMOST and GALAH) and mock data. The overall average age offset (absolute value of age difference) of stars from LAMOST and GALAH is 8.9% and 8.6%, respectively. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼0.4 dex is ∼ -25%. The age offsets are relatively significant for metal-poor stars. The largest age differences are -33% to -42% for stars with [Fe/H] ≲ -0.6 dex and [O/α] ∼0.4 dex. For mock data, we note the trend of age offsets versus [Fe/H] is consistent with that of observational data. The age offsets of both samples increase significantly with decreasing metallicity at [Fe/H] ≳ -0.6. Interestingly, there is a slight increase in age offsets with decreasing metallicity at [Fe/H] < -0.6.
This trend of age offsets is consistent with the change of T_ eff difference as a function of [Fe/H] (shown in Figure <ref>), as discussed in Section <ref>.
§.§ Age-Abundance Relations
To trace the chemical evolution history of the Galactic disk, we hereby present the age-abundance relations of the LAMOST sample (consisting of 67,503 stars) and the GALAH sample (consisting of 4,006 stars) using the ages from OEM models. For each sample, we employ local nonparametric regression fitting (a LOESS model) to characterize the trends in these relations with enhanced clarity.
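For reproducibility, this smoothing can be performed with the LOWESS implementation in statsmodels; the span parameter frac below is an illustrative choice:

from statsmodels.nonparametric.smoothers_lowess import lowess

def age_abundance_trend(age, abundance, frac=0.3):
    # Returns (sorted age, smoothed abundance); frac sets the span of the local fits.
    fit = lowess(abundance, age, frac=frac)
    return fit[:, 0], fit[:, 1]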
Figure <ref> illustrates the results for the LAMOST sample. In Figure <ref>(a), a gradual decline in [Fe/H] is observed across the age range of ∼9 Gyr to ∼6.5 Gyr. This trend shows similarities to the metal-rich branch observed in young stars (age < 8 Gyr) as found by <cit.>, where the metallicity range of their metal-rich branch stars spans approximately -0.2 to +0.4. Notably, <cit.> also identifies a trend comparable to our findings, whereby their sample exhibits a [Fe/H] value of 0.4 at 8 Gyr, diminishing to around -0.2 at 6 Gyr. The "two-infall" chemical evolution model <cit.> predicts a process involving the infall of metal-poor gas commencing roughly 9.4 Gyr ago <cit.>. The observed trend of decreasing metallicity from 9 Gyr to 6.5 Gyr in our results may be related to this infalling metal-poor gas. Intriguingly, this "two-infall" model not only anticipates a decline in metallicity but also predicts an increase in the oxygen abundance, which is consistent with the trend illustrated in Figure <ref>(b): the sample stars from LAMOST exhibit an increase in [O/Fe] as the age decreases from 9 Gyr to 4 Gyr, indicating a slight enrichment of oxygen in the younger stellar population.
Figure <ref> presents the results for the GALAH sample. It is noteworthy that the GALAH stars display a decrease in [Fe/H] from ∼7.5 Gyr to 5 Gyr. Furthermore, the [O/Fe] of the GALAH stars exhibit a slight decrease with age ranging from ∼7.5 Gyr to 3 Gyr. The GALAH sample exhibits age-[Fe/H] and age-[O/Fe] trends similar to those observed in LAMOST; however, an overall slight temporal discrepancy can be observed. This incongruity may be ascribed to dissimilarities in sample composition or systematic differences in atmospheric parameters between the two survey datasets. The GALAH sample, on the whole, exhibits higher temperatures compared to LAMOST sample (5000 - 5700 K), indicating a relatively younger population. Furthermore, the determinations of [Fe/H] and [O/Fe] from GALAH are based on a non-LTE method <cit.>, which can also impact the observed trends.
In conclusion, the analysis of the LAMOST and GALAH samples reveals a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr, and a notable upward trend in [O/Fe] as the age decreases from 7.5–9 Gyr to 3–4 Gyr. These results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor, O-rich gas has gradually dominated star formation since 7.5–9 Gyr ago. As discussed in Section <ref>, oxygen has a unique origin, being primarily produced by CCSNe <cit.>. Therefore, the observed age-[O/Fe] trend plays a distinct role in characterizing the chemical evolution history of the Milky Way and in constraining chemical evolution models.
Neglecting to account for the independent enhancement of oxygen abundance in age determination would result in significant age biases, as discussed in Section <ref>. Such biases would obscure the age-[O/Fe] relation, as depicted in Figure <ref> in the appendix, where the rising trend of [O/Fe] with decreasing age remains imperceptible at age < 9 Gyr. Therefore, we suggest that considering the oxygen abundance independently in stellar models is crucial. This would aid in accurately characterizing the age-[O/Fe] relation and provide better constraints for Galactic chemical evolution models.
§ CONCLUSIONS
To determine the ages of dwarfs considering observed oxygen abundance, we construct a grid of stellar models which take into account oxygen abundance as an independent model input. We generate mock data with 0.15 million mock stars to systematically study the effect of oxygen abundance on age determination. Based on the α-enhanced models and O-enhanced models, we obtain the masses and ages of 67,503 stars from LAMOST and 4,006 stars from GALAH and analyze the chemical and kinematic properties of these stars combined with ages from O-enhanced models.
Our main conclusions are summarized as follows:
(1) The ages of high-O stars based on O-enhanced models are smaller compared with those determined with α-enhanced models, while low-O stars become older. We find clear age offsets that correlate to the [Fe/H] and [O/α] values. Varying 0.2 dex in [O/α] will alter the age estimates of metal-rich (-0.2 < [Fe/H] < 0.2) stars by ∼10%, and relatively metal-poor (-1 < [Fe/H] < -0.2) stars by ∼15%.
(2) The overall average age offset (absolute value of age difference) between α-enhanced models and O-enhanced models is
8.9% for LAMOST stars, and 8.6% for GALAH stars. Of the low-O stars with [Fe/H] < 0.1 dex and [O/α] ∼ -0.2 dex, many have fractional age differences of ≥ 10%, and even reach up to 27%. The mean fractional age difference of high-O stars with [O/α] ∼0.4 dex is ∼ -25%, and reach up to -33% to -42% at [Fe/H] ≲ -0.6 dex.
(3) Based on the LAMOST and GALAH samples, we observe a decreasing trend of [Fe/H] with age from 7.5–9 Gyr to 5–6.5 Gyr. Furthermore, the [O/Fe] of both samples increases with decreasing age from 7.5–9 Gyr to 3–4 Gyr, which indicates that the younger population of these stars is more O-rich. Our results agree with the prediction of the "two-infall" scenario and suggest that a metal-poor, O-rich gas has gradually dominated star formation since 7.5–9 Gyr ago.
We thank the anonymous referee for valuable comments and suggestions that have significantly improved the presentation of the manuscript. This work is based on data acquired through the Guoshoujing Telescope. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope; LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
This work used the data from the GALAH survey, which is based on observations made at the Anglo Australian Telescope, under programs A/2013B/13, A/2014A/25, A/2015A/19, A/2017A/18, and 2020B/23.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
This work is supported by National Key R&D Program of China No. 2019YFA0405503, the Joint Research Fund in Astronomy (U2031203,) under cooperative agreement between the National Natural Science Foundation of China (NSFC) and Chinese Academy of Sciences (CAS), and NSFC grants (12090040, 12090042). This work is partially supported by the CSST project, and the Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002). This paper has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (CartographY GA. 804752).
Figure <ref> depicts the age and mass determinations for ∼15,000 LAMOST stars (with [α/Fe] ∼ 0.1) and reveals a satisfactory correspondence between the αEM models and the YY isochrones <cit.>, as the dispersions of the relative age and mass differences between these two models are only 6.4% and 1.1%, respectively. However, slight systematic differences are visible in this result, as the YY isochrones yield ages that are on average 3.6% older and masses that differ by -0.4% relative to the αEM models.
|
http://arxiv.org/abs/2307.07454v1 | 20230714162724 | Generative adversarial networks for data-scarce spectral applications | [
"Juan José García-Esteban",
"Juan Carlos Cuevas",
"Jorge Bravo-Abad"
] | physics.optics | [
"physics.optics",
"cond-mat.other",
"cs.LG",
"physics.comp-ph"
] |
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC),
Universidad Autónoma de Madrid, E-28049 Madrid, Spain
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC),
Universidad Autónoma de Madrid, E-28049 Madrid, Spain
Departamento de Física Teórica de la Materia Condensada and Condensed Matter Physics Center (IFIMAC),
Universidad Autónoma de Madrid, E-28049 Madrid, Spain
Generative adversarial networks (GANs) are one of the most robust and versatile techniques in the field of generative
artificial intelligence. In this work, we report on an application of GANs in the domain of synthetic spectral data
generation, offering a solution to the scarcity of data found in various scientific contexts. We demonstrate the proposed
approach by applying it to an illustrative problem within the realm of near-field radiative heat transfer involving a
multilayered hyperbolic metamaterial. We find that a successful generation of spectral data requires two modifications
to conventional GANs: (i) the introduction of Wasserstein GANs (WGANs) to avoid mode collapse, and (ii)
the conditioning of WGANs to obtain accurate labels for the generated data. We show that a simple feed-forward neural network (FFNN), when augmented
with data generated by a CWGAN, enhances significantly its performance under conditions of limited data availability,
demonstrating the intrinsic value of CWGAN data augmentation beyond simply providing larger datasets. In addition, we
show that CWGANs can act as a surrogate model with improved performance in the low-data regime with respect
to simple FFNNs. Overall, this work highlights the potential of generative machine learning algorithms in scientific
applications beyond image generation and optimization.
Generative adversarial networks for data-scarce spectral applications
J. Bravo-Abad
August 12, 2023
=====================================================================
§ INTRODUCTION
Machine learning, a rapidly expanding area within computer science, focuses on advancing the foundations and technology
that enables machines to learn from data <cit.>. Deep
learning, on the other hand, is a subset of machine learning techniques that uses artificial neural networks (ANNs) to
model and solve complex data-driven problems, such as image and speech recognition <cit.>, natural
language processing <cit.>, and autonomous driving <cit.>, among many others. With the current explosion of data and the rapid development of improved hardware and
algorithms, machine learning and deep learning are becoming crucial tools in many industries, including healthcare,
finance, and manufacturing.
Motivated by this success, machine learning and deep learning techniques are attracting increasing attention from a
variety of scientific disciplines beyond computer science, revolutionizing traditional approaches to the modeling and
analysis of data-driven scientific problems. In physics, these techniques are employed to tackle complex problems <cit.>,
including the representation of quantum many-body wave functions <cit.>, the discovery and identification of phase transitions in condensed-matter systems <cit.>, the solution of statistical problems <cit.>, the development of novel quantum information technologies <cit.>, the modeling of gravitational waves <cit.>, and
the design of nanophotonic devices with novel or improved functionalities <cit.>. Machine learning algorithms
have been effectively used for the accelerated discovery and design of new materials and molecules in the fields of materials science and
chemistry <cit.>, being instrumental in molecular dynamics simulations <cit.>, in predicting chemical reactions <cit.>, and in modeling the quantum mechanical energies of molecules <cit.>. In the realm
of biology, the applications of machine learning and deep learning are also vast, including breakthroughs in gene expression prediction tasks, prediction of micro-RNAs targets, and novel single-cell methods <cit.>. Overall, despite significant challenges, such as the interpretability and
transferability, machine learning and deep learning are playing an increasingly pivotal role in the advance of a
broad variety of scientific research methodologies.
In this context, generative adversarial networks (GANs), a powerful subset of generative machine learning, have emerged as a versatile
tool for creating new data instances that closely resemble a given training set <cit.>. This innovative
paradigm involves a two-player adversarial setup where a generative ANN strives to produce data instances that are
indistinguishable from the training set, while a discriminative network attempts to distinguish between the instances
generated by the generative network and the real data <cit.>. The competing nature of these two networks
drives the generative network to generate increasingly realistic data, pushing the boundaries of what is achievable
with generative models. This technology has been instrumental across a broad range of scientific disciplines,
including physics, chemistry, and biology. In physics, GANs have been used for simulating complex systems and predicting
outcomes of experiments, with examples in high energy physics <cit.>, condensed matter physics <cit.>, nanophotonics <cit.>,
and cosmology <cit.>. In the field of chemistry, GANs have been harnessed to generate
novel chemical structures and predict their properties <cit.>, thereby accelerating the process of drug discovery and materials
design <cit.>. In biology, GANs have been employed in a variety of tasks, including protein engineering <cit.> and the generation of biological imaging data <cit.>. This myriad of applications demonstrates the significant potential of GANs in transforming scientific
research by providing a powerful tool for hypothesis generation, experimental design, and data augmentation where
empirical data is scarce or expensive to obtain. Despite their significant success in image generation, the use of GANs has been mostly limited to this area. It would be highly desirable to see their application more widely spread in the generation of scientific numerical data.
In this work, we introduce a novel application of GANs for synthetic spectral data generation. This offers a solution
to the data scarcity found in scientific contexts where collecting a significant amount of spectral signals is critical
for a subsequent application of data-driven approaches. Such a scenario is common across a wide range of fields,
including physics, chemistry, astronomy, biology, medicine, and geology. Here, we particularly focus
on an illustrative problem in the research area of near-field radiative heat transfer, involving a multilayered
hyperbolic metamaterial. We explore the use of a Conditional Wasserstein GAN (CWGAN) for data augmentation and
investigate its impact on the predictive capabilities of a feed-forward neural network (FFNN). We find that the successful production of spectral data requires two main changes to standard GANs. Firstly, the implementation of Wasserstein GANs (WGANs) is necessary to counteract mode collapse, and secondly, these WGANs need to be conditioned to yield accurate labels for the generated data. We demonstrate that a simple FFNN, when augmented with data produced by a CWGAN, notably improves its performance under conditions of data scarcity. This underscores the intrinsic value of CWGAN data augmentation, not just as a means to expand datasets. Furthermore, we illustrate that CWGANs have the ability to serve as efficient surrogate model in low-data regimes. Overall, this research work contributes to highlighting generative AI algorithms'
potential in applications extending beyond the conventional realm of image generation. We also anticipate that our
findings will contribute to advancing the understanding and application of generative AI algorithms in data-limited
scientific contexts.
This work is organized as follows. In Section <ref>, we review the fundamentals of the primary generative adversarial frameworks underlying this work. Section <ref> discusses the basic principles of the particular physical problem we utilize to exemplify our approach. In Section <ref>, we present and discuss the results obtained from implementing the generative adversarial method detailed in Section 2 to the specific example problem outlined in Section 3. Finally, in Section <ref> we sum up the conclusions of this work.
§ GENERATIVE ADVERSARIAL ROUTE TO SYNTHETIC SPECTRAL DATA GENERATION
In this Section, we review the basics of the three main generative adversarial frameworks that form the basis of this
work, namely, GANs, Wasserstein GANs (WGANs) and Conditional WGANs (CWGANs). Here, we assume that the origin of the
real spectra used in these approaches is completely general, coming from any physical, chemical, or biological
process (in the following Sections we discuss the application to a particular problem within the realm of near-field
radiative heat transfer). Figure <ref> shows a schematic representation of the underlying architecture of a
general GAN algorithm. It comprises two main parts: a generator network and a discriminator network (represented as
orange and blue rectangles, respectively, in Fig. <ref>). The generator and the discriminator are two
interconnected networks in the GAN system. The generator generally begins with random noise (z in Fig. <ref>) and
uses it to create new data samples (new spectra in our application). On the other hand, the discriminator takes these spectra
and calculates the likelihood that each one originates from the actual training data set. The two networks have competing
objectives. The generator's goal is to create spectra that perfectly mirror the distribution of the training data.
In doing so, it aims at generating spectral data so convincingly authentic that the discriminator cannot tell it apart
from the real training data. Meanwhile, the discriminator aims at distinguishing between the actual training data and
the data fabricated by the generator. Both networks are trained together in a competition until a Nash-type equilibrium
is reached and the training process ends <cit.>. At that stage, the generator is producing spectra that
the discriminator can no longer reliably classify as `real' or `fake', signaling the end of the training process.
Mathematically, the GAN configuration can be formulated as a minimization problem of the generator and discriminator
loss functions, suitably written in terms of differentiable functions representing the discriminator and the generator
network models, D( x; θ_d) and G( z; θ_g), respectively (θ_d and
θ_g are the parameters of the corresponding ANNs —in what follows, for the sake of clarity, we do not include
this dependence in the equations). We start by focusing on the loss function of the discriminator, L_D(G,D), which can
be written as <cit.>
L_D(G,D) = - 𝔼_ x∼ p_data( x)[log(D( x))] -
𝔼_ z∼ p_z( z)[log(1-D(G(z)))] ,
where 𝔼 stands for expectation over either the training samples, x, or the input noise variables,
z (characterized respectively by a probability distribution p_data( x) and a prior distribution
p_z( z)). From Eq. (<ref>), and considering that D outputs a single scalar representing a probability,
we can see that the training of the discriminator is trying to minimize the likelihood of mistaking a real sample for a
fake one or a fake sample for a real one (first and second terms, respectively, of the right-hand side of
Eq. (<ref>)).
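As an illustrative sketch (not code from this work), the discriminator loss above can be written in PyTorch as follows; the module names D and G, the batch handling, and the numerical guard are our own assumptions.

```python
import torch

def discriminator_loss(D, G, real_spectra, z):
    """Minimal sketch of the loss above: -E[log D(x)] - E[log(1 - D(G(z)))].

    Assumes D outputs probabilities in (0, 1) (e.g., via a sigmoid) and
    G maps noise z to generated spectra. Names are illustrative.
    """
    eps = 1e-8                         # numerical guard for the logarithms
    d_real = D(real_spectra)           # D(x) on training samples
    d_fake = D(G(z).detach())          # D(G(z)); generator is frozen here
    return -(torch.log(d_real + eps).mean()
             + torch.log(1.0 - d_fake + eps).mean())
```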
However, there are significant performance issues associated with this original choice of the loss function L_D(G,D),
including the difficulty to reach the above-mentioned Nash equilibrium state (each network is updated independently,
and given the competitive nature of the generator and the discriminator, in general, there is no clear point to stop
the training) or the so-called mode collapse (arising when a network fails to generalize accurately to all regions
of the training data distribution). To address these issues, a different strategy to train GANs was introduced, the
so-called Wasserstein Generative Adversarial Networks (WGANs) <cit.>. One of the main
modifications of WGANs with respect to the original GAN architecture is the presence of a critic model, C( x),
instead of the discriminator model. Importantly, C( x) outputs a score instead of a probability, which, in turn,
allows us to define a new loss function for the critic, L_C(G,C), given by
L_C(G,C) = -𝔼_x∼ p_data( x)[C( x)] +
𝔼_z∼ p_z( z)[C(G( z))] .
This loss function summarizes well some of the main advantages of the WGANs. WGANs help address the training challenges
of GANs by using the Wasserstein distance (or Earth Mover's distance) as the loss function instead of the Jensen-Shannon
divergence used in original GANs <cit.>. In addition, it provides a meaningful loss metric, i.e., the
value of the critic in WGANs provides a meaningful measure of the distance between the real and generated data
distributions. This is unlike the original GANs where the discriminator's output does not correlate well with the quality
of the generated samples.
A pivotal aspect of WGANs is the need for the Critic to operate within the set of the so-called 1-Lipschitz
functions, a critical component of the model <cit.>. Lipschitz functions are mathematical functions
possessing a property where there exists a real-valued constant such that, for every pair of points, the absolute
difference in function values can be bounded by this constant times the absolute difference of input values. When
this constant is 1, the functions are known as 1-Lipschitz. The 1-Lipschitz constraint is crucial as it bounds how
much a function's output can change with small variations in input, ensuring the function does not change too abruptly.
In WGANs, this is vital to ensure that the Critic provides meaningful and stable gradients for the generator to learn from,
facilitating a more reliable learning process. Originally, weight clipping was proposed to enforce this 1-Lipschitz
condition. However, this sometimes resulted in convergence failure <cit.>. To counter these issues,
a more robust technique known as the gradient penalty was introduced. This method involves adding a loss term
to maintain the L2 norm (a measure of the vector length of parameters or weights) of the Critic close to a value of
1 <cit.>. This approach assists in keeping the Critic's function within the 1-Lipschitz constraint,
enhancing the stability and performance of WGANs. Incorporating these improvements, the full loss for the critic in a
WGAN now reads
L_C(G,C) = -𝔼_ x∼ p_data( x)[C( x)] +
𝔼_ z∼ p_z( z)[C(G( z))] +
λ 𝔼_x̂∼ p_x̂[( ||∇_x̂C(x̂)||_2-1)^2 ] ,
where ||·||_2 is the L2 norm and λ is a weight parameter for the gradient penalty
(throughout this work we assume λ = 10, as pointed out in Ref. <cit.>), and x̂ = x +
α(G(z)-x) is an interpolated point between a real sample and generated sample on which to calculate the gradient,
with α∈ [0,1]. As a final remark, proper training of the WGAN requires the critic to be trained ahead of the
generator, so that for each training step of the generator the critic is updated n_ train times
(following <cit.>, we chose n_ train = 5 in all our models).
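The following is a minimal PyTorch sketch of one critic update implementing the full loss above, including the gradient penalty on interpolated samples; in training it would be repeated n_train times per generator step. Function and variable names are our own illustrative assumptions.

```python
import torch

def critic_loss_wgan_gp(C, G, real, z, lam=10.0):
    """Sketch of the critic loss with gradient penalty (lam = lambda).

    C and G are assumed torch modules; real has shape (batch, n_points).
    """
    fake = G(z).detach()
    # Wasserstein terms: -E[C(x)] + E[C(G(z))]
    loss = -C(real).mean() + C(fake).mean()
    # Gradient penalty on interpolates x_hat = x + alpha * (G(z) - x)
    alpha = torch.rand(real.size(0), 1, device=real.device)
    x_hat = (real + alpha * (fake - real)).requires_grad_(True)
    grads = torch.autograd.grad(C(x_hat).sum(), x_hat, create_graph=True)[0]
    penalty = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
    return loss + lam * penalty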
As for the generator loss function, L_G(G,C), an important aspect to realize is that for synthetic spectral data
generation our focus is on a regression problem. This implies that we need a greater control over the output than
that obtained just by acquiring a random but realistic sample. It is therefore crucial to ensure that the generated
data accurately corresponds to the correct system parameters that yield that response. To achieve this, we need to
condition the WGAN, leading to the creation of a Conditional WGAN (CWGAN) <cit.>. In this work,
we will implement that by adding an extra loss term quantifying the Mean Absolute Error (MAE) between the conditioned
generated example and the training example corresponding to the ground truth of the condition. Accounting for
this conditioning, L_G(G,C) can be expressed as
L_G(G,C) = 𝔼_ z∼ p_z( z), x∼ p_data( x)[ | x - G( z| W)| ] -
𝔼_ z∼ p_z( z)[C(G( z))] ,
where x is the training example corresponding to the system parameters W, and G( z| W)
is a generated example conditioned on the same parameters W. The first term in the r.h.s. of Eq. (<ref>)
corresponds to the above discussed conditioning procedure, while the second term is associated to the coupling of the
generator and the critic.
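A minimal sketch of this conditional generator loss, assuming the same illustrative PyTorch conventions as above, is given below; the critic is assumed to take both a spectrum and the condition, as described later for our architecture.

```python
def generator_loss_cwgan(C, G, real, W):
    """Sketch of the conditional generator loss: MAE between the
    conditioned generated spectrum and its ground truth, minus the
    critic score of the generated sample.

    W holds the system parameters on which the generator is conditioned;
    dropout inside G supplies the stochasticity in place of an explicit
    noise input z (see the architecture description below).
    """
    fake = G(W)                           # G(z | W)
    mae = (real - fake).abs().mean()      # E[|x - G(z|W)|]
    return mae - C(fake, W).mean()        # - E[C(G(z))]
```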
§ ILLUSTRATIVE PROBLEM: NEAR-FIELD RADIATIVE HEAT TRANSFER SPECTRA IN MULTILAYER HYPERBOLIC METAMATERIALS
In this Section we provide an overview of the fundamentals of the specific problem we use to illustrate the proposed
approach. We have chosen a physical problem in the context of near-field radiative heat transfer involving multilayer
hyperbolic metamaterials <cit.>. Despite the specific character of this class of systems, the
chosen problem can be considered both representative of the types of problems that our approach can address effectively
and complex enough to showcase the versatility of our method.
One of the major advances in recent years in the field of thermal radiation has been the experimental confirmation
that the limit set by Stefan-Boltzmann's law for the radiative heat transfer between two bodies can be largely overcome
by bringing them sufficiently close <cit.>. This phenomenon is possible because in the near-field regime,
i.e., when the separation between two bodies is smaller than the thermal wavelength λ_ Th (∼10 μm
at room temperature), radiative heat can also be transferred via evanescent waves (or photon tunneling). This new
contribution is not taken into account in Stefan-Boltzmann's law and dominates the near-field radiative heat transfer
(NFRHT) for sufficiently small gaps or separations <cit.>. Among the different
strategies that have been recently proposed to further enhance NFRHT, one of the most popular ideas is based on the use
of multiple surface modes that can naturally appear in multilayer structures. In this regard, a lot of attention has
been devoted to multilayer systems where dielectric and metallic layers are alternated to give rise to the so-called
hyperbolic metamaterials <cit.>. The hybridization of surface modes appearing in different
metal-dielectric interfaces have indeed been shown to lead to a great enhancement of the NFRHT, as compared to the case
of two infinite parallel plates <cit.>.
Following Ref. <cit.>, we consider here the radiative heat transfer between two identical multilayer
structures separated by a gap d_0, as shown in Fig. <ref>(a). Each body contains N total layers alternating
between a metallic layer with a permittivity ϵ_m and a lossless dielectric layer of permittivity
ϵ_d. The thickness of the layer i is denoted by d_i and it can take any value within a given
range (to be specified below). While the dielectric layers will be set to vacuum (ϵ_d =1), the
metallic layers will be described by a permittivity given by a Drude model: ϵ_m(ω) =
ϵ_∞ - ω^2_p/[ω (ω + i γ)], where ϵ_∞ is the permittivity at
infinite frequency, ω_p is the plasma frequency, and γ the damping rate. From now on, we set
ϵ_∞ = 1, ω_p = 2.5 × 10^14 rad/s, and γ = 1 × 10^12 rad/s (these
parameters provide a surface plasmon frequency similar to the surface phonon-polariton frequency of the interface
between SiC and vacuum).
We describe the radiative heat transfer within the framework of fluctuational electrodynamics
<cit.>, particularly focusing on the near-field regime. In this regime, the radiative heat transfer
is dominated by TM- or p-polarized evanescent waves and the heat transfer coefficient (HTC) between the two bodies,
i.e., the linear radiative thermal conductance per unit of area, is given by <cit.>
h = ∂/∂ T∫^∞_0 dω/2π Θ(ω, T)
∫^∞_ω/cdk/2π k τ_p(ω, k) ,
where T is temperature, Θ(ω, T)= ħω/ (e^ħω/ k_ B T -1) is the mean thermal
energy of a mode of frequency ω, k is the magnitude of the wave vector parallel to the surface planes, and
τ_p(ω, k) is the transmission (between 0 and 1) of the p-polarized evanescent modes given by
τ_p(ω, k) = 4 [ Im{ r_p(ω,k) }]^2 e^-2q_0 d_0/| 1 - r_p(ω,k)^2 e^-2 q_0 d_0 |^2 .
Here, r_p(ω,k) is the Fresnel reflection coefficient of the p-polarized evanescent waves from the vacuum
to one of the bodies and q_0 = √(k^2 - ω^2/c^2) (ω/c < k) is the wave vector component normal to
the layers in vacuum. The Fresnel coefficient needs to be computed numerically and we have done it by using the
scattering matrix method described in Ref. <cit.>. In our numerical calculations of the HTC we also
took into account the contribution of s-polarized modes, but it turns out to be negligible for the gap sizes explored
in this work.
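As a rough numerical sketch of these expressions, the Drude permittivity, the transmission above, and the temperature derivative of Θ can be coded as below; the Fresnel coefficient r_p must still be supplied by a scattering-matrix solver, which we do not reproduce, and all names are our own.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
kB   = 1.380649e-23      # J / K
c    = 2.99792458e8      # m / s

def eps_drude(omega, eps_inf=1.0, omega_p=2.5e14, gamma=1.0e12):
    """Drude permittivity of the metallic layers (parameters of the text)."""
    return eps_inf - omega_p**2 / (omega * (omega + 1j * gamma))

def tau_p(r_p, omega, k, d0):
    """Transmission of p-polarized evanescent modes; r_p comes from an
    external scattering-matrix calculation of the multilayer."""
    q0 = np.sqrt(k**2 - (omega / c)**2)       # normal wave vector in vacuum
    e = np.exp(-2.0 * q0 * d0)
    return 4.0 * np.imag(r_p)**2 * e / np.abs(1.0 - r_p**2 * e)**2

def dTheta_dT(omega, T):
    """d(Theta)/dT, written in an overflow-safe form."""
    x = hbar * omega / (kB * T)
    return kB * x**2 * np.exp(-x) / (1.0 - np.exp(-x))**2
```

The HTC then follows by nested quadrature (e.g., np.trapz) of dTheta_dT(ω, T) τ_p(ω, k) k over the k and ω grids.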
Let us briefly recall that, as explained in Ref. <cit.>, the interest in the NFRHT in these multilayer
structures resides in the fact that the heat exchange in this regime is dominated by surfaces modes that can be shaped
by playing with the layer thicknesses. In the case of two parallel plates made of a Drude metal, the NFRHT is dominated
by the two cavity surface modes resulting from the hybridization of the surface plasmon polaritons (SPPs) of the two
metal-vacuum interfaces <cit.>. These two cavity modes give rise to two near-unity lines in the
transmission function τ_p(ω, k). Upon introducing more internal layers with appropriate thicknesses, one can
have NFRHT contributions from surface states at multiple surfaces, as we illustrate in Fig. <ref>(b) for the case of N=8
layers (4 metallic and 4 dielectric layers), separated by a vacuum gap of d_0 = 10 nm at T=300 K. As shown in
Ref. <cit.>, the contribution of these additional surface states originating from internal layers can
lead to a great enhancement of the NFRHT as compared to the bulk system (two parallel plates) in a wide range of gap
values.
Of special interest for this work is the spectral HTC, h_ω, defined as the HTC per unit of frequency: h =
∫^∞_0 h_ω dω. To create the initial dataset of real spectra (which later on will be augmented
by our generative adversarial approach), we apply the above-described theoretical framework to compute a total of 6,561
h_ω spectra. The thicknesses d_i of each layer were varied between 5 and 20 nm, and every spectrum contains 200
frequency points in the range ω∈ [0.3, 3] × 10^14 rad/s. As discussed in the next Section, that dataset
of spectra will be split in different proportions to become training and test sets. Figure <ref>(c) shows
several representative samples of h_ω spectra, corresponding to the thickness combinations listed
in Table 1.
The spectra displayed in Fig. <ref>(c) show the broad variety of spectral features that can be obtained
from the studied system (from double broad peaks with very narrow resonances in between, to single narrow peaks, or
to two resonant peaks separated by a gap). This set of spectra essentially serves as a comprehensive blueprint for
the whole adversarial approach, guiding it on the characteristics and features that should be exhibited in the
synthetic data. Hence, this allows the proposed approach to capture a wider range of underlying patterns and
relationships, which in turn allows it to generate a more realistic and diverse array of synthetic spectral data.
§ RESULTS AND DISCUSSION
We proceed in this Section to report on the results obtained when applying the generative adversarial approach
summarized in Section <ref> to the specific illustrative problem described in Section <ref>.
We begin by providing a strong quantitative justification of the necessity of using a trained CWGAN for this problem,
instead of a plain CGAN (Conditional GAN —the details of the specific architecture used in each case are provided below).
Figures <ref>(a) and <ref>(b) show, respectively, the ability to reproduce the training set of a
trained CGAN and a trained CWGAN projecting the data on two dimensions via a Principal Component
Analysis (PCA) <cit.>, which retains most of the training data structure due to its >90% reproduction
rate (RR).
The PCA calculations shown in Fig. <ref> were done as follows. First, we performed a singular value
decomposition (SVD) of the covariance matrix Σ: [U, S, V] = SVD(Σ), with Σ = (1/m) X^T X <cit.>. Here, U and V are unitary matrices,
S is the singular value matrix, m is the total number of data examples and X is the data
matrix, containing in each column one data example x. To obtain the reduced 2-dimensional (2D) representation of
the data, we calculated a reduced matrix U_reduced retaining the first 2 columns of the U matrix
obtained from the SVD, and use it as the projection matrix, x̂ = U^T_reducedX, where
x̂ is the 2-dimensional representation of the data. The reproduction rate (RR) of the whole 2D-PCA analysis
is then defined as the ratio of the first 2 singular values over all the N singular values obtained:
RR = ∑^2_i=1S_ii/∑^N_i=1S_ii
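A compact NumPy sketch of this projection and of the reproduction rate is given below; we take one pre-processed spectrum per row of X, so the covariance is written as XᵀX/m (equivalent to the text's convention up to transposition, which we state as an assumption).

```python
import numpy as np

def pca_2d(X):
    """2-D PCA projection and reproduction rate (RR).

    X: (m examples, n features), already mean-centred as in the text.
    """
    m = X.shape[0]
    Sigma = (X.T @ X) / m              # (n, n) covariance matrix
    U, S, Vt = np.linalg.svd(Sigma)    # singular value decomposition
    x_hat = X @ U[:, :2]               # reduced 2-D representation
    rr = S[:2].sum() / S.sum()         # reproduction rate
    return x_hat, rr
```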
The Conditional GAN and the Conditional WGAN share the same architectural design, the specifics of which are detailed
further on. Both networks underwent identical training conditions, with the complete spectral dataset partitioned into
80% for training and 20% for validation. The key distinguishing factor lies in their respective loss functions
(L_D(G,D) for CGAN and L_C(G,C) for CWGAN). The CGAN employs Eq. (<ref>) along with a generator loss which is the sum of the first term in the r.h.s. of Eq. (<ref>) (the conditional term) and the second term in the r.h.s. of Eq. (<ref>) with opposite sign. Meanwhile, the CWGAN utilizes the loss defined in Eq. (<ref>) for the generator and Eq. (<ref>) for the critic. Notably, as depicted in Fig. <ref>(a),
the CGAN manifests evident signs of mode collapse, rendering it unable to reproduce most examples beyond the principal
cluster structures in the PCA. In contrast, the CWGAN shows the capability of replicating
most of the complexities within the training set. These results can be considered as a novel illustration, in the context
of synthetic generation of spectral data, of the key role played by Wasserstein's loss function to create
more robust generative adversarial approaches.
Figure <ref> summarizes the specific CWGAN architecture we have found as most efficient for the studied problem.
The generator (left side of Fig. <ref>) is composed of 4 hidden fully connected layers of an increasing number of neurons
in each layer (we consider 50, 100, 150, and 200 neurons for the four layers —represented as green blocks, labeled as FC1–FC4, in
Fig. <ref>). The generator model takes the condition as input (the 8 values for the thicknesses of the
layers) and returns a generated h_ω spectrum (sampled at 200 frequencies). Consistent with the results in
Refs. <cit.>, we found that we did not need to feed a random data distribution z into
the generator for operation: 20% dropout layers between all layers of the generator (i.e., connections between two
consecutive neurons are dropped with a 20% chance) provide enough variability for this task (the dropout operations
are represented as orange blocks in Fig. <ref>). On the other hand, the critic (right side of Fig. <ref>)
takes both a sample spectrum and the condition (either generated or from the training distribution) in parallel
processing lines of a single hidden layer (with 150 and 50 neurons, respectively —FC5 and FC6 in Fig. <ref>). Then, it concatenates and processes the information through two additional
hidden layers (FC7 and FC8 in Fig. <ref>, with 100 and 50 neurons, respectively) to output a single number, the score. All hidden layers feature a scaled exponential linear unit (selu)
activation function. In all models discussed in this work, as a pre-processing step, we will also calculate the
logarithm of the spectra, subtract the mean of both the input parameters and the spectra, and divide both the system
parameters and the spectra by the standard deviation (this normalization operation is represented by grey blocks
in Fig. <ref>). Finally in both the generator and the critic a final linear activation layer is added to ensure the output
has the correct size (red blocks in Fig. <ref>).
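As a rough illustration of this architecture, the generator might be written in PyTorch as follows; the framework choice and all identifiers are ours, not the authors' code.

```python
import torch.nn as nn

class Generator(nn.Module):
    """Sketch of the generator described above: four fully connected
    layers of 50, 100, 150, and 200 neurons with selu activations and
    20% dropout between layers, plus a final linear layer; the input is
    the 8 (normalized) thicknesses, the output a 200-point spectrum."""
    def __init__(self):
        super().__init__()
        widths = [8, 50, 100, 150, 200]
        layers = []
        for i in range(4):
            layers += [nn.Linear(widths[i], widths[i + 1]), nn.SELU(),
                       nn.Dropout(0.2)]
        layers.append(nn.Linear(200, 200))   # final linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, w):   # w: (batch, 8) layer thicknesses
        return self.net(w)
```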
We focus now on analyzing the evolution with the training steps of the loss functions of the critic and the
generator (L_C(G,C) and L_G(G,C), respectively), as obtained by training the architecture displayed in
Fig. <ref> with the 6,561 h_ω spectra described in Section <ref>. Monitoring the loss
functions underlying our model during the training process is critical to understanding, diagnosing, and improving
the whole generative adversarial framework studied in this work. The obtained numerical results are summarized in
Fig. <ref> (panel (a) corresponds to L_C(G,C), while panel (b) displays the results for L_G(G,C)).
As seen in Fig. <ref>(a), L_C(G,C) initially displays a transient behavior based on a fast drop in
value followed by a sudden increase and an oscillation (around 10^3 training steps), until it reaches a
stationary value at approximately 10^4 training steps. To get additional insight into the numerical origin of this
evolution, the inset of Fig. <ref>(a) shows the dependence with the training steps of the three different
terms forming L_C(G,C), namely, 𝔼_ x∼ p_data( x)[C( x)],
𝔼_ z∼ p_z( z)[C(G( z))], and the gradient penalty, λ 𝔼_x̂∼ p_x̂[( ||∇_x̂C(x̂)||_2-1)^2
] (see Eq. <ref>). As observed, L_C(G,C) is dominated by the expected values of the score
value produced by the critic model for both the true training samples, (x), and the samples fabricated
by the generator, G( z) (black and red lines, respectively, in the inset of Fig. <ref>(a) —the green
line corresponds to the gradient penalty term). A remarkable aspect of the overall evolution of both expected value
contributions to L_C(G,C) is the fact that, despite their complicated transient behavior, they lock their difference in values after around 10^4 training steps. This in turn allows L_C(G,C) to reach a stationary value
(notice the difference in sign between both terms in Eq. <ref>), which marks the completion of the
learning process by the critic.
Figure <ref>(b) shows the results for the evolution of the generator loss function, L_G(G,C), during
the training process. The inset of Fig. <ref>(b) displays the corresponding evolution of the MAE
contribution to L_G(G,C) (i.e., the evolution of the first term in the r.h.s. of Eq. <ref>). As deduced
by comparing the values reached by the MAE with those of L_G(G,C), the generator loss function is dominated
by the expected value term, 𝔼_ z∼ p_z( z)[C(G( z))] (second term in the r.h.s.
of Eq. <ref>). As described above, for large values of the training step (≳ 10^4), this latter
term necessarily follows its counterpart term in the critic loss function (so, as also pointed out above,
their difference is maintained and a stationary value of L_C(G,C) is reached). Therefore, since we do not expect
L_G(G,C) to reach a stationary value, MAE becomes the key magnitude to monitor the quality of the learning
process of the generator (note that this is also in accordance with the overall goal of the proposed
application: creating artificial spectral samples indistinguishable from the real ones). Indeed, as shown in
inset of Fig. <ref>(b), the MAE computed for the studied problem converges to values <0.2 for the
larger training step values considered in our calculations.
Next, we proceed to quantify our model's proficiency in reproducing the spectral heat transfer coefficient (HTC)
of the studied physical system, based on the model architecture illustrated in Fig. <ref>. For this analysis,
our focus is on two distinctive metrics, concentrating on two different aspects of the spectra. The first,
the per-point relative mean error (L_point), gauges the model's capability to accurately represent
each point in the spectrum through a relative error analysis. The second, the integral relative mean error
(L_integ), is predominantly influenced by the model's accuracy in reproducing the primary spectral
characteristics, notably the resonances. Their respective definitions are as follows:
L_point = 1/(N m)∑_i=1^N ∑_j=1^m |y^i_j - ŷ_j^i|/y^i_j ,
L_integ = 1/N∑_i=1^N ∫ |y^i - ŷ^i| dω/∫ y^i dω ,
where N is the number of spectra, m the number of points in each spectrum, y^i is a data
example and ŷ^i is the corresponding generated example by the model. Note that we undo all pre-processing
operations to perform these calculations, and that all integrals are performed over frequency points.
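A minimal NumPy sketch of both metrics follows, assuming the spectra are stored row-wise in arrays with all pre-processing undone; names are illustrative.

```python
import numpy as np

def error_metrics(Y, Y_hat, omega):
    """Per-point (L_point) and integral (L_integ) relative mean errors.

    Y, Y_hat: (N spectra, m frequency points) in physical units;
    omega: frequency grid of length m."""
    l_point = np.mean(np.abs(Y - Y_hat) / Y)
    num = np.trapz(np.abs(Y - Y_hat), omega, axis=1)
    den = np.trapz(Y, omega, axis=1)
    l_integ = np.mean(num / den)
    return l_point, l_integ
```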
We start our analysis by establishing a baseline outcome for comparison using a simple FFNN, comprised of five hidden layers, each hosting 200 neurons,
characterized by a selu activation function, and a final linear activation layer. An analogous network has been previously shown
to effectively model this identical physical system given ample data <cit.> —consequently, we
anticipate that this particular network will serve as a suitable benchmark. Another important aspect is the apparent similarity between the generator and the FFNN architectures. We aim to demonstrate that this specific simple design,
given a sufficiently large dataset, can accurately replicate the spectral characteristics inherent to the type
of systems under study, without resorting to any data augmentation technique. To accomplish this, we split the
original dataset, which consists of 6,561 h_ω spectra, allocating 80% for training set and the remaining
20% for the validation set. Our findings reveal that after 50,000 training iterations (epochs), and using the
Adam optimization algorithm with a steady learning rate of 3×10^-4, this simple neural network is
proficient in replicating the system's spectra, reaching values of L_point and L_integ
for the validation set of 3.61% and 1.45%, respectively (note that these results are comparable to those
found in Ref. <cit.>).
We then proceed to assess the capability of the CWGAN as a numerical engine for spectral data augmentation. To do that,
we follow a two-step approach. First, once the CWGAN has been trained, we use it to generate a number of additional
spectra to include on the training set. Second, we retrain the above-described FFNN with this new data set to create
an augmented FFNN. In this work, we generate a total of 10,000 new spectra for this data augmentation process, which are added to the training set.
This particular amount of additional spectra was chosen after observing that our results are converged when adding
to the original dataset a number of extra spectra in the range 5,000 – 10,000. Note that, once the generator has
been trained, this augmentation of the original dataset is computationally inexpensive.
Following this approach, first, we found that when using the training/validation split considered until now
(80%-20%), the FFNN and the augmented FFNN have similar performance, both in terms of per-point relative mean
accuracy and integral relative mean error (respective error values of approximately
3.6% and 1.5% are
obtained for both models). This suggests that, as anticipated, using a sufficiently large dataset minimizes the
impact of augmenting the original dataset, making any data augmentation barely noticeable. However, that conclusion
changes when a different data scenario is considered. To modify the amount of data available to the models, we
increasingly reduced the size of the training set by transferring part of it to the validation set, and retrained
both the simple and augmented FFNNs from scratch (different values of the split training/validation were
considered in the range 80%-20% – 1%-99% —note that the data augmentation is done separately for each split ratio).
Figure <ref> summarizes the main results of this analysis using the two error metrics considered in
this work (panels (a) and (b) correspond to L_integ and L_point, respectively —blue dots (lines) display the results for the simple FFNN, whereas black dots (lines) correspond to the CWGAN-augmented FFNN). As seen,
both in terms of L_integ and L_point, the simple FFNN and the augmented FFNN have similar
performance when the validation set fraction is smaller than approximately 70% (we can therefore ascribe that
interval to a sufficiently large training set scenario). However, in the case of L_integ (Fig. <ref>(a)), further increasing the validation set fraction (i.e., further reducing the training set size) leads the augmented FFNN to perform increasingly better than the simple FFNN. As observed, a value of L_integ=13.2% is obtained with the simple FFNN in the limit case of a validation set fraction of 99%, whereas the augmented FFNN reduces that error to roughly half (L_integ=6.8%). The performance improvement is not so dramatic in the case of the per-point
relative mean error (Fig. <ref>(b)), but we still observe slight improvements in the low-data scenario
(for the extreme case of a validation set fraction of 99%, L_point=18.2% is obtained for the simple
FFNN, while the augmented FFNN leads to L_point=16.8%).
Overall, the numerical results discussed above suggest that CWGAN data augmentation has intrinsic value beyond simply
providing larger datasets. In particular, the reported results show that CWGAN data augmentation holds
value in creating synthetic data more adaptable and resilient to scenarios with limited data. To the best of our
knowledge, this finding is reported here for the first time. We believe it could contribute to the development
of more efficient models that require smaller datasets, through the synthetic generation capabilities offered
by generative adversarial frameworks.
Finally, to conclude the present analysis, we compare the performance of the trained CWGAN as a surrogate model for generating h_ω spectra with that of the simple and augmented FFNNs. This application arises from the conditioning
modification we introduced in this work to conventional GANs (see Eq. <ref> and the corresponding discussion
in Section <ref>). This conditioning allows us to accurately keep track of the geometrical parameters
associated with each spectrum created by the generator of our CWGAN model, so once the generator is trained, it can
be decoupled from the whole architecture and used as a standalone surrogate model. The obtained numerical results are also displayed in Fig. <ref> (red points and red lines correspond to the results of a CWGAN used
as surrogate model). As observed, for the integral relative mean error (Fig. <ref>(a)) the CWGAN
displays worse performance than both the simple and the augmented FFNNs when the validation set fraction is below
70% approximately (i.e., the data regime that we previously labeled as of sufficiently large training set scenario
for both the simple and the augmented FFNNs). However, in the low-data regime (values of the validation set
fraction larger than 70%) we observe that the CWGAN surrogate gradually improves the results of a simple FFNN,
until fully reproducing the improvement reached by an augmented FFNN in the extreme case of 99% validation set
fraction. Regarding the per-point relative mean error (Fig. <ref>(b)), the CWGAN surrogate displays a significantly worse performance in comparison to both the simple and the augmented FFNNs for the validation set fractions below 70%, but from
that point on, it starts converging to the results of an augmented FFNN in the low-data regime. Ultimately, this
comparison between the surrogate role of all considered models reinforces our previous conclusion that the CWGAN shows higher
resilience to the reduction of training data than the simple FFNN, becoming also in this context an efficient architecture in low-data
scenarios.
§ CONCLUSIONS
In this work we have studied the application of Generative Adversarial Networks (GANs) for synthetic spectral data
generation, providing a solution to the data scarcity challenge pervasive in scientific domains where acquiring
substantial spectral signals is of paramount importance. Our main focus has been an illustrative problem in the domain of near-field
radiative heat transfer involving a multilayered hyperbolic metamaterial. We have analyzed the use of a Conditional
Wasserstein GAN (CWGAN) for data augmentation and studied its influence on the predictive capacities of a feed-forward
neural network (FFNN). Our results reveal that generating spectral data effectively entails two main modifications to traditional GANs: firstly, incorporating Wasserstein GANs (WGANs) to prevent mode collapse, and secondly, conditioning these WGANs to secure accurate labels for the generated data. It is demonstrated that a basic FFNN, augmented with data yielded by a CWGAN, substantially improves its efficiency in scenarios where data is scarce, showing the inherent importance of CWGAN data augmentation beyond the simple expansion of datasets. Moreover, we show that CWGANs can function as a superior surrogate model when data is limited. Overall, this work aims to contribute to the research area of generative AI algorithms' applicability
beyond the conventional field of image generation. We believe that our findings contribute to the understanding
and applicability of generative AI algorithms in data-constrained contexts and could stimulate further
research work in the application of generative AI in a variety of scientific scenarios.
§ ACKNOWLEDGEMENTS
J.J.G.E. was supported by the Spanish Ministry of Science and Innovation through a FPU grant (FPU19/05281).
J.C.C. acknowledges funding from the Spanish Ministry of Science and Innovation (PID2020-114880GB-I00).
J.B.A. acknowledges financial support from Ministerio de Ciencia, Innovación y Universidades (RTI2018-098452-B-I00).
§ REFERENCES
|
http://arxiv.org/abs/2307.04245v1 | 20230709185117 | A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing | [
"Aishik Rakshit",
"Samyak Mehta",
"Anirban Dasgupta"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
A Novel Pipeline for Improving Optical Character Recognition through Post-processing Using Natural Language Processing
Aishik Rakshit, Samyak Mehta, Anirban Dasgupta
August 12, 2023
======================================================================================================================
Optical Character Recognition (OCR) technology finds applications in digitizing books and unstructured documents, along with applications in other domains such as mobility statistics, law enforcement, traffic, security systems, etc. The state-of-the-art methods work well with the OCR with printed text on license plates, shop names, etc. However, applications such as printed textbooks and handwritten texts have limited accuracy with existing techniques. The reason may be attributed to similar-looking characters and variations in handwritten characters. Since these issues are challenging to address with OCR technologies exclusively, we propose a post-processing approach using Natural Language Processing (NLP) tools. This work presents an end-to-end pipeline that first performs OCR on the handwritten or printed text and then improves its accuracy using NLP.
OCR, NLP, Handwritten Text, Transformer, Paddle-Paddle
§ INTRODUCTION
Optical Character Recognition (OCR) is a technology for extracting texts from images containing text information <cit.>. Such images occur from photos containing text information, scanned documents, scene photos, subtitle text superimposed on an image, etc.
OCR is useful as images consume more memory space than text files. Moreover, text information is easier to copy and edit and helpful in many artificial intelligence (AI) tools, particularly for Natural Language Processing (NLP) problems.
Some general applications include self-service utility meter reading, intelligent traffic surveillance and parking system, license plate recognition, contactless check-in at private and public transportation stations, intelligent security systems, digitizing old books, etc. <cit.>. As such, OCR helps to reduce crime, increase police efficiency, and improve safety <cit.>.
The OCR methods recognize characters in the image independently by image segmentation considering only the shape and structure of the characters.
Significant research on OCR has been reported on recognizing texts from scanned documents and number plates, with sufficient performance. Even OCR on handwritten texts in different languages has received much attention, however with limited accuracy.
Hence, there is scope for improvement in the efficiency of OCR of handwritten text. Even the OCR of printed text is yet to be perfect. The prime challenges for inaccurate or missing text are as follows:
* variations in font style and size,
* case sensitivity,
* similar character shapes, such as `o' and `0',
* varying orientations.
These OCR mistakes negatively impact several NLP applications, including text summarizing, part-of-speech (POS) tagging, sentence boundary detection, topic modeling, named entity recognition (NER), and text classification.
The ability of NER tools to detect and identify proper nouns and classify them into the person, place, and organization categories significantly deteriorates when the error rate (ER) of OCR output rises. Post-processing OCR outputs can significantly help correct these mistakes and increase the accuracy of the outputs.
Hence, the objective is to develop an end-to-end pipeline that first performs OCR on the single-line handwritten or printed text and then improves its accuracy by post-processing the OCR output using NLP.
§.§ Prior Art
The current OCR approaches use Convolutional Neural Network (CNN)-based encoders for picture interpretation and Recurrent Neural Network (RNN)-based decoders for text generation.
The two most popular OCR models are the Transformer-based OCR (Tr-OCR) model <cit.> and the Paddle-Paddle OCR (PP-OCR) model <cit.>.
The Tr-OCR model uses the Transformer architecture for wordpiece-level text generation and image understanding. TrOCR uses a pre-trained image Transformer as the encoder and a pre-trained text Transformer as the decoder. This model has been trained on the IAM handwritten dataset.
The PP-OCR model consists of text detection, text recognition, and detected box rectification using a convolutional recurrent neural network (CRNN) as a text recognizer at the back end. The CRNN has convolutional layers for feature extraction followed by recurrence for sequence modeling.
These architectures produce efficient results if trained on a specific type of data. However, generalizing is difficult on unconstrained datasets due to the large variability.
In the domain of OCR output correction, prior algorithms mainly operate on the standard pipeline of deletes, followed by transposes, followed by replaces, and finally inserts. This method, used in implementing TextBlob's spelling correction, takes Peter Norvig's "How to Write a Spelling Corrector" <cit.> as ground truth for training. This approach is improved using SymSpellPy <cit.>. The symmetric delete spelling correction algorithm lowers the complexity of edit candidate generation and dictionary lookup for a specific Damerau-Levenshtein distance. It is language-independent and about six times faster than the traditional approach.
§ MATERIALS AND METHODS
This work first evaluates two OCR models, viz., Tr-OCR and PP-OCR, on various handwritten and printed datasets.
It then chooses the better-fitting model for recognizing single-line handwritten text. A line segmentation module for segmenting a multi-line document into single lines and a classifier that classifies each of these single lines into printed or handwritten text are also implemented. The output of the OCR model is then fed to our post-processing model, which improves the accuracy of the OCR output.
The OCR output post-processing task aims to identify the sequence of words X = x_1 x_2... x_m present in the original hardcopy document given a sequence of n OCR degraded tokens Y = y_1 y_2... y_n. It should be noted that n and m are not always equal because segmentation errors could result in OCR sub-sequences that are not correct word sequences.
We divide our work into two modules. The first consists of the segmentation unit, the classification unit and the OCR model unit. The OCR models are evaluated on various real-life datasets. We then select the better-fit model as input to the second module, i.e., NLP-based post-processing. This module takes in the outputs of the OCR model and then post-processes it using NLP techniques to minimize error.
§.§ Module-A: OCR Engine
Module A consists of the first half of the pipeline which is to first perform line segmentation on a multi-line document, then classify each line into printed and handwritten text using a classifier, then perform OCR on it using a suitable OCR model.
The two existing popular OCR models are evaluated on various datasets featuring different fonts, handwritten text, occlusions, background colors, and noise.
§.§.§ Segmentation
The aim here is to segment lines in documents using the A* path-planning algorithm <cit.>.
The method is as follows (a code sketch of steps 1 to 4 appears after the list):
* We first input a non-skewed document of either handwritten or printed text into this model and then convert the input image to a 2D grayscale image.
* We use a Sobel filter to detect the text edges in the image. The image is convolved with two 3*3 kernels (horizontal and vertical) to calculate the image derivatives.
* We then find the horizontal projection profile (HPP) of the edge-detected image. The HPP is the array of row-wise sums of pixel values, so rows containing text produce peaks whereas blank rows do not.
* We then detect peaks, taking the threshold as one-fourth of the difference between the maximum and minimum HPP values. This helps separate potential line-segmentation regions from the text.
* We then make a cut wherever the text of an upper line connects with that of the lower line.
* We then use A* path planning along the segmentation regions and record the paths. This segments the document into single lines.
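A minimal OpenCV/NumPy sketch of steps 1 to 4 is given below; the A* path search that actually cuts the lines is not reproduced, and the threshold interpretation (minimum plus one-fourth of the range) is our assumption.

```python
import cv2
import numpy as np

def text_line_regions(image_path):
    """Sobel edge map, horizontal projection profile (HPP), and the
    one-fourth threshold used to flag candidate text-line rows."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # horizontal kernel
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # vertical kernel
    edges = np.sqrt(gx**2 + gy**2)
    hpp = edges.sum(axis=1)                           # row-wise sums
    thr = hpp.min() + 0.25 * (hpp.max() - hpp.min())  # one-fourth threshold
    return hpp, hpp > thr                             # mask of text rows
```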
§.§.§ Classification
Convolutional neural networks (CNN) are used to classify text lines as either printed or handwritten, however it is actually the collection and preparation of the data that presents the biggest challenges. Presenting enough samples to an artificial neural network (ANN) is sufficient to achieve a decent level of accuracy for a wide range of tasks. In fact, current artificial neural networks (ANN) are already capable of handling extremely complicated data (such as ImageNet, which includes 90 different dog breeds to discriminate).
The system we created for this work is a DenseNet-121 that has been modified for the binary classification of handwritten and printed text. It is wrapped in some utility classes. A convolutional neural network called DenseNet-121 has 121 layers, the majority of which are tightly connected in 4 blocks. However, compared to designs with more parameters, it has a comparatively low number of parameters for a network of its size and so requires less training data. More information on the classifier used can be found in <cit.>.
§.§.§ Datasets
The specific datasets that we have used for the purpose are:
* Born-Digital Images Dataset <cit.>: This dataset contains images made digitally employing a desktop scanner, a camera, and screen capture software. It has 3564 images of words clipped from the actual images and a text file containing the ground truth transcription of all images provided.
* Incidental Scene Text Dataset <cit.>: This dataset consists of 4468 cut-out word images corresponding to the axis-oriented bounding boxes of the words provided and a single text file with the ground truth.
* License Plate Dataset <cit.>: This dataset has 209 cropped license plates using the original bounding boxes and has all the single characters labeled, creating a total of 2026 character bounding boxes. Every image comes with a .xml annotation file.
* Single Line Handwritten Text Dataset <cit.>: This dataset <cit.> contains images of handwritten single-line English texts whose labels are similar to the IAM dataset. There are around 400 images along with their labels.
* Bing Images of Short Quotes: This dataset contains about 215 images of short quotes with different background styles. This dataset is unlabelled as its primary purpose is to see the improvements in the outputs after post-processing using NLP.
§.§.§ Performance Metrics
The performance evaluation metrics used are the character error rate (CER) and the word error rate (WER). The CER is the ratio of the character-level edit distance (insertions, deletions, and substitutions, including spaces) between the OCR output and the ground truth to the number of characters in the ground truth. The WER is the analogous ratio computed at the word level.
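As a quick illustration, both metrics can be computed with the jiwer library; this is one common implementation, used here only for demonstration.

```python
import jiwer  # pip install jiwer

reference  = "the quick brown fox"
hypothesis = "the quick br0wn fox"

cer = jiwer.cer(reference, hypothesis)  # char-level edit distance / #chars
wer = jiwer.wer(reference, hypothesis)  # word-level edit distance / #words
print(f"CER = {cer:.3f}, WER = {wer:.3f}")  # CER = 0.053, WER = 0.250
```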
§.§ Module-B: NLP Engine
The models we consider are as follows:
§.§.§ ByT5
The Google AI team debuted T5 <cit.>, also known as a Text-To-Text Transfer Transformer, in 2020. The encoder-decoder structure of the T5 transformer model is identical to that of conventional transformer models. There are 12 pairs of blocks of encoder-decoders in it. Self-attention, a feed-forward network, and optional encoder-decoder attention are all present in each block.
The ByT5 <cit.> proposes a new model that can directly process raw text, i.e., it would be token-free. The benefits are as follows:
* They can process text in any language. Tokenizers tailored to specific languages are not necessary.
* They reduce the trouble of having complicated text preparation pipelines and are noise-resistant.
* Now that we only need 256 embeddings for a byte-level model, we no longer need a large vocabulary matrix.
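For illustration, a ByT5 checkpoint can be loaded and run through the Hugging Face transformers library as sketched below; note that the base checkpoint will not correct OCR errors until it is fine-tuned on degraded/clean text pairs, as described later.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tok = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

# Byte-level tokenization: no language-specific vocabulary is required.
inputs = tok("tne qu1ck brown f0x", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```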
§.§.§ BART
Bidirectional and Auto-Regressive Transformer <cit.>
BART is a pretraining denoising autoencoder for sequence-to-sequence models. The text is first corrupted using a random noise function, and then a model is trained to recreate the original text. It employs a typical Transformer-based neural machine translation architecture that, despite its simplicity, generalizes several more modern pretraining approaches, including GPT with its left-to-right decoder and BERT (owing to the bidirectional encoder).
The dataset used to train the models in a supervised manner was generated synthetically from the OSCAR Corpus.
§.§.§ Alpaca-LORA
The Alpaca model was optimized through fine-tuning from Meta's LLaMA 7B model, which was achieved through supervised learning on a set of 52K instruction-following demonstrations generated from OpenAI's text-davinci-003. The process of generating the dataset resulted in 52K distinct instructions and corresponding outputs, and was accomplished at a cost of less than $500 by utilizing the OpenAI API. Hugging Face's training framework was used to fine-tune the LLaMA models, with techniques such as Fully Sharded Data Parallel and mixed precision training being employed. The fine-tuning process of a 7B LLaMA model was accomplished in 3 hours, using 8 80GB A100s.
We used the Alpaca model in a zero shot manner and it was run in 8-bit precision using bits and bytes.
We tried multiple prompts with the Alpaca-LORA 7B model, and the one that worked best for us was f"Fix all the errors in the sentence : text".
§.§.§ Synthetic Dataset Generation
OCR degraded text is generated for training our byT5 Transformer model using the nlpaug <cit.> library. The OCR Augmentor is used, which can be used to generate character-level errors in the text of the OSCAR <cit.> Corpus.
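A short sketch of this augmentation step is shown below; the example sentence is ours, and recent nlpaug versions return a list of augmented strings.

```python
import nlpaug.augmenter.char as nac

aug = nac.OcrAug()  # injects OCR-style confusions such as 'o' -> '0'

clean = "Optical character recognition of printed documents."
noisy = aug.augment(clean)  # list of OCR-degraded variants
print(noisy)
```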
§.§.§ Preprocessing Inputs
To prevent any discrepancies between the lengths of the original text and the text generated by the model, we chunk the texts into lengths of 128 words; as subword tokenization is being used, we set the maximum length to 256 and replace all the padding tokens with -100 to prevent loss calculation for them.
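A minimal sketch of this label-masking step, following the standard Hugging Face convention that label positions set to -100 are skipped in the loss, might look as follows (function name is ours).

```python
def prepare_labels(tokenizer, text, max_length=256):
    """Tokenize a ground-truth chunk and mask padding with -100 so that
    Hugging Face seq2seq models skip those positions in the loss."""
    enc = tokenizer(text, max_length=max_length,
                    padding="max_length", truncation=True)
    return [(t if t != tokenizer.pad_token_id else -100)
            for t in enc["input_ids"]]
```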
§.§.§ Post-Processing Model Outputs
The correct spacing is inserted into the model output using a word-frequency distribution. Given a text corpus, we assume that all words are distributed independently, so the relative frequency of each word is all we need to know. It is reasonable to assume that they adhere to Zipf's law <cit.>, which states that the probability of a word having rank n in a list of words is approximately 1/(n log N), where N refers to the total number of words in the corpus.
Once the model is fixed, we can utilize dynamic programming to determine the locations of the spaces. The most likely sentence is the one that maximizes the product of the probabilities of its individual words, and dynamic programming makes it simple to compute. Rather than using the probability itself, we use a cost defined as the logarithm of the inverse probability to prevent overflow.
This has been done using the wordninja <cit.> library.
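Its usage is a one-liner, as sketched below with an example string of our own.

```python
import wordninja  # pip install wordninja

print(wordninja.split("imaginationismoreimportantthanknowledge"))
# ['imagination', 'is', 'more', 'important', 'than', 'knowledge']
```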
§ RESULTS
§.§ OCR model evaluation
We will first discuss the results of the two OCR systems (PP-OCR and Tr-OCR) on various datasets, as discussed above, without any postprocessing. We then proceed to show results of the segmentation and classification sub-modules.
§.§.§ Dataset 1: Born-Digital Images Dataset
The outputs of some sample images in Fig. <ref> are shown in Table <ref>.
The ultra-lightweight PP-OCR model, pre-trained on the English and Chinese languages, resulted in a CER of 0.44, while the Tr-OCR model fine-tuned on the SROIE printed-text dataset resulted in a CER of 0.3.
Hence Tr-OCR performed better than PP-OCR on this dataset.
§.§.§ Dataset 2: Incidental Scene Text Dataset
The outputs of some sample images in Fig. <ref> are shown in Table <ref>.
Using the ultra-lightweight PP-OCR model, which is pre-trained on the English and Chinese languages, resulted in a CER of 0.65, while the Tr-OCR model fine-tuned on the SROIE dataset (printed text) resulted in a CER of 0.41.
Hence Tr-OCR performed better than PP-OCR on this dataset.
§.§.§ Dataset 3: License Plate Dataset
This dataset consists of 209 cropped license plates (as seen in Fig. <ref>) using the original bounding boxes and has all the single characters labeled, creating a total of 2026 character bounding boxes. Every image comes with a .xml annotation file.
The outputs of some sample images in Fig. <ref> are shown in Table <ref>.
Using the ultra-lightweight PP-OCR model pre-trained on the English and Chinese languages resulted in a CER of 0.18, while the Tr-OCR model fine-tuned on the SROIE dataset of printed text resulted in a CER of 0.24.
Hence PP-OCR performed better than Tr-OCR on this dataset.
§.§.§ Dataset 4: Single Line Handwritten Text Datasett
This dataset contains handwritten single-line images (as seen in Fig. <ref> and Fig. <ref>), and it's labeled similarly to the IAM dataset. Around 400 lines of handwritten images with their labels are provided.
Using the ultra-lightweight PP-OCR model pre-trained on English and Chinese resulted in a CER of 0.53 and a WER of 0.8, while the Tr-OCR model pre-trained on the IAM handwritten-text dataset resulted in a CER of 0.09 and a WER of 0.24.
Hence Tr-OCR performed better than PP-OCR on this dataset.
The outputs of some sample images in Fig. <ref> and Fig. <ref> are shown in Table <ref> and Table <ref> respectively.
§.§ Classification
The model that classifies text into handwritten and printed categories was tested on two datasets: the Bing images of short quotes (discussed earlier) and a self-made handwritten dataset of around 30 images.
On the handwritten dataset, it classified 30 of 32 images correctly as handwritten text and 2 incorrectly as printed text. On the printed-quotes dataset, it classified 191 of 198 images correctly as printed text and 7 incorrectly as handwritten text.
Overall the classification model has an accuracy of about 96%.
§.§ Module-A pipeline Results
A multi-line document is first fed to the segmentation module, which breaks the document down into single text lines. Each line is then fed to the classification model, which classifies it as handwritten or printed text. If it is handwritten, the TrOCR model trained on handwritten text is used to perform OCR on it; if it is classified as printed text, the TrOCR model trained on printed text is used instead.
The OCR outputs for the individual lines are then concatenated to obtain the output corresponding to the input document.
Figure <ref> is an example of a handwritten document.
After segmenting it into individual lines we get Figure <ref>.
The classification model classifies each line correctly as handwritten text, as shown in Table <ref>.
We then perform OCR using the TrOCR model pre-trained on handwritten text. The results obtained are shown in Table <ref>.
The CER for this example was 0.079 and WER was 0.2.
Similarly, we ran this pipeline over a few more examples, both printed and handwritten.
The average CER over all these examples is 0.103 and average WER is 0.274.
§.§ Results after Post Processing
Figures <ref> and <ref> show two results of our pipeline: the images of one-line quotes, the OCR output, and the post-processed output.
The OCR outputs for Fig. <ref> and Fig. <ref> show how spaces and spellings are corrected by the proposed pipeline.
From Table <ref> and Table <ref>, we can see that both the CER and WER on the datasets are reduced to a great extent.
§ CONCLUSION
The evaluation of the two OCR models, PP-OCR and TrOCR, over different datasets showed that TrOCR outperforms PP-OCR on all datasets except the license-plate dataset.
Fine-tuning TrOCR on the license-plate dataset would be required for improved results; we leave this as future work.
Tr-OCR can be used for OCR of printed and handwritten texts as it gives better results in both cases.
The line segmentation module works well for non-skewed documents. For skewed documents, another segmentation algorithm has to be developed, which we also leave as future work.
Similarly, our OCR output post-processing pipeline effectively reduces the errors in OCR-degraded text. This can be seen in our results: for the first synthetically generated dataset, the WER of the OCR output came down from 0.455 to 0.045, and the CER came down from 0.124 to 0.005. Similarly, on the Kaggle single-line dataset, the CER decreased from 0.169 to 0.023 and the WER from 0.363 to 0.135.
§ ACKNOWLEDGEMENT
The authors would like to acknowledge the funding received from the IITG startup grant (xEEESUGIITG01349ANRD001) for this research.
IEEEtran
|
http://arxiv.org/abs/2307.04688v1 | 20230710164303 | A tensorial-parallel Chebyshev method for a differential game theory problem | [
"Carmelo de Castro",
"Víctor Gatón",
"Beatriz Gómez"
] | math.NA | [
"math.NA",
"cs.NA",
"math.OC"
] |
A tensorial-parallel Chebyshev method for a differential game theory problem
Carmelo de Castro, Víctor Gatón, Beatriz Gómez
August 12, 2023
======================================================================
This paper concerns the design of a multidimensional Chebyshev-interpolation-based method for a differential game theory problem. In continuous game theory problems it might be difficult to find analytical solutions, so numerical methods have to be applied. As the number of players grows, computational costs may increase due to the curse of dimensionality. To handle this, several techniques may be applied, and parallelization can be employed to reduce the computational time cost. Chebyshev multidimensional interpolation allows efficient multiple evaluations simultaneously along several dimensions, so it can be employed to design a tensorial method which performs many computations at the same time. This method can also be adapted to handle parallel computation, and the combination of these techniques greatly reduces the total computational time cost. We show how this technique can be applied in a pollution differential game. Numerical results, including error behaviour and computational time cost, comparing this technique with a spline-parallelized method, are also included.
Keywords: Transboundary pollution, Differential games, Parabolic differential equations, Chebyshev multidimensional interpolation.
§ INTRODUCTION
In differential game theory (see <cit.>), several agents (or players) jointly control, through their actions, a dynamical system described by differential state equations. The actions of the agents are taken in order to maximize a particular objective function (for each player) which outcome depends on the state of the system and the actions of other players. Differential game theory is broadly employed in many areas including, for example, economics, management, engineering and operations research.
In general, it might not be easy to find explicit solutions for differential game problems, even if we restrict ourselves to a small number of players, and numerical methods have to be employed (see <cit.> or <cit.>). If collocation methods are employed, as the number of players increases we have to deal with the so-called “curse of dimensionality”, which can greatly increase the computational cost of the numerical methods.
Spectral methods (see <cit.>) are a class of spatial discretizations for partial differential equations with an order of convergence that depends on the regularity of the function to be approximated. Spectral methods have been successfully employed in many fields and have been proved competitive with other alternatives, both in precision and computational time cost. For example, Chebyshev interpolation has been employed in <cit.> and <cit.> to price financial derivatives. In <cit.>, a Fourier cosine method is employed to solve backward stochastic differential equations. Other examples are <cit.>, <cit.> or <cit.>. In game theory and optimal control, spectral methods have also been employed. In <cit.>, a Chebyshev pseudospectral method is employed for obtaining a numerical solution of an open-loop Nash equilibrium and in <cit.> a Spectral Galerkin method is developed.
The literature in economic and environmental problems can be divided in two categories: the papers which study the economic growth theory with spatial diffusion (for example <cit.>, <cit.> or <cit.>), and papers which deal with the spatial dimension in environmental and resource economics (for example <cit.>, <cit.> or <cit.>). Concerning transboundary pollution games specifically, <cit.> and <cit.> are seminal papers and a survey of the literature in that area can be found in <cit.>.
The differential game that we are going to employ to test our numerical method is developed in <cit.>, and it corresponds to a model which combines two aspects: first, it adds a spatial dimension to transboundary pollution dynamic games and, second, strategic aspects to spatial economics, in particular to pollution control in a spatial setting.
The paper is organized as follows. In Section <ref> we make a brief description of the differential pollution game, which can be found in <cit.>, and we present the Chebyshev interpolation based algorithm that can be employed to numerically solve the game. In Section <ref>, we describe several algorithms that allow an efficient valuation of the polynomials involved and we show how the method can be extended to handle parallelization. Section <ref> gives some numerical results, including both numerical error behaviour and a comparison of the computational cost with the spline-based method which is developed in <cit.>. Finally, Section <ref> presents some concluding remarks.
All the algorithms presented in this work have been implemented in Matlab v2020b. All the numerical experiments have been performed in a personal computer with an Intel Core processor i7-8700K of 6 cores and 12 threads, with 3,70GHz(base)/4,70GHz(turbo) and 16Gb of RAM memory.
§ A POLLUTION DIFFERENTIAL GAME
The model is a J-player non cooperative differential game. Let Ω be a planar region with a given partition in J subdomains such that
Ω=⋃^J_j=1Ω_j, Ω_i∩Ω_j=∅, i≠ j,
where Ω denotes the closure of Ω.
Let ∂_ij be the common boundary between Ω_i and Ω_j, i.e.
∂_ij=∂Ω_i∩∂Ω_j=Ω_i∩Ω_j, i≠ j.
Each player i controls just region Ω_i and he can choose the rate of pollutant emissions in that region. The objective of each player is to maximize his own payoff.
Let u_i(x,t), i=1,...,J be the emission rate of subregion i, at time t≥0 and at point x∈Ω. Function P(x,t) denotes the stock of pollution defined ∀x∈Ω.
For scalar functions f:Ω→ℝ, symbol ∇ f corresponds to the spatial gradient and, for vectorial functions f:Ω→ℝ^2, symbol ∇· f=∂ f_1/∂ x+∂ f_2/∂ y represents the divergence.
The main objective in <cit.> was to study the spatial relation between decision makers. We are going to stick to the simplest model (no wind pollution transport, no non-linear reaction terms, the simplest discrete-space model version, etc.). More complex models, which might require further numerical treatment, will be considered in future research (see Section <ref>).
The following parabolic partial differential equation gives the spatio-temporal dynamics of the stock of pollution:
∂ P/∂ t=∇· (k∇ P)-cP+F(u), x∈Ω,
P(x,0)=P_0(x), x∈Ω,
α(x)P(x,t)+k(x)∇ P^T(x,t)n=α(x)P_b(x,t), x∈∂Ω,
where u=[u_1,...,u_J]^T is the vector of emission rates and k=k(x) is a local diffusion coefficient, which is assumed to be a smooth function such that k_m≤ k(x)≤ k_M, ∀x∈Ω, where 0<k_m<k_M are given constants. This coefficient measures the velocity at which the stock of pollutant diffuses at a location x. The term cP, with c = c(x) ≥ 0, represents the natural decay of the pollutant.
It is assumed that only agent j emits in subregion Ω_j, j=1,...,J and that each x∈Ω belongs to just one region. Therefore, the source term can be written as:
F(u(x,t))=∑^J_j=1F_j(u_j(x,t))1_Ω_j(x),
where F_j, j=1,...,J are smooth functions and 1_Ω_j is the characteristic function of Ω_j. By the hypothesis of the model, we have that F(u(x,t))=F_j(u_j(x,t)) if x∈Ω_j.
Concerning boundary condition, α(x) is a non-negative smooth function that appears due to Newton's law of diffusion on the boundary of Ω.
The objective of player i, i=1,...,J is to maximize his payoff
J_i(u_1,...,u_J,P_0)=∫_0^+∞∫_Ω_ie^-ρ tG_i(u_1,...,u_J,P)dxdt,
subject to the dynamics given by (<ref>). Parameter ρ>0 is a given time-discount rate. The instantaneous welfare G is given by a benefit from consumption minus the damage caused by the stock of pollutants.
Each region i produces one consumption good, where the amount of production is controlled by player i, and such production produces emissions (pollution). Therefore, we can represent
G_i(u_1,...,u_J,P)=(B_i(u_i)-D_i(P))1_Ω_i
where B_i(u_i) corresponds to the instantaneous benefits from production and D_i(P) to the environmental damage caused by the accumulated stock of pollution. B_i and D_i are assumed to be smooth functions and respectively concave and convex in their arguments.
Now we proceed to describe the discrete-space version of the model. We only sketch the main ideas and refer to Appendix B of <cit.> for the details.
Functions u_i, P_i are considered densities of emissions and pollution stocks along region Ω_i. We define
p_i(t)=1/m_i∫_Ω_iP(x,t)dx, v_i(t)=1/m_i∫_Ω_iu_i(x,t)dx, i=1,...,J
where m_i=∫_Ω_idx.
Under a linear-quadratic specification and an infinite-time horizon
F_i(v_1,...,v_J):=β_iv_i, G_i(v_1,...,v_J,p):=v_i(A_i-v_i/2)-φ_i/2p_i^2,
p=[p_1,...,p_J]^T, v_i=v_i(p), m_i=m_j, ∀ i,j=1,...,J,
and some calculus, the objective of player i is to maximize
J_i(v_1,...,v_J,p_0)=∫_0^+∞e^-ρ t(v_i(A_i-v_i/2)-φ_i/2p_i^2)dt,
subject to the dynamics of the aggregated stock of pollution given by the set of ordinary differential equations
m_i dp_i/dt=∑^J_j=0, j≠ i k_ij(p_i-p_j)-m_ic_ip_i+m_iF(v_i), i=1,...,J
supplemented with a given initial state of pollution p^0=[p_1^0,...,p_j^0]^T.
§.§ A Chebyshev-based numerical method
Let h>0 be a positive parameter, t_n=nh the discrete times defined for all positive integers n and δ_h=1-ρ h the discrete discount factor.
We denote by u̅_i, i=1,...,J a sequence of real numbers u̅_i={u_i,n}_n=0^∞ and 𝒰 denotes the set of real sequences v̅ with v_n≥0, ∀ n∈ℕ.
For p=[p_1,...,p_J]^T∈ℝ^J and u=[u_1,...,u_J]^T∈ℝ^J, u_i≥0, i=1,...,J, we define
g_i(p,u)=∑^J_j=0, j≠ i k_ij/m_i(p_i-p_j)-c_ip_i+F(u_i), i=1,...,J
and we denote g(p,u)=[g_1(p,u),...,g_J(p,u)]^T.
In the time-discrete infinite horizon game, each player i=1,...,J wants to maximize
W_i(u̅_i,p_0)=h∑^∞_n=1δ^n_hG_i(u_i,n,p_i,n), u̅_i∈𝒰,
subject to
p_n+1=p_n+hg(p_n,u_n), n≥ 0
where p_n=[p_1,n,...,p_J,n]^T, u_n=[u_1,n,...,u_J,n]^T and p_0 is a given initial state.
The time-discrete value function V_h,i(p), i=1,...,J is obtained solving Bellman's equation
V_h,i(p)=u_i≥0max{hG_i(p_i,u_i)+δ_hV_h,i(p+hg(p,[u_i,u^*_-i]))}
where for i=1,...,J
u^*_i=u_i≥ 0argmax{hG_i(p_i,u_i)+δ_hV_h,i(p+hg(p,[u_i,u^*_-i]))}
and where, from now on, we employ the notation
[u_i,v_-i]=[v_1,...,v_i-1,u_i,v_i+1,...,v_J]^T, u_i∈ℝ, v∈ℝ^J.
We now present the main steps of a generalized collocation Chebyshev-based method. A review of Chebyshev interpolation and an implementation are presented in Section <ref>.
Step 0: Offline Computation
We define N_p=(N^p_1,...,N^p_J)∈ℕ^J and N_u=(N^u_1,...,N^u_J)∈ℕ^J, two J-dimensional vectors such that N^p_i, N^u_i>0, i=1,...,J.
With these J-dimensional vectors, we build two adequate sets of collocation points P⊂ℝ^J, U⊂ℝ^J (detailed in Section <ref>).
Let N_P=|P| and P={p̅_j∈ℝ^J, j=1,...,N_P}.
For each player i=1,...,J, we compute a Chebyshev interpolation polynomial in the control variables for every collocation node in the state variables, i.e. we compute
g^i_p̅_j(u), j=1,...,N_P,
which are N_P different interpolation polynomials in u, such that ∀ j=1,...,N_P it holds
g^i_p̅_j(u̅) =g_i(p̅_j,[u̅_i,u̅_-i]), ∀u̅∈U
We denote g_p̅_j(u)=[g^1_p̅_j(u),g^2_p̅_j(u),...,g^J_p̅_j(u)], j=1,2,...,N_P.
We compute some localization indexes (detailed in Section <ref>).
We set r=0 and a small time step h∈ℝ^+.
For each player i=1,...,J, we initialize the iteration with some given V^N_p,[0]_h,i(p̅_j) and u^[0](p̅_j), j=1,2,...,N_P.
For each player i=1,...,J, we compute the Chebyshev interpolation polynomial V^N_p,[0]_h,i(p) which interpolates V^N_p,[0]_h,i(p̅_j), j=1,2,...,N_P.
Step 1:
For each player i=1,...,J and each p̅_j, j=1,2,...,N_P, we compute the J-dimensional and one variable polynomial
𝒢^i_p̅_j(u)=g_p̅_j(u)|_u_-i=u^[r]_-i(p̅_j), j=1,2,...,N_P
Step 2:
For each player i=1,...,J and each p̅_j, j=1,2,...,N_P, we compute the one variable polynomial
𝒱^N_p,[r]_h,i,p̅_j(u)=V^N_p,[r]_h,i(p̅_j+h𝒢^i_p̅_j(u)).
Step 3:
For each player i=1,...,J, we find the strategy at each state node p̅_j, j=1,2,...,N_P which maximizes the objective function, i.e.
u^[r+1]_i(p̅_j)=u≥ 0argmax{𝒱^N_p,[r]_h,i,p̅_j(u)}.
Step 4:
For each player i=1,...,J, we define V^N_p,[r+1]_h,i(p) as the Chebyshev interpolation polynomial which interpolates 𝒱^N_p,[r]_h,i,p̅_j(u^[r+1]_i(p̅_j)), j=1,2,...,N_P.
If we are not below the prescribed tolerance,
|V^N_p,[r+1]_h,i(p)-V^N_p,[r]_h,i(p)|<TOL, i=1,...,J
we set r=r+1 and return to Step 1. Otherwise, we stop.
We point out that, in the particular pollution problem we are dealing with, g^i_p̅_j(u)=g^i_p̅_j(u_i) for i=1,...,J and all p̅_j∈P, i.e. each g^i depends on a single control variable, but we prefer to present the generalized algorithm for the case where it does not.
§ THE CHEBYSHEV INTERPOLATION
We first give a brief review of multidimensional Chebyshev interpolation and comment on how the different computations involved in the previous algorithm can be performed efficiently.
We are going to employ the work presented in Section 2 of <cit.>, where it is described how multidimensional Chebyshev polynomials can be efficiently computed, stored and evaluated for several values in all dimensions simultaneously.
Here, we only include the main definitions in <cit.> and the modifications needed to adapt the algorithm to the problem described in Section <ref>.
§.§ A review of multidimensional Chebyshev interpolation
The Chebyshev polynomial of degree n (see <cit.>) is given by
T_n(x)=cos(n arccos (x) ),
where 0≤arccos (x) ≤π.
From now on, variable x∈[-1,1] or x=(x_1,...,x_n)∈[-1,1]^n for the n-dimensional case.
Let N∈ℕ. The N+1 Chebyshev nodes {α^k}_k=0^N in interval [-1, 1] correspond to the extrema of T_n(x) and they are given by:
α^k=cos(π k/N), k=0,1,...,N.
If the function F(x̃) that we want to interpolate is defined in interval x̃∈[a,b], the Chebyshev nodes {α̃^k}_k=0^N in interval [a,b] are computed with the {α^k}_k=0^N nodes in [-1,1] and the change of variable given by formula
x̃=b-a/2x+b+a/2, x∈[-1,1].
Let F(x̃) be a continuous function defined in x̃∈ [a,b].
For N∈ℕ, let I_N F(x) be the N degree interpolant of function F(x̃) at the Chebyshev nodes, i.e. the polynomial which satisfies
I_N F(α^k)=F(α̃^k), k=0,1,...,N.
Polynomial I_N F(x) can be expressed as
I_N F(x)=∑^N_l=0p̂_lT_l(x), x∈[-1, 1],
where coefficients p̂_l are given by
p̂_l =1/N∑^N_k=0^”F(α̃^k)T_l(α^k), if l∈{0,N},
p̂_l =2/N∑^N_k=0^”F(α̃^k)T_l(α^k), if l∈{1,2,...,N-1},
and the double prime indicates that we halve the first and last elements.
Instead of using formula (<ref>), we will employ an efficient FFT-based algorithm which is presented in <cit.> or <cit.>. For the univariate case:
Algorithm C1v:
1. Define
z=[F(α̃^0),F(α̃^1),...,F(α̃^N-1),F(α̃^N),F(α̃^N-1),...,F(α̃^1)]^T
2. Compute
y=real(FFT(z))/(2N)
3. It holds that
p̂_0 = y(1), p̂_l = y(l+1)+y(2N-(l-1)) if 0<l<N, p̂_N = y(N+1).
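The following is a numpy sketch of Algorithm C1v (the Matlab indexing above is 1-based, numpy's is 0-based; function and variable names are ours):

import numpy as np

def cheb_nodes(a, b, N):
    # the N+1 Chebyshev extrema nodes mapped from [-1, 1] to [a, b]
    k = np.arange(N + 1)
    return 0.5 * ((b - a) * np.cos(np.pi * k / N) + (b + a))

def cheb_coeffs_1d(f):
    # coefficients p_hat from the samples f = F(nodes) via one FFT
    N = len(f) - 1
    z = np.concatenate([f, f[-2:0:-1]])        # step 1: even extension
    y = np.real(np.fft.fft(z)) / (2 * N)       # step 2
    p = np.empty(N + 1)                        # step 3
    p[0], p[N] = y[0], y[N]
    p[1:N] = y[1:N] + y[2 * N - 1:N:-1]        # y[l] + y[2N-l]
    return p

# quick check: T_2(x) = 2x^2 - 1 should give coefficients (0, 0, 1, 0, 0)
x = cheb_nodes(-1.0, 1.0, 4)
print(np.round(cheb_coeffs_1d(2 * x**2 - 1), 12))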
We also mention the algorithm presented in <cit.>, which allows one to compute the derivative of a Chebyshev interpolation polynomial efficiently.
If F(x̃) is a continuous function defined in x̃∈ [a,b] and
I_NF(x)=∑_l=0^Np̂_lT_l(x), x∈[-1,1]
is its Chebyshev interpolation polynomial, it holds that
(I_NF(x))'=2/(b-a)∑_l=0^N-1q̂_lT_l(x)
where for l=0,1,...,N-1:
q̂_l=(2/c_l)∑_ j=l+1, j+l odd ^N jp̂_j, where c_0 = 2 and c_l = 1 for l≥1.
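A numpy sketch of this computation, running the recurrence q_l = q_{l+2} + 2(l+1)p_{l+1} backwards in O(N) (equivalent to the closed form above; names are ours):

def cheb_deriv_coeffs(p, a, b):
    # coefficients of (I_N F)' on [a, b] from the coefficients p of I_N F
    N = len(p) - 1
    q = np.zeros(N + 1)                        # q_N = 0
    if N >= 1:
        q[N - 1] = 2 * N * p[N]
    for l in range(N - 2, -1, -1):
        q[l] = q[l + 2] + 2 * (l + 1) * p[l + 1]
    q[0] *= 0.5                                # the c_0 = 2 factor
    return (2.0 / (b - a)) * q[:N]

# check: d/dx T_2 = 4 T_1 on [-1, 1]
print(cheb_deriv_coeffs(np.array([0.0, 0.0, 1.0]), -1.0, 1.0))  # [0. 4.]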
Now we proceed to multidimensional interpolation.
Let x̃=(x̃_1,x̃_2,...,x̃_n ) and F̃(x̃) be a continuous function defined in x̃_j∈[a_j, b_j], j=1,2,...,n.
For N={N_1,N_2,...,N_n}∈ℕ^n, we define
L^N={l=(l_1,l_2,...,l_n) / 0≤ l_j≤ N_j, l_j∈ℕ, j=1,2,...,n}.
For j=1,2,...,n, let {α^k_j}_k=0^N_j be the N_j+1 Chebyshev nodes in [-1, 1] and {α̃^k_j}_k=0^N_j the corresponding N_j+1 Chebyshev nodes in [a_j, b_j].
We use the notation α̃^l=(α̃^l_1_1,α̃^l_2_2,...,α̃^l_n_n ) and α^l= (α^l_1_1, α^l_2_2,...,α^l_n_n ).
Let I_N F(x) be the n-dimensional interpolant of function F(x̃) at the Chebyshev nodes, i.e. the polynomial which satisfies
I_N F(α^l)=F(α̃^l), l∈ L^N.
Polynomial I_N F(x) can be expressed as
I_N F(x)=∑_l∈ L^Np̂_lT^l(x), x∈[-1, 1]^n,
where
T^l(x)=T_l_1(x_1)T_l_2(x_2) ... T_l_n(x_n).
and the coefficients p̂_l=p̂_(l_1,l_2,...,l_n)∈ℝ can be computed with the n-dimensional version of the Algorithm C1v presented before.
Algorithm Cnv:
Let Γ_(N_1+1)×...×(N_n+1) be a n-dimensional array such that
Γ(l_1+1,l_2+1,...,l_n+1)=F(α̃^l_1_1,α̃^l_2_2,...,α̃^l_n_n)
1. A_1=Γ.
2. For i=1 to n
2.1. {m_1,m_2,...,m_n}=size(A_i).
2.2. For j_2=1 to m_2, for j_3=1 to m_3, ..., for j_n=1 to m_n
B_i(:,j_2,j_3,...,j_n)=Algorithm C1v(A_i(:,j_2,j_3,...,j_n)).
2.3. A_i+1=permute(B_i,[2:n 1]).
3. p̂_l=A_n+1(l_1+1,l_2+1,...,l_n+1).
We remark that the FFT routine in Matlab admits multidimensional evaluation, so step 2.2 can be efficiently computed without loops.
Therefore, the polynomial coefficients are stored in a (N_1+1)×...×(N_n+1)-dimensional array A, where
A(l_1+1,l_2+1,...,l_n+1)=p̂_(l_1,l_2,...,l_n)
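A numpy sketch of Algorithm Cnv, reusing cheb_coeffs_1d from the sketch of Algorithm C1v above; the axis cycling mirrors step 2.3:

def cheb_coeffs_nd(G):
    # coefficient array p_hat from the samples G[k_1,...,k_n] = F(node grid)
    A = np.asarray(G, dtype=float)
    for _ in range(A.ndim):
        # 1-D FFT-based transform along the leading axis (step 2.2) ...
        A = np.apply_along_axis(cheb_coeffs_1d, 0, A)
        # ... then cycle the axes, the analogue of permute(B_i, [2:n 1])
        A = np.moveaxis(A, 0, -1)
    return A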
§.§ Evaluation of one N_u-dimensional polynomial
Suppose now that we have a Chebyshev interpolation polynomial I_N F(x), given by a (N_1+1)×...×(N_n+1)-dimensional array A, and we want to evaluate it in a set of points {b^1_j}_j=1^k_1 just in the first variable, i.e. we want to compute
{I_N F(b^1_j,u_2,u_3,...,u_n)}_j=1^k_1.
In Section 2 of <cit.> it is described how {(T_l_1(b^1_1),...,T_l_1(b^1_k_1))}_l_1=0^N_1 can be efficiently evaluated and stored in a (k_1,N_1+1)-dimensional array B such that
B(j,l)=T_l(b^1_j)
Afterwards, a standard matrix product has to be performed over all the other dimensions. We need to compute
B· A(:,i_2,...,i_n), i_s=1,...,N_s+1, s=2,...,n.
In recent versions of Matlab, this can be performed efficiently with the “pagemtimes” function. We can define
C=permute(pagemtimes(B,A),[2:n 1])
where the result is a (N_2+1)×...×(N_n+1)× k_1 dimensional array. The permutation is needed in order to evaluate further dimensions.
Array C corresponds to the coefficients of the interpolation polynomial I_N F(x) evaluated in the points {b^1_j}_j=1^k_1, i.e.
C(:,...,:,j)∼ I_N F(b^1_j,u_2,...,u_n), j=1,...,k_1.
If we want now to evaluate the polynomial in a set of points {b^2_j}_j=1^k_2 in the second variable, another set of points in the third variable..., we would proceed iteratively obtaining, at the end, a (k_1,...,k_n)-dimensional array D which contains the evaluation of the polynomial in every possible combination of the points of each variable, i.e.
D(j_1,j_2,...,j_n)=I_N F(b^1_j_1,b^2_j_2,...,b^n_j_n), j_s=1,...,k_s, s=1,...,n
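In numpy the same scheme can be sketched with a Chebyshev-Vandermonde matrix per dimension and a tensor contraction standing in for pagemtimes (names and structure are ours):

def cheb_vander(points, N):
    # B[j, l] = T_l(points[j]) via the three-term recurrence
    B = np.empty((len(points), N + 1))
    B[:, 0] = 1.0
    if N >= 1:
        B[:, 1] = points
    for l in range(2, N + 1):
        B[:, l] = 2 * points * B[:, l - 1] - B[:, l - 2]
    return B

def eval_on_grid(A, point_sets):
    # A: coefficient array; point_sets[i]: points in [-1, 1] for variable i
    for pts in point_sets:
        B = cheb_vander(np.asarray(pts), A.shape[0] - 1)
        A = np.tensordot(B, A, axes=([1], [0]))   # contract the leading axis
        A = np.moveaxis(A, 0, -1)                 # cycle axes as above
    return A                                      # D[j_1, ..., j_n]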
§.§ Evaluation of N_P different N_u-dimensional polynomials in different points
Suppose that we have N_P different multidimensional Chebyshev interpolation polynomials, where each one is given by a (N^u_1+1,...,N^u_J+1)-dimensional array A_j, j=1,...,N_P, as shown in Subsection <ref>.
They can all be stored in a single (N^u_1+1,...,N^u_J+1,N_P)-dimensional array A where
A(:,...,:,j)=A_j∼ I_N_u g_j(u), j=1,...,N_P
and g_j(u), j=1,...,N_P is each of the functions that has been interpolated.
In order to employ the algorithm of Subsection <ref> efficiently in our pollution problem, a small modification has to be made.
Suppose that we want to evaluate each polynomial in a different point in the first variable, i.e., given {b^1_j}_j=1^N_P we have to compute
{I_N_u g_j(b^1_j,u_2,u_3,...,u_n)}_j=1^N_P,
We remark that in Subsection <ref> we wanted to evaluate (in the first variable) one polynomial at a set of k_1 different points. Here we want to evaluate each polynomial g_j(b^1_j,u_2,u_3,...,u_J) at its own specific point b^1_j, j=1,...,N_P.
We build a 2-dimensional array B as defined in Subsection <ref> such that B(j,l)=T_l(b^1_j), and we define the following location index
aux1=[1:N_P:N_P(N^u_2+1)...(N^u_J+1)]
locind_1=aux1
for l=2:N_P
aux2=N_P(N^u_2+1)...(N^u_J+1)(l-1)+(l-1)
locind_1=[locind_1 (aux1+aux2)]
end
The evaluation
C=permute(pagemtimes(B,A),[2:J 1])
D=reshape(C(locind_1),[N^u_2+1 N^u_3+1 ... N^u_J+1 N_P])
gives a (N^u_2+1,N^u_3+1,...,N^u_J+1,N_P)-dimensional array D where
D(:,...,:,j)∼ I_N_u g_j(b^1_j,u_2,...,u_J), j=1,...,N_P
In a similar way, a location index locind_2 can be built to compute {I_N_u g_j(b^1_j,b^2_j,u_3,...,u_J)}_j=1^N_P for a second set of points {b^2_j}_j=1^N_P, and so on for evaluating the rest of the dimensions.
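In numpy the location-index bookkeeping can be condensed into a single einsum that keeps only the matching (point j, polynomial j) pairs; a sketch reusing cheb_vander from above:

def eval_each_at_own_point(A, b):
    # A: (N_1+1, ..., N_J+1, N_P) stack of coefficient arrays;
    # b[j]: point in [-1, 1] for polynomial j in its first variable
    B = cheb_vander(np.asarray(b), A.shape[0] - 1)    # (N_P, N_1+1)
    return np.einsum('jl,l...j->...j', B, A)          # (N_2+1,...,N_J+1, N_P)

Repeating this once per control dimension reproduces the iterative evaluation described above.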
§.§ Implementation of the algorithm
Step 0: Offline computations
Suppose that the J players are indexed by i=1,...,J.
Let N_p=(N^p_1,...,N^p_J)∈ℕ^J and N_u=(N^u_1,...,N^u_J)∈ℕ^J be two J-dimensional vectors such that N^p_i, N^u_i>0, i=1,...,J.
Vectors N_p and N_u will be respectively employed to define the discretization in the state space and in the control space.
Let us introduce two sufficiently large positive parameters P_M,U_M>0 and consider the intervals [0,P_M] and [0,U_M]. For each player i, the Chebyshev nodes {p̃^i_j}_j=0^N^p_i and {ũ^i_j}_j=0^N^u_i are given by
p̃^i_j =1/2[cos(π j/N^p_i)(P_M-0)+(P_M+0)], j=0,1,...,N^p_i,
ũ^i_j =1/2[cos(π j/N^u_i)(U_M-0)+(U_M+0)], j=0,1,...,N^u_i,
We consider the J-intervals
Ĩ_p =[0,P_M]×...×[0,P_M]⊂ℝ^J
Ĩ_u =[0,U_M]×...×[0,U_M]⊂ℝ^J
where we will numerically solve the pollution game. We define the sets of collocation points
P̃={(p̃^1_j_1,p̃^2_j_2,...,p̃^J_j_J), j_i=0,1,...,N^p_i, i=1,...,J}
Ũ={(ũ^1_j_1,ũ^2_j_2,...,ũ^J_j_J), j_i=0,1,...,N^u_i, i=1,...,J}
For simplicity of notation we believe that, prior to initializing the algorithm, it is better to perform the corresponding changes of variables to [-1,1] (as seen in Subsection <ref>).
Therefore, we will work directly with the J-intervals I_p=I_u=[-1,1]^J and the corresponding sets of Chebyshev collocation points
P={(p^1_j_1,p^2_j_2,...,p^J_j_J), j_i=0,1,...,N^p_i, i=1,...,J}
U={(u^1_j_1,u^2_j_2,...,u^J_j_J), j_i=0,1,...,N^u_i, i=1,...,J}
defined in [-1,1]^J. Once the algorithm is finished, we move back to the original intervals Ĩ_p and Ĩ_u.
Therefore N_P=|P|=∏^J_i=1(N^p_i+1) and P={p̅_j, j=1,...,N_P}.
For any player i∈{1,...,J}, we need to compute N_P different interpolation polynomials of {g^i_p̅(u), p̅∈P} such that ∀p̅∈P, it holds
g^i_p̅(u̅) =g_i(p̅,[u̅_i,u̅_-i]), ∀u̅∈U
We remark that these polynomials have to be computed just once and this can be efficiently done with Algorithm Cnv as seen in Subsection <ref>. The polynomials will be (N^u_1+1,...,N^u_J+1)-dimensional and, for the rest of the algorithm, we identify for any player i∈{1,...,J}
g^i_p̅_j(u) ∼{I_N_u g^i_j(u)}_j=1^N_P, j=1,...,N_P.
In the iterative algorithm, at any iteration r and for any player i∈{1,...,J}, we will need to evaluate these polynomials in
{I_N_u g^i_j(u^[r]_1(p̅_j),u^[r]_2(p̅_j),...,u^[r]_i-1(p̅_j),u^i_k,u^[r]_i+1(p̅_j),...,u^[r]_J(p̅_j))}_k=0^N^u_i, j=1,...,N_P
where we recall that {u^i_k, k=0,...,N^u_i} are the control Chebyshev nodes of player i.
Therefore, we can build a set of location indexes locind_j, j=1,...,J which allow to perform such computation efficiently as shown in Subsection <ref>.
We remark that this location indexes have to be computed just once and can be employed in any iteration [r] of the algorithm.
We initialize with some given V^N_p,[0]_h,i(p̅) and u^[0](p̅_j), p̅∈P.
For each player i=1,...,J, we compute the Chebyshev interpolation polynomial V^N_p,[0]_h,i(p) which interpolates V^N_p,[0]_h,i(p̅), p̅∈P, with Algorithm Cnv.
Step 1 and Step 2:
For every player i∈{1,..,J} we compute {g^i_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i, i.e.
{I_N_u g^i_j(u^[r]_1(p̅_j),u^[r]_2(p̅_j),...,u^[r]_i-1(p̅_j),u^i_k,u^[r]_i+1(p̅_j),...,u^[r]_J(p̅_j))}_k=0^N^u_i, j=1,...,N_P
with the technique described in Subsection <ref> and the location indexes precomputed in Step 0.
We define
{𝒢^i_p̅_j(u^i_k)}_k=0^N^u_i={g_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i, j=1,2,...,N_P
where we recall g_p̅_j(u)=[g^1_p̅_j(u),g^2_p̅_j(u),...,g^J_p̅_j(u)], j=1,2,...,N_P.
We point out that, in practice, it is not necessary to build the interpolation polynomial of {𝒢^i_p̅_j(u^i_k)}_k=0^N^u_i. For every p̅∈P, in order to build 𝒱^N_p,[r]_h,i,p̅(u) we just compute
V^N_p,[r]_h,i(p̅+h𝒢^i_p̅(u^i_k)), k=0,1,...,N^u_i
and then apply Algorithm C1v to the results obtained.
We want to remark that, working with arrays, all the operations can be implemented simultaneously for every p̅∈P.
Step 4:
For any player i∈{1,...,J}, in order to compute
u^[r+1]_i(p̅)=u≥ 0argmax{𝒱^N_p,[r]_h,i,p̅(u)}, p̅∈P.
we recommend employing Newton's method, for two reasons.
First, it is straightforward to implement Newton's method for all p̅∈P at the same time; second, the derivative of a Chebyshev interpolation polynomial can be obtained efficiently employing the algorithm presented in Subsection <ref>.
§.§ Parallelization
Since the evaluation over the N_P different state nodes is independent, the multidimensional arrays involved in the numerical algorithm described in Subsection <ref> can be split into smaller packages distributed to different cores (computer processing units).
In our case, let N_b and N_f be two natural numbers such that N_fN_b=N_P. For any array A(:,...,:,1...N_P), employing reshape function, we can redefine the array
A=reshape(A,[N_1,...,N_J,N_f,N_b])
For k=1,...,N_b, we define A'_k(:,...,:,1...N_f):=A(:,...,:,1...N_f,k).
The calculus involved in the numerical algorithm, for example the computation of {g^i_p̅_j(u^i_k,u^[r]_-i(p̅_j))}_k=0^N^u_i in Step 1, can be done independently in different cores employing Matlab parfor and arrays A'_k(:,...,:,1...N_f), k=1,...,N_b. The information can be reassembled when needed.
The precomputation of localization indexes also has to be adapted to the smaller arrays that we have just defined, but this is straightforward to do.
This parallelization procedure can also be applied working with just one core: if the array A(:,...,:,1...N_P) is very big, it can be split into smaller arrays as we have just described and processed with a standard for loop.
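In Python the same blocking scheme can be sketched with a process pool, with multiprocessing standing in for Matlab's parfor (the worker must be a top-level function so it can be pickled):

from multiprocessing import Pool

def process_in_blocks(A, N_f, N_b, worker):
    # split the node axis of A (..., N_P) into N_b blocks of N_f pages each
    blocks = np.split(A, N_b, axis=-1)            # each block: (..., N_f)
    with Pool() as pool:
        results = pool.map(worker, blocks)        # one block per core
    return np.concatenate(results, axis=-1)       # reassemble when needed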
The optimal (computing time) values for N_b and N_f depend on the values of N_p and N_u, but probably they also depend on the number of cores and the kind of processors of the computer employed.
For example, with the computer that we employed in our experiments, we ran a 3-player game with N^p_i=7 (N_P=512) and computed the computational time cost of the numerical solution for numbers of blocks N_b=1,2,...,2^9. The results are represented in Figure <ref>.
This experiment shows that it was neither optimal to compute each state node on a different core (full parallelization) nor to compute all the nodes at the same time on just one core (fully tensorized, without parallelization). The optimal computational time cost was attained “half way”, balancing the size of the arrays involved against the number of blocks (which depends on the size of the arrays). Similar results were obtained when the game was played with different numbers of players.
§ NUMERICAL RESULTS
We now repeat some of the numerical experiments performed in <cit.>. We compare the spline method employed in that paper with the Chebyshev method that we have described.
When the pollution game is played by 2 players we have explicit solutions, so an error vs computational time cost analysis can be performed. For 3 or more players we lack an explicit solution; we have obtained the same qualitative solutions as in <cit.>, but only a comparison of the computational time cost has been made.
Concerning the parallelization procedure, once we have the number of state nodes N_P, let {M_1,...,M_σ_0(N_P)} be all the natural divisors of N_P.
For each numerical experiment, all the possible combinations N_f=M_i and N_b=M_j such that N_fN_b=N_P have been tested. We point out that for all the experiments:
* Case N_f=N_P, N_b=1 (without parallelization and fully tensorized) is suboptimal.
* Case N_f=1, N_b=N_P (fully parallel) is suboptimal.
The optimal computational time cost is always attained at some value N_f=M_i with M_i∉{1,N_P}.
§.§ 2 players
We repeat Example 1 in <cit.>. Let
β_i=1, φ_i=1, A_i=0.5, c_i=0.5, i=1,2, K=[k_ij]=[ -1 1
1 -1]
The spatial configuration described by K means that players 1 and 2 share a common boundary and are isolated from outside.
We have computed the numerical solution for
* h∈{10^-2,10^-3,10^-4,10^-5},
* TOL∈{10^-2,10^-3,10^-4,10^-5,10^-6},
* N_i^p∈{2,4,8}, i=1,2.
Under the spatial configuration defined, both players are symmetric; therefore, the solutions of both players must coincide. In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method.
In order to analyse the performance, we study the numerical solution for the different values of N^p_i, TOL and h.
For the 2 players case, we have explicit solutions (see <cit.>), so we can compute the exact optimal policy u(x). For each experiment, we define the mean square error of the numerical solution by
error=1/N_P√(∑_x∈P(u^*(x)-u(x))^2)
where u^* is the numerical optimal policy obtained at the last iteration of the method in each experiment.
With the errors computed for all the experiments, we can plot the numerical error vs the computational time cost of each experiment and then retain the lower convex envelope of the resulting cloud of points.
The lower convex envelope indicates, for a desired error tolerance, the minimum time required to attain that error. The analysis is represented in Figure <ref>, for the spline (blue) and Chebyshev (red) methods.
The results in Figure <ref> show that the Chebyshev method is much more efficient than the spline method. On average, for 2 players and a similar prescribed error tolerance, the Chebyshev method requires 1/271 of the time of the spline method. The nodes of the lower convex envelope with the biggest errors (the two situated on the right side) correspond to N^p_i=2, the next node to N^p_i=4, and the node with the smallest error (left side) corresponds to N^p_i=8.
It is interesting that both methods present the same error behaviour (the slopes of the lower convex envelopes are similar), since Chebyshev interpolation usually has a better error convergence than spline interpolation. This is probably due to the fact that the objective function has a linear-quadratic specification, so both methods attain a similar error behaviour. It is possible that with non-polynomial objective specifications the Chebyshev method could also present a better behaviour.
§.§ 3 players
We now repeat Example 3 in <cit.>. The parameter values remain the same as in the previous experiment and the spatial configuration is given by
K=[k_ij]=[ -1 1 0
1 -2 0
0 1 -1]
This configuration means that Player 2 shares a boundary with both Players 1 and 3, Players 1 and 3 have no common boundary and all the countries are isolated from outside. Under this configuration, Players 1 and 3 are symmetric, so their strategies should coincide.
In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method. As expected, the optimal strategies and the pollution stocks of Players 1 and 3 coincide.
Unfortunately, for 3 or more players we lack an explicit solution. Nevertheless, we point out that, for the same values of h, TOL and N^p_i, the Chebyshev method outperforms the spline method in computational time cost.
In Figure <ref> we represent for the spline(blue) and Chebyshev(red) methods, the total number of spatial nodes (N^p_i+1)^3 vs the computational time cost for N^p=3,5,7, h=10^-3, TOL=10^-4. Other values for h and TOL were also tested, and the chosen ones are the fastest for the spline method.
For the same parameter values, the Chebyshev method requires, on average, 1/146 of the time of the spline method in order to obtain a numerical solution. This is not a complete performance analysis, since we lack explicit solutions and cannot measure the numerical error, but the results of the 2-player experiment, together with the fact that the qualitative solutions obtained with both methods are very similar, strongly suggest that the Chebyshev method outperforms the spline method.
§.§ 4 Players
We now repeat Example 4 in <cit.>. The parameter values remain the same as in the previous experiment and the spatial configuration is given by
K=[k_ij]=[ -1 1 0 0
1 -3 1 1
0 1 -2 1
0 1 1 -2 ]
This configuration means that Player 1 shares a frontier with Player 2, Player 2 shares a frontier with Players 1, 3 and 4, and Player 3 shares a boundary with Players 2 and 4. All the countries are isolated from outside. Under this configuration, Players 3 and 4 are “symmetric” since they share the same number of frontiers with other countries and, therefore, their strategies should coincide.
In Figure <ref> we represent the emission (left) and pollution (right) time paths obtained with the Chebyshev numerical method. As expected, the optimal strategies and the pollution stock of Players 3 and 4 coincide.
Concerning numerical performance, the results are similar to those of the experiment with 3 players. For the same values of h, TOL and N^p_i, the Chebyshev method outperforms the spline method in computational time cost.
In Figure <ref> we represent, for the spline(blue) and Chebyshev(red) methods, the total number of spatial nodes (N^p_i+1)^4 vs the computational time cost for N^p=3,5,7, h=10^-3, TOL=10^-4.
The Chebyshev method requires, in average, 1/100 of the time of the spline method in order to obtain a similar numerical solution.
As before, in the parallelization procedure, the optimal computational time cost is attained for a value N_b such that 1<N_b<8^4=N_P.
Finally, we would like to point out that other experiments in <cit.>, including different spatial specifications and/or cases where one of the regions is not isolated from outside, have also been carried out. So as not to overload the paper we have not included those results, but they are similar to the ones presented in this work.
§ CONCLUSIONS
We have presented a tensorial-parallel Chebyshev collocation method for a game theory problem, which has a fairly good computational cost behaviour. This is due to the fact that it combines parallelization with algorithms that allow, employing tensorization, the efficient evaluation of multidimensional Chebyshev polynomials.
We should mention that the localization indexes presented (see Subsection <ref>) are not unique. Other dimension orders could be considered.
In this paper, we have presented the main ideas of a Chebyshev based algorithm which can be adapted to other differential game problems. These techniques may help to improve the numerical computation of problems which are affected by the known “curse of dimensionality”, which appears when collocation methods are applied to problems with multiple dimensions.
Future work will be oriented in two different paths.
On one hand, in <cit.>, a Chebyshev-based reduced function basis interpolation method is also presented. That technique allows one to obtain the same numerical error with much less computational effort than a direct interpolation, such as the one that we have employed in this work. Since the “curse of dimensionality” is still present for a larger number of players and state nodes, it would be interesting to adapt the reduced basis method to this problem.
On the other hand, we would like to adapt and test the algorithm on more complex model specifications. For example, it could be considered that each region i is divided into n subregions, where player i controls the emissions in each of the different subregions. Incorporating wind and a nonlinear reaction term into the pollution dynamics is also interesting since, although the model is more computationally challenging, it is also closer to reality.
§.§ Funding
This research was supported by Junta de Castilla y León cofinanced by FSE-YEI (first author) and Junta de Castilla y León by project VA169P20 cofinanced by FEDER funds (second author).
§.§ Acknowledgments
The authors thank Javier de Frutos and Guiomar Martín-Herrán for stimulating discussion.
99
Ba Başar T., Zaccour G. (eds.), Handbook of Dynamic Game Theory, Springer, (2018).
Brito Brito P., The Dynamics of Growth and Distribution in a Spatially Heterogenous World, WP13/2004/DE/UECE, Technical University of Lisbon, 2004.
Brock Brock W., Xepapadeas A., Yannacopoulos A.N., Optimal control in space and time and the management of environmental resources, Annu. Rev. Resour. Econ. 6 (2014), 33-68.
Camacho Camacho C., Zou, B., Briani, M., On the dynamics of capital accumulation across space. Eur. J. Oper. Res.,186 (2008), 451-465.
Camacho2 Camacho C., Pérez-Barahona A., Land use dynamics and the environment, J. Econ. Dyn. Control 52 (2015), 96-118.
Canuto Canuto C., Hussaini M.Y., Quarteroni A., Zang T.A., Spectral methods. Fundamentals in Single Domains, Springer, Berlin, 2006.
deFrutos2 de Frutos J., Gatón V., Chebyshev reduced basis function applied to option valuation, Computational Management Science, 14(2017), 465-491.
deFrutos3 de Frutos J., Gatón V., A pseudospectral method for option pricing with transaction costs under exponential utility, Journal of Computational and Applied Mathematics, 294 (2021), 113541.
deFrutos1 de Frutos J., Marín-Herrán G., Spatial effects and strategic behaviour in a multiregional transboundary pollution dynamic game, Journal of Enviromental Economics and Management, 97 (2019), 182-207.
Dockner Dockner E.J., Long N.V., International pollution control: cooperative versus noncooperative strategies, J. Environ. Econ. Manag. 25 (1993), 13-29.
Fabbri Fabbri G., Ecological barriers and convergence: a note on geometry in spatial growth models. J. Econ. Theory, 162 (2016), 114-136.
Gab Gaß M., Glau K., Mahlstedt M., Mair M., Chebyshev interpolation for parametric option pricing, Finance and Stochastics, 22 (2018), 701-731.
Falcone Falcone M., Numerical methods for differential games based on partial differential equations, International Game Theory Review, Vol. 8, N.2 (2006), 231-272.
Jor Jørgensen S., Martín-Herrán G., Zaccour, G., Dynamic Games in the Economics and Management of Pollution. Environ. Model. Assess. 15 (2010), 433-467.
Johnson Johnson P.A., Numerical Solution methods for differential game problems (MS Thesis), Massachusetts Institute of Technology, 2009.
Nikoo Nikooeinejad Z., Dekavakhalafi A., Heydari M., A numerical solution of open-loop Nash equilibrium in nonlinear differential games based on Chebyshev pseudospectral method, Journal of Computational and Applied Mathematics, 300 (2016), 369-384.
Ortiz Ortiz-Gracia L. and Oosterlee C. W., A highly efficient Shannon wavelet inverse Fourier technique for pricing European options, SIAM Journal on Scientific Computing, 38 (2016), No. 1, B118-B143.
Rivlin Rivlin T.J., Chebyshev Polynomials: From Approximation Theory to Algebra and Number Theory, Wiley, New York, (1990) MR1060735(92a:41016)
Ruijter Ruijter M. J. and Oosterlee C. W., A Fourier Cosine Method for an Efficient Computation of Solutions to BSDEs, SIAM Journal on Scientific Computing, 37 (2015), No. 2, A859-A889.
Ruijter2 Ruijter M. J., Versteegh M. and Oosterlee C. W., On the application of spectral filters in a Fourier option pricing technique, Journal of Computational Finance, 19 (2015), No. 1, 75-106.
Van Van der Ploeg F., De Zeeuw A.J., International aspects of pollution control, Environ. Resour. Econ. 2 (1992), 117-139.
Xepa Xepapadeas, A., The spatial dimension in environmental and resource economics. Environ. Dev. Econ. 15 (2010), 747-758.
Zhangb Zhang B. and Oosterlee C. W., Pricing of early-exercise Asian options under Lévy processes based on Fouirer cosine expansions, Appl. Numer. Math. 78 (2014), 14-30.
Zhang Zhang L., Zhou Z., Spectral Galerkin approximation of optimal control problem governed by Riesz fractional differential equation, Appl. Numer. Math. 143 (2019), 247-262.
|
http://arxiv.org/abs/2307.03884v1 | 20230708031428 | Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization | [
"Dheeraj Peddireddy",
"Utkarsh Priyam",
"Vaneet Aggarwal"
] | quant-ph | [
"quant-ph",
"cs.LG"
] |
Purdue University, West Lafayette IN 47906
{dpeddire, upriyam, vaneet}@purdue.edu
Variational quantum algorithms, especially the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE), have established their potential to provide a computational advantage in the realm of combinatorial optimization. However, these algorithms suffer from classically intractable gradients, limiting their scalability. This work addresses the scalability challenge for VQE by proposing a classical gradient computation method which utilizes the parameter-shift rule but computes the expected values from the circuits using a tensor ring approximation. The parametrized gates of the circuit transform the tensor ring by contracting the gate matrix along the free edges of the ring. While single-qubit gates do not alter the ring structure, the state transformations from two-qubit rotations are evaluated by truncating the singular values, thereby preserving the structure of the tensor ring and reducing the computational complexity. This variation of the matrix product state approximation grows linearly in the number of qubits and the number of two-qubit gates, as opposed to the exponential growth of exact classical simulation, allowing for a faster evaluation of the gradients on classical simulators.
Noisy Tensor Ring approximation for computing gradients of Variational Quantum Eigensolver for Combinatorial Optimization
Dheeraj Peddireddy, Utkarsh Priyam, and Vaneet Aggarwal
==========================================================================================================================
§ INTRODUCTION
Quantum computing has long been touted for its potential to solve certain complex problems much more efficiently than classical computers <cit.>. Although the fruition of the idea lies further in the future, researchers have been exploring the real-world applicability of the current generation of quantum computers. Most quantum processors in their current state are severely limited by the number of qubits, noise levels and inefficient error-mitigation techniques, calling for a class of algorithms robust to noise and error. Variational Quantum Algorithms (VQA) have been studied widely for their resilience to decoherence noise, making them an ideal choice of algorithms for various applications on gate-based Noisy Intermediate-Scale Quantum (NISQ) devices. Two such algorithms of prominence, the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), evaluate the expected energy of a state resulting from a short parameterized circuit (frequently referred to as an ansatz) with respect to an observable defined by a given problem. A classical outer-loop optimizer tries to find the optimal circuit parameters that minimize the expected energy. While QAOA implements a fixed ansatz inspired by adiabatic quantum computing, VQE utilizes a variable ansatz, offering flexibility to engineer the ansatz based on the hardware constraints and the problem at hand. This work chooses to focus on VQE, inspired by the recent advances of variable ansätze in quantum machine learning <cit.>. VQE, initially developed by Peruzzo et al. <cit.>, has seen a number of applications in condensed matter physics <cit.>, quantum chemistry <cit.> and quantum mechanics <cit.>.
Optimization is one of the frontrunners among the applications being studied for potential quantum advantage from VQE and adjacent algorithms <cit.>. Combinatorial optimization is a class of problems of practical relevance with applications spanning transportation, logistics, manufacturing, etc. Studies have indicated that the exponentially growing state space and quantum entanglement can improve the chances of finding the right solution with a potential speedup <cit.>. Even minor improvements to optimization problems from quantum algorithms can potentially have a large impact on society. In the context of VQE, a multi-qubit Hamiltonian is prepared whose ground state encodes the solution of the optimization problem, and the algorithm optimizes the circuit parameters to minimize the energy of the Hamiltonian. The algorithm has been extended to use filtering operators <cit.> and iterative approaches <cit.> to improve the performance on combinatorial optimization. The approach has also been validated on several practical applications of optimization (e.g., Job Shop Scheduling <cit.>, Vehicle Routing <cit.>).
Despite promising prospects, VQAs, and more broadly quantum circuits, are hindered by a plethora of problems in the current era of quantum computing, the primary impediments being the limited number of qubits, the physical cost of implementing quantum circuits and decoherence noise. Hybrid algorithms also suffer from the asymmetric scaling of quantum and classical resources, with the circuit execution scaling linearly in the number of qubits and circuit depth while the classical gradient evaluation scales exponentially. Note that the gradients of the variational parameters in VQAs were evaluated using either automatic or numeric differentiation until Schuld et al. <cit.> formalized the notion of gradients computed on quantum hardware, popularized as the parameter-shift rule. This method estimates the gradients by computing the energies of the wave functions generated by identical circuits in which the parameter whose gradient is to be estimated is shifted by certain values. The parameter-shift rule alleviates the imbalance in scalability, albeit at the cost of executing a much larger number of quantum circuits than the other methods. Given the inconsistency in evaluating expected values from circuits due to decoherence and inefficient error mitigation, on top of the statistical noise from measurement, a larger number of circuits can lead to inaccurate results.
In order to address the issues of scalability, accuracy and cost of execution, this manuscript proposes a classically simulated quantum circuit execution method that approximates the initial and intermediate quantum states using a low-rank tensor ring (TR) to compute the expected energy, which in turn is used to approximate the gradients of a VQE. Built upon the Matrix Product State (MPS) approximation of many-body quantum states <cit.>, the tensor ring VQE (TR-VQE) formulates a combinatorial optimization in the same way as a naive VQE, using the parameter-shift rule to compute the gradients. However, the expected values of the shifted circuits used to compute the gradients are evaluated by approximating the initial quantum state with a TR as opposed to an MPS, where the single-qubit and two-qubit gates corresponding to the circuit ansatz are evaluated using tensor contractions. It must be noted that while a single-qubit gate does not change the structure of the tensor network, a two-qubit gate contracted with the two corresponding tensors can alter the network by increasing the tensor size or its rank. The proposed method retains the tensor ring structure and rank by truncated singular value decomposition of the higher-order tensor resulting from the application of a two-qubit gate. The consistent low-rank structure allows for an exponential speedup with respect to the number of qubits and circuit depth, compared to the MPS approximation and the brute-force approach with the full state vector. This truncation, however, induces a noise in the circuit executions similar to the decoherence in actual quantum computers. Therefore, classically simulating a noisy quantum computer instead of a perfect quantum computer only scales linearly in the number of qubits and circuit depth <cit.>. The MPS representation tries to simulate ideal quantum computation without noise, but the literature suggests that the noise in current-generation quantum computers limits the amount of entanglement that can be built into a quantum state. Given the computational cost of simulating ideal quantum computers, this may not be an ideal prospect, since such simulations are not representative of noisy quantum computations. Moreover, given the robustness of VQAs to noise, this kind of noisy simulation, with its benefits of scalability, can be specifically useful for machine learning and optimization. Furthermore, Liu et al. <cit.> highlight that the presence of noise in VQAs can naturally help the optimizer avoid saddle points. We posit that this advantage extends to TR-VQE as well, due to the induced noise. The proposed method is validated on multiple instances of the max-cut problem, compared against F-VQE <cit.> and a naive VQE using the parameter-shift rule. The expected values of the circuits for the benchmarks are computed using simulations implementing a non-noisy MPS approximation, highlighting the improved performance of the noisy TR approximation over the MPS approximation.
The rest of the manuscript is organized as follows: Section <ref> recounts the existing literature related to the use of tensor networks in approximating quantum circuits and applications in QML. Section <ref> formulates the notion of VQE for solving the maximum cut problem introduced in Section <ref>. Section <ref> discusses the proposed method used to compute the gradients of a variational quantum circuit using the TR approximation of a quantum state, and Section <ref> addresses the complexity analysis of the proposed method. The numerical simulations are explained in Section <ref>, followed by a discussion on limitations and future directions in Section <ref>.
§.§ Related Work
Since its inception, the tensor network approach has been much more widely explored in the context of classical simulation of quantum computations, compared to the brute-force statevector simulation
or other graphical and distributed methods <cit.>. Matrix Product States especially have been widely regarded for their ability to efficiently represent moderately entangled quantum many-body states <cit.>. The idea has been further extended to techniques that efficiently simulate quantum circuits <cit.> by contracting tensor networks at a fraction of the cost of the statevector simulation, which holds the full 2^N sized vector. Building upon this literature, several variations have emerged for specific cases, such as Projected Entangled Pair States (PEPS) for two-dimensional circuits <cit.>, Tree Tensor Networks (TTN) for circuits with tree-like connectivity <cit.>, and the Multi-scale Entanglement Renormalization Ansatz (MERA) <cit.>.
Note that the naive MPS-based circuit simulation (which will be referred to as the non-noisy MPS approximation in this manuscript), as formulated in <cit.> and widely implemented across quantum computing platforms like Qiskit, does not efficiently encode circular entanglement between the first and last qubits. Further, any application of a two-qubit gate contraction results in an increased tensor size, which in turn increases the computational complexity as the number of two-qubit gates in the circuit grows. To circumvent this shortcoming, Zhou et al. <cit.> proposed a truncated MPS approximation to simulate noisy quantum computers, which demonstrates a linear complexity in the number of qubits and circuit depth.
The noisy simulation addresses the issue of increasing tensor size by approximating the larger tensor that results from the application of a two-qubit gate with tensors of smaller size. The higher-order tensor is decomposed into two lower-order tensors by truncated singular value decomposition. This approximation preserves the tensor sizes after the application of each gate, unlike previous iterations of MPS-based simulation.
A number of quantum-inspired tensor network methods have been explored in the machine learning literature for supervised learning. Huggins et al. <cit.> implement MPS and Tree Tensor Network models to solve binary classification. Other tensor-network-based methods using PEPS and MPS were demonstrated to be effective in image classification tasks <cit.>. The aforementioned literature mostly explores quantum-inspired classical machine learning techniques, but very few works have probed the utility of tensor networks in augmenting quantum machine learning techniques. Peddireddy et al. <cit.> extend the singular value thresholding method from Zhou et al. <cit.> to tensor rings implemented with variational quantum classifiers, demonstrating scalability and improved performance over the non-noisy MPS approximation. Tensor rings also encode circular entanglement more efficiently than MPS due to the ring structure. While Zhou et al. <cit.> evaluate the approximated expectations using a noisy MPS representation, they do not explore extending it to computing gradients of variational circuits. Therefore, the application of noisy circuit simulation to scale the classical optimization loop of VQE is still an open problem. Furthermore, extending this approximation from MPS to tensor rings can also improve representability. This work builds upon <cit.> and <cit.> by adapting the noisy tensor ring representation to compute approximate gradients of the parameters of a variational quantum eigensolver using the parameter-shift rule. Although the proposed TR-based representation computes less accurate gradients than non-noisy MPS-based representations, owing to the additional information removed in the form of truncated singular values, the TR-based approach scales much more efficiently.
§ PROBLEM SETUP
§.§ Max-Cut Optimization Problem
This section briefly introduces the maximum cut (Max-Cut) problem and its mathematical formulation in the context of quantum computers. Max-Cut is an NP-hard binary optimization problem with a history of applications in statistical physics, VLSI design, and clustering. Given an undirected graph G = (V,E), with V and E representing the nodes and edges of the graph, the problem seeks a partition of the nodes into two subsets that maximizes the total weight of the edges cut by the partition.
The mathematical definition follows the QUBO formulation <cit.>: consider a graph of n nodes with edge weights w_ij for (i,j) ∈ E. The nodes of the graph are cut into two subgroups labelled +1 and -1. The problem maximizes the objective function C(x), given by the sum of the weights of the edges connecting nodes in +1 to nodes in -1, which assumes the form:
C(x) = ∑_i,j w_ij x_i (1 - x_j)
where x ∈{0, 1}^n and (i,j) ∈ E. The bitstring x corresponds to an instance of the grouping scheme, where x_i = 0 or 1 represents the i-th node being assigned to the subgroup +1 or -1, respectively. In order to solve the given objective function on a quantum computer, we construct the corresponding Ising Hamiltonian <cit.> by substituting x_i with its matrix transformation (I - Z_i)/2, where Z_i is the Pauli Z operator acting on qubit i and I is the identity matrix:
C(x) = ∑_i,j1/4 w_ij (I - Z_i) (I + Z_j)
C(x) = 1/2∑_i<j w_ij - 1/2∑_i<j w_ij Z_i Z_j
Essentially, maximizing the objective of the given optimization problem is equivalent to minimizing the energy of the Ising Hamiltonian given by:
ℋ = ∑_i,j w_i,j Z_i Z_j
whose ground state corresponds to the solution of the optimization. The full Hamiltonian ℋ∈ℂ^2^n × 2^n is never constructed explicitly but is represented by a combination of the Pauli Z operators.
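As a quick sanity check of this mapping, the following self-contained sketch (ours, not the authors'; the toy graph and weights are invented for illustration) brute-forces a 4-node instance and confirms that the maximizer of C(x) is exactly the ground state of the Ising Hamiltonian above.

```python
import itertools
import numpy as np

edges = {(0, 1): 2.0, (1, 2): 1.0, (2, 3): 3.0, (0, 3): 1.0}  # w_ij for (i,j) in E
n = 4

def cut_value(x):
    # C(x) = sum_{ij} w_ij x_i (1 - x_j), counting both orientations of each edge
    return sum(w * (x[i] * (1 - x[j]) + x[j] * (1 - x[i]))
               for (i, j), w in edges.items())

def ising_energy(x):
    # diagonal of H = sum_{ij} w_ij Z_i Z_j on basis state x, via z_i = 1 - 2 x_i
    z = 1 - 2 * np.asarray(x)
    return sum(w * z[i] * z[j] for (i, j), w in edges.items())

states = list(itertools.product([0, 1], repeat=n))
best = max(states, key=cut_value)
assert np.isclose(ising_energy(best), min(ising_energy(s) for s in states))
print(best, cut_value(best), ising_energy(best))   # e.g. (0, 1, 0, 1) 7.0 -7.0
```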
§.§ Variational Quantum Eigensolver
VQE is one of the algorithms that utilizes parameterized quantum circuits to find an approximate solution of combinatorial optimization problems. Unlike QAOA, VQE does not enforce constraints on the circuit ansatz and can therefore be tailored to the hardware it is implemented on. The optimization problem is first translated into a qubit Hamiltonian ℋ whose eigenvalues correspond to the costs of candidate solutions, with the ground state associated with the optimal solution of the problem. A quantum circuit of parameterized unitary rotations, denoted U(θ), is applied to an initial state |ψ_0⟩ (generally chosen as the basis state |0⟩^⊗ n), resulting in a trial wavefunction.
|ψ(θ)⟩ = U(θ)|ψ_0⟩
Here, U(θ) represents a chosen ansatz U with variational parameters θ. The energy landscape of the Hamiltonian can be traversed with this wavefunction to estimate the expected energy. We write H(θ) for the expectation value of the observable Hamiltonian ℋ in the state |ψ(θ)⟩.
H(θ) = ⟨ψ(θ)|ℋ|ψ(θ)⟩
The algorithm then updates the variational parameters of the circuit with an outer-loop optimizer using gradient descent or related methods. The process is repeated until a sufficiently low energy is reached. The quality of the solution at the t-th iteration is evaluated by the approximation ratio, defined as follows:
α = M-H(θ_t)/M-m
where M represents the maximum possible Hamiltonian value and m the minimum. In other words, α=1 corresponds to the optimal solution and α=0 to making no cuts.
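For concreteness, the metric can be computed as below (illustrative numbers only; M and m are the extreme eigenvalues of ℋ, obtained exactly for small instances or bounded otherwise).

```python
def approximation_ratio(H_t, M, m):
    # alpha = (M - H(theta_t)) / (M - m): 1 at the optimum, 0 for no cuts
    return (M - H_t) / (M - m)

print(approximation_ratio(-3.0, 7.0, -7.0))  # 0.714...
```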
Most of the variational quantum algorithms including VQE are implemented as hybrid models that compute the expected value of the observable on a quantum computer while calculating gradients and updating the weights on a classical computer. The fundamental mechanics of the VQE algorithm is illustrated in Figure <ref>. Following the parameter shift rule <cit.>, when the variational parameters are components of a single qubit rotation gate, the gradient takes the following form:
∂ H(θ)/∂θ^i = 1/2[H(θ + π/2 1_i) - H(θ - π/2 1_i)]
For the choice of ansatz, we use a circuit comprising only CX (CNOT) gates and single-qubit rotation gates, which form a universal gate set, thus simplifying the gradients to the closed form given in Equation <ref>, where θ^i is the i-th element of θ, H(θ) is the energy of the Hamiltonian ℋ with respect to the wavefunction generated by the circuit U(θ), and 1_i is a one-hot vector with 1 in the i-th position.
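A hedged numpy sketch of this rule: `energy_fn` is a stand-in (our name) for whatever routine returns H(θ), e.g. a statevector simulator or the tensor ring procedure of Algorithm <ref> below.

```python
import numpy as np

def parameter_shift_grad(energy_fn, theta):
    # One gradient component per parameter, two shifted evaluations each;
    # valid when every parameter enters as a single-qubit Pauli-rotation angle.
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = 0.5 * (energy_fn(theta + shift) - energy_fn(theta - shift))
    return grad
```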
§ METHODOLOGY
§.§ Computing gradients using Tensor Rings
Since the gradients of VQE can be computed by implementing quantum circuits, it is crucial to carry out those circuits efficiently. Although the parameter-shift method is faster than automatic differentiation, it requires a quantum processor to run the same ansatz at three different parameter settings numerous times to arrive at the gradients (more discussion is provided in Section <ref>). This can be an impediment given the limited availability of quantum computers and the cost of each run. It is therefore essential to study the utility of classical simulation of quantum circuits in assisting the optimization procedure.
Tensor networks have been shown to be effective in approximating quantum many-body systems and are thus a strong contender among methods for efficiently simulating quantum circuits. A tensor network is easily understood via Penrose diagrams or tensor network diagrams, in which each node of a graph represents a tensor. A tensor is a multidimensional array whose order denotes its number of dimensions or edges. A popular approximation strategy for quantum systems involves Matrix Product States (MPS), or Tensor Trains (TT), a class of tensor networks that represent a higher-order tensor as a chain of order-3 tensors (see Figure <ref>). This representation has the advantage of topological similarity to a multi-qubit system: each tensor corresponds to a single qubit, and the contractions between tensors encode the entanglement between the qubits. However, TTs are limited in flexibility and representation ability by the constraint on their border ranks. Since the border ranks are much lower than the inner ranks, this representation may not be optimal for some quantum systems. Moreover, an optimal TT representation depends strongly on the order of the products, restricting the choice of ansatz. Note that the border-rank constraints present the same hindrances in applying TTs to classical datasets. To ameliorate these issues, researchers in classical machine learning have adopted Tensor Rings (TR) to represent data <cit.>. TR structures relax the rank constraints on the border tensors, increasing the expressibility of the tensors. TR decomposition multiplies the tensors circularly, making the representation invariant to cyclic permutations of the multiplicative order. A notable advantage of the TR representation of quantum states is flexibility in the choice of ansatz. To illustrate, consider a circuit similar to the one shown in Figure <ref>, in which entanglement is introduced between the first and last qubits by a CX gate between them. TR representations are a better fit for encoding this kind of cyclic entanglement, thereby enlarging the set of candidate ansatz for the problem.
A quantum state |ψ⟩∈ℂ^2^N can be approximated by a tensor ring of N tensors (one per qubit), multiplied circularly, with the n-th tensor denoted τ(n).
|ψ⟩ = ∑_i_1 … i_N∑_r_1 … r_Nτ(1)_r_N r_1^i_1τ(2)_r_1 r_2^i_2…τ(N)_r_N-1 r_N^i_N|i_1 i_2 … i_N⟩
Here, the free indices i_n ∈{0, 1} span the 2^N-dimensional Hilbert space of the quantum state, whereas the r_n are the bond indices (indices connecting the tensors) with rank χ_n, which determines the quality of the approximation for entangled states, i.e., higher values of χ_n better represent strongly entangled states. The rank of the given tensor representation of |ψ⟩ is denoted (χ_1, χ_2, … , χ_N). Throughout the manuscript we choose χ_n = χ for all n, reducing the number of hyperparameters. The choice of χ, hereafter referred to as the tensor ring bond, significantly determines the representation ability, and therefore the performance, of the algorithm for a given problem. Each tensor in the proposed TR representation is a third-order tensor of dimension χ×χ× 2. The exponential reduction in storage complexity is apparent: whereas a quantum state requires 2^N parameters, its TR approximation requires only 2Nχ^2. The approximation of the typical VQA initialization |0⟩^⊗N is a tensor ring whose tensors, each of dimension χ×χ× 2, take the value 1 at index (1,1,1) and 0 elsewhere, denoted 1_(1,1,1). For a different initialization, constructing an approximation may not be as straightforward, but efficient algorithms for TR decomposition have been studied at length in <cit.>.
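The statements above are easy to verify numerically. The sketch below (ours; the exponential-cost reconstruction is meant only for small sanity checks) builds the 1_(1,1,1) initialization and contracts the ring back into a full statevector.

```python
import numpy as np

def tr_zero_state(n_qubits, chi):
    # every site tensor is chi x chi x 2 with a single 1 at index (1,1,1)
    site = np.zeros((chi, chi, 2))
    site[0, 0, 0] = 1.0
    return [site.copy() for _ in range(n_qubits)]

def tr_to_statevector(tensors):
    # psi[i1..iN] = Tr[ tau(1)^{i1} ... tau(N)^{iN} ], traced over the bonds
    acc = tensors[0]                                  # (r_N, r_1, i_1)
    for t in tensors[1:]:
        acc = np.einsum('abI,bci->acIi', acc, t)      # absorb the next site
        acc = acc.reshape(acc.shape[0], acc.shape[1], -1)
    return np.einsum('aaI->I', acc)                   # close the ring

psi = tr_to_statevector(tr_zero_state(3, chi=4))
assert np.isclose(psi[0], 1.0) and np.isclose(np.linalg.norm(psi), 1.0)
```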
While a TR can represent a quantum state, it must also be transformed by parameterized rotations in order to function as specified in VQAs. Given the restriction to single-qubit gates and CX gates, which simplifies the parameter-shift rule, it suffices to study the TR transformations corresponding to this gate set. A single-qubit unitary is a (2 × 2) matrix, i.e., a 2nd-order tensor. The associated matrix multiplication is implemented by contracting the unitary tensor along the free edge of the tensor corresponding to the qubit, as specified in the following equation:
τ'(n)_r_n-1 r_n^i'_n = ∑_i_nU_i'_n i_nτ(n)_r_n-1 r_n^i_n
U_i'_n i_n is the 2nd-order tensor with indices i'_n and i_n corresponding to the unitary matrix acting on the n-th qubit; it is contracted along the edge i_n with the n-th tensor τ(n), which spans the indices r_n-1, r_n and i_n, resulting in the new tensor τ'(n)_r_n-1 r_n^i'_n. Note that the transformation associated with a single-qubit rotation (illustrated in Fig. <ref>) does not alter the structure of the tensor ring, preserving the storage complexity.
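In code, this contraction is a single einsum that leaves the ring structure untouched (a sketch under the same (r_left, r_right, i) conventions as above; the R_y example is illustrative):

```python
import numpy as np

def apply_1q(tensors, U, n):
    # contract the 2x2 unitary into site n along the physical index
    tensors[n] = np.einsum('ij,abj->abi', U, tensors[n])
    return tensors

theta = 0.3   # an arbitrary angle
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
# tensors = apply_1q(tensors, Ry, n=0)
```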
Two-qubit rotations such as CX, however, can change the tensor ring structure and increase the storage complexity. To alleviate this, we apply truncated singular value decomposition to the enlarged tensor, breaking it down into two tensors of the original smaller size. Say a two-qubit gate U ∈ℂ^4×4 is to be applied to adjacent qubits m and n (including the circular entanglement between the last and first qubits). We begin by contracting the two tensors τ(m)_r_m-1 r_m^i_m and τ(n)_r_n-1 r_n^i_n along their shared index r_m = r_n-1 to compute a new tensor:
M_r_m-1 r_n^i_m i_n = ∑_r_mτ(m)_r_m-1 r_m^i_mτ(n)_r_n-1 r_n^i_n
The two-qubit gate U is then reshaped into the tensor U_i'_m i'_n i_m i_n and multiplied with the tensor M_r_m-1 r_n^i_m i_n along the shared edges:
(τ')_r_m-1 r_n^i'_m i'_n = ∑_i_m i_n U_i'_m i'_n i_m i_n M_r_m-1 r_n^i_m i_n
The resultant tensor is reshaped into a matrix of shape (i'_m × r_m-1) × (i'_n× r_n), whose singular value decomposition is performed as follows:
(τ')_i'_m × r_m-1^i'_n× r_n = ∑_r_m X_r_m-1 r_m^i'_m S_r_m Y_r_n-1 r_n^i'_n
where the orthogonal vectors of τ' populate the matrices X and Y, and S_r_m is a diagonal matrix of singular values. Since we assume a constant TR bond r_m = χ and the free indices i have dimension 2 (they span the quantum state), τ' has 2χ singular values in this case. S_r_m is truncated to a new diagonal matrix S'_r_m retaining only the largest χ values. We truncate X and Y accordingly, keeping only the orthogonal vectors corresponding to the remaining singular values. We compute products of the matrices X, Y and S' as follows to form the new tensors at sites m and n of the tensor ring. Note that while this method only works for two-qubit gates acting on adjacent qubits, it extends to a generic circuit via SWAP gates.
τ'(m)_r_m-1 r_m^i'_m = X_r_m-1 r_m^i'_m S'_r_m
τ'(n)_r_n-1 r_n^i'_n = Y_r_n-1 r_n^i'_n
Following this procedure, the resulting tensor ring retains the same structure and dimensionality as before, preserving the storage complexity after each application of a two-qubit rotation. Note that the specified operations scale at worst as O(χ^3); without this approximation, the dimensionality of the tensor network grows exponentially in the number of two-qubit rotations, i.e., the depth of the circuit, and so does the computational complexity. The stages of the two-qubit rotation procedure on a TR are illustrated in Figure <ref>.
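A compact sketch of the whole two-qubit step, under our (r_left, r_right, i) index convention and a uniform bond χ; this is our reading of the procedure, not the authors' code:

```python
import numpy as np

def apply_2q(tensors, U4, m):
    # gate U4 (4x4, rows ordered as i'_m i'_n) on adjacent sites m and m+1 (mod N)
    n = (m + 1) % len(tensors)
    chi = tensors[m].shape[0]
    theta = np.einsum('abi,bcj->acij', tensors[m], tensors[n])   # contract shared bond
    U = U4.reshape(2, 2, 2, 2)                                   # U_{i'_m i'_n, i_m i_n}
    theta = np.einsum('IJij,acij->acIJ', U, theta)               # apply the gate
    M = theta.transpose(2, 0, 3, 1).reshape(2 * chi, 2 * chi)    # (i'_m r_{m-1}) x (i'_n r_n)
    X, S, Yh = np.linalg.svd(M, full_matrices=False)
    X, S, Yh = X[:, :chi], S[:chi], Yh[:chi, :]                  # keep chi largest values
    tensors[m] = (X * S).reshape(2, chi, chi).transpose(1, 2, 0)
    tensors[n] = Yh.reshape(chi, 2, chi).transpose(0, 2, 1)
    return tensors
```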
Given an ansatz chosen for a variational algorithm (under the assumption of a circuit built only from parameterized single-qubit gates and CX gates), it can be represented as a set of gates U ordered by their position in the circuit, i.e., a gate applied first to the quantum state is placed at the beginning of the set, with the single-qubit gates parameterized by θ. The final quantum state produced by the circuit is approximated by a tensor ring initialized as 1_(1,1,1) and transformed by each gate in U following the procedure of the preceding paragraphs. To compute the expected energy with respect to the final quantum state, the Hamiltonian, composed of Pauli matrices, is decomposed into a linear sum and the expectation of each unitary component is evaluated:
⟨ψ(θ)|ℋ|ψ(θ)⟩ = ∑_i,j w_i,j⟨ψ(θ)|Z_iZ_j|ψ(θ)⟩
We compute the expected energy of a component Z_pZ_q in the TR representation by applying a single-qubit Pauli Z gate at sites p and q and contracting the result with the ring before the Z transformations along the edges that span the quantum Hilbert space (see Fig. <ref>).
τ'(θ)_i_1…,i'_p,…,i'_q,… i_N = ∑_i_p, i_q Z_i_p^i'_p Z_i_q^i'_qτ(θ)_i_1…,i_p,…,i_q,… i_N
⟨ψ(θ)|Z_p Z_q|ψ(θ)⟩ = ∑_i_1,i_2,… i_Nτ'(θ)_i_1, i_2 … i_Nτ(θ)_i_1, i_2 … i_N
In the equations above, τ(θ) represents the final state produced by the ansatz U parameterized by θ, approximated by a TR, and τ'(θ) is produced by the Pauli Z transformations of the final state. Note that the indices i'_p and i'_q in τ'(θ) have been renamed to i_p and i_q for simplicity. When computing the expected value, the order of the contractions is crucial to the computational complexity, but it has been established <cit.> that the expectation can be computed effectively in O(Nχ^3) steps. The full procedure for computing the expected value is presented in compact form in Algorithm <ref>. We use this algorithm to evaluate the gradients of the variational quantum eigensolver by computing the expected energies of the two circuits with shifted parameters, as shown in Algorithm <ref>. The gradients are then used to update the variational parameters in the same manner as in naive VQE.
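A minimal version of the expectation step is sketched below (ours). It renormalizes by ⟨ψ|ψ⟩, since the SVD truncation need not preserve the norm; note that the naive chain of χ²×χ² products used here costs O(Nχ^6), whereas the O(Nχ^3) figure cited above requires a more careful contraction ordering.

```python
import numpy as np

def ring_expectation(tensors, ops):
    # Tr over the doubled ring with transfer matrices T_k = sum_i conj(t)^i (x) (op t)^i
    E = None
    for t, op in zip(tensors, ops):
        tk = np.einsum('ij,abj->abi', op, t)
        T = np.einsum('abi,cdi->acbd', t.conj(), tk)
        T = T.reshape(t.shape[0] ** 2, -1)
        E = T if E is None else E @ T
    return np.trace(E)

def expect_zz(tensors, p, q):
    N = len(tensors)
    Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
    ops = [Z if k in (p, q) else I2 for k in range(N)]
    return (ring_expectation(tensors, ops) / ring_expectation(tensors, [I2] * N)).real
```

Summing `w[i, j] * expect_zz(tensors, i, j)` over the graph edges then yields H(θ) for the parameter-shift evaluations above.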
§.§ Complexity
In terms of memory, we construct and manipulate only a tensor ring of N tensors corresponding to N qubits, which grows as O(Nχ^2), as opposed to O(2^N) for the full quantum state. Zhou et al. <cit.> establish that the tensor network bond χ can be chosen sufficiently low to simulate a noisy quantum computer at a computational complexity linear in the number of qubits N and the circuit depth D (defined as the number of repeating parametrized blocks). The parameter-shift rule, popularized for its ability to compute gradients on a quantum computer, evaluates gradients by computing expectations with shifted weights. However, computing expected values to additive error ϵ requires a many-fold implementation of the same circuit, generally of order O(1/ϵ^2), which adds statistical noise. The proposed method computes each gradient classically with a single run of two circuits, each of which scales as O(NDχ^3), with an error rate controlled by χ. The truncation error decreases with increasing bond dimension χ and generally saturates at a finite value of order 10^-2 per two-qubit gate for circuits with large N and D. This contrasts with the error rate on a quantum computer, characterized by a fidelity per two-qubit gate that decays exponentially in the overall number of gates in the circuit <cit.>. The finite fidelity per gate allows the proposed algorithm to scale in circuit depth and qubit count for larger applications. Automatic differentiation (AD), a tool prevalent in classical machine learning literature and applications, grows at least as fast as the forward pass of the network in computational complexity. This implies that classically computing the gradients of VQE by AD scales exponentially, as it would for classically computing the energy expectation of a circuit. Note that the proposed tensor ring transformations can be used with AD as well, which again provides an exponential speedup in N and D.
§ EXPERIMENTS
To demonstrate the runtime performance and accuracy of the TR-VQE presented in Algorithm <ref>, we compare several instances of training TR-VQE for the Max-Cut problem against filtering VQE (F-VQE) <cit.> and naive VQE implemented on the Qiskit framework (MPS-VQE). Both benchmarks use a non-noisy MPS representation to simulate the circuit computations, as formulated in <cit.>, and F-VQE is additionally implemented with an identity filter to equate the number of parameters across all experiments. Sampling noise is introduced in the implementations of MPS-VQE and F-VQE when computing expected values from the circuit. As discussed before, MPS-VQE is expected to compute more accurate gradients than TR-VQE owing to the noise induced by the proposed TR representation. MPS-VQE therefore converges faster, but takes longer per iteration because its tensor sizes increase with circuit depth. F-VQE additionally implements filtering operators that reshape the optimization landscape, thereby improving training convergence. Amaro et al. <cit.> claim that the inclusion of filtering operators leads to faster and more reliable convergence to the optimal solution. This improvement, however, is dwarfed for larger circuits with more qubits (readers may refer to <cit.> for additional details on the implementation of F-VQE). We further collected data on TR-VQE to analyze how internal configurations, namely the bond rank, and the graph size, i.e., the number of qubits, affect performance relative to filtering and naive VQE. All graphs were randomly generated with two to three edges per node, uniformly distributed weights (between 1 and 10), and uniformly chosen edge pairs. We use the same circuit ansatz in all experiments: an initial parameterized layer of R_y gates on all qubits and a variational block repeated D times, where D represents the circuit depth. Each variational block contains a set of circular CX (CNOT) gates followed by parameterized R_y gates on all qubits, followed by another set of CX and R_y gates. The circuit depth and the tensor ring rank are set to 1 and 10, respectively, in all experiments unless otherwise specified.
Figure <ref> indicates how each of the three algorithms performs in terms of iteration runtime across randomly generated graphs of varying sizes and different circuit ansatz. The results for each algorithm were averaged over 10 initializations, each with multiple unique Max-Cut graphs of fixed size. For MPS-VQE and F-VQE, the number of shots used in the Hamiltonian evaluation was increased quadratically in graph size. Across graph sizes, TR-VQE's per-iteration runtime, measured as the time taken to compute the expected value of the Hamiltonian and update the parameters from the evaluated gradients, is shorter than that of both filtering and non-filtering VQE on smaller graphs and, by extension, smaller numbers of qubits. As illustrated in Figure <ref>, the iteration runtimes of TR-VQE improve consistently and by a large margin over the benchmarks as the number of qubits increases. Figure <ref> shows the iteration runtime of each algorithm for increasing circuit depths on a graph with 10 nodes. TR-VQE again shows a significant runtime improvement over MPS-VQE and F-VQE with an increasing number of layers. The results of both experiments are compatible with the theoretical claims of improved runtime complexity discussed in Section <ref>. The speedup can be attributed to the constant rank and tensor sizes irrespective of circuit depth, whereas in the naive MPS-based approach the tensor sizes increase with circuit depth.
On the other hand, TR-VQE performs with near-equivalent accuracy to the other algorithms despite the runtime speedup. Figure <ref> displays per-iteration accuracy for the algorithms, averaging data from 10 runs on randomly generated graphs of fixed size 10 nodes. Accuracy was compared using the approximation ratio at each iteration, computed as defined in Equation <ref>.
The resulting data in Figure <ref> indicate that TR-VQE performs similarly to F-VQE in terms of accuracy, diverging on average by no more than 3% at any point during training. Across variable graph sizes, TR-VQE again performs on par with or better than the alternative algorithms. The data in Table <ref> were collected using a TR-VQE bond rank of 10 and 1000 shots per circuit evaluation for MPS-VQE and F-VQE. Excluding an outlier at small graph sizes due to instability, MPS-VQE performed most accurately owing to the availability of more information, albeit at the cost of larger runtime. TR-VQE followed closely behind, with a large but inconsistent accuracy gap between it and the least accurate F-VQE algorithm.
We also plot the approximation ratio of TR-VQE for varying TR bond rank; notably, TR-VQE performs almost as well as MPS-VQE at ranks as low as 12, indicating that an exponential speedup can be achieved at small ranks, improving the storage complexity. All experiments, including the benchmarks, show a wide variance in accuracy at larger graph sizes due to the barren plateau effect <cit.>, informally defined as impaired trainability due to the exponential flattening of the loss landscape in the number of qubits. Martin et al. <cit.> demonstrate that the barren plateau effect persists in quantum MPS circuits, and we can therefore surmise that tensor ring circuits, as an extension of MPS, face a similar challenge in training.
To assess the accuracy of the approximate gradients, we employ the l^2-norm to compare gradients obtained from statevector simulations with those generated by the TR-VQE method. The mean gradient distance, computed as the average norm difference across 500 randomly selected points on the optimization landscape, is used as the metric. We compare this metric with values obtained from noisy simulations that emulate the gradients on an actual quantum computer using noise models from the IBM Montreal machine. We examine the mean gradient distance for various circuit depths and graph sizes.
Figure <ref>(Left) illustrates that the gradients produced by the TR-VQE method closely resemble those obtained from exact state vector simulations, with almost negligible differences. In contrast, gradients derived from quantum simulation deviate significantly from the exact gradients, a trend that becomes more pronounced as the number of qubits increases, as expected. As shown in Figure <ref>(Middle), TR-VQE's effectiveness diminishes with higher circuit depths due to the cumulative impact of two-qubit gates. However, this performance decline can be mitigated by increasing the tensor rank, as demonstrated in Figure <ref>(Right). In conclusion, gradients computed from approximate classical simulations can achieve accuracy comparable to those obtained from quantum computers. Consequently, they can be a valuable addition to the optimization process in hybrid algorithms.
§ CONCLUSION
This work proposes a novel technique for combinatorial optimization problems with variational quantum eigensolvers by approximating the circuit computations with noisy tensor ring contractions. The proposed algorithm uses the parameter-shift rule to evaluate the gradients used to update the variational parameters, but computes the expected values of the shifted circuits using the tensor ring approximation. The computational complexity of circuit evaluation grows linearly in the number of qubits and the circuit depth, which offers an exponential speedup over exact classical simulation. Evaluating gradients using TR-VQE also eliminates the additive sampling error present in circuit computations on quantum computers. We validate the algorithm on several instances of the Max-Cut problem and compare with algorithms that use the full state information. The results demonstrate a vast improvement in runtime with respect to the number of qubits and circuit depth, validating the complexity analysis, at a minor cost in accuracy.
§ COMMONLY USED GATES
The matrix representations of some of the commonly used gates in this manuscript are listed below:
R_x(θ) =
[ cos(θ/2) -isin(θ/2); -isin(θ/2) cos(θ/2) ],
R_y(θ) =
[ cos(θ/2) -sin(θ/2); sin(θ/2) cos(θ/2) ],
R_z(θ) =
[ e^-iθ/2 0; 0 e^iθ/2 ]
H =1/√(2)[ 1 1; 1 -1 ]
CNOT =
[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0 ]
R(α, β, γ) =
[ cos(α/2) -e^iγsin(α/2); e^iβsin(α/2) e^iβ + iγcos(α/2) ]
|
http://arxiv.org/abs/2307.04784v1 | 20230710180000 | Positivity-causality competition: a road to ultimate EFT consistency constraints | [
"Mariana Carrillo González",
"Claudia de Rham",
"Sumer Jaitly",
"Victor Pozsgay",
"Anna Tokareva"
] | hep-th | [
"hep-th"
] |
|
http://arxiv.org/abs/2307.04099v1 | 20230709052131 | GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty | [
"Tao Wu",
"Tie Luo",
"Donald C. Wunsch"
] | cs.LG | [
"cs.LG",
"cs.CR",
"cs.CV"
] |
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty
Tao Wu, Tie Luo, Donald C. Wunsch
August 12, 2023
========================================================================
Adversarial examples (AEs) with good transferability enable practical black-box attacks on diverse target models, where insider knowledge about the target models is not required. Previous methods often generate AEs with little or no transferability; that is, they overfit to the particular architecture and feature representation of the source, white-box model, and the generated AEs barely work on target, black-box models. In this paper, we propose a novel approach to enhance AE transferability using a gradient norm penalty (GNP). It drives the loss-function optimization procedure to converge to a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is very effective at generating AEs with high transferability. We also demonstrate that it is very flexible in that it can be easily integrated with other gradient-based methods for stronger transfer-based attacks.
Adversarial machine learning, Transferability, Deep neural networks, Input gradient regularization
§ INTRODUCTION
Deep neural networks (DNNs) are the workhorse of a broad variety of computer vision tasks but are vulnerable to adversarial examples (AEs), which are data samples (typically images) perturbed by human-imperceptible noise yet resulting in odd misclassifications.
This lack of adversarial robustness curtails, and often even prevents, the deployment of deep learning models in security- or safety-critical domains such as healthcare, neuroscience, finance, and self-driving cars, to name a few.
Adversarial examples are commonly studied in two settings: white-box and black-box attacks. In the white-box setting, adversaries have full knowledge of the victim models, including model structures, parameters and weights, and the loss functions used to train the models. They can therefore directly obtain the gradients of the victim models and seek adversarial examples by driving the loss function toward incorrect predictions. White-box attacks are important for evaluating and developing robust models and serve as the backend method for many black-box attacks, but their use is limited by the requirement of knowing the internal details of target models. In the black-box setting, adversaries need no specific knowledge about victim models other than their external properties (the type of input and output). Two types of approaches, query-based and transfer-based, are commonly studied for black-box attacks. The query-based approach attempts to estimate the gradients of a victim model by querying it with a large number of input samples and inspecting the outputs. Due to the large number of queries, it can be easily detected and defended against. The transfer-based approach uses surrogate models to generate transferable AEs that can attack a range of models instead of a single victim model. Hence it is a more attractive approach to black-box attacks.
This paper takes the second approach and focuses on designing a new and effective method to improve the transferability of AEs. Several directions for boosting adversarial transferability have appeared. Dong et al. <cit.> proposed momentum-based methods. The attention-guided transfer attack (ATA) <cit.> uses attention maps to identify common features to attack. The diverse input method (DIM) <cit.> averages gradients over augmented images. <cit.> generates transferable AEs using an ensemble of multiple models.
Despite the efforts of previous works, a large gap in attack success rate remains between the transfer-based setting and the ideal white-box setting.
In this paper, we propose a novel method to boost adversarial transferability from an optimization perspective. Inspired by the concept of "flat minima" in optimization theory <cit.>, which improves the generalization of DNNs, we seek to generate AEs that lie in flat regions where the input gradient norm is small, so that they "generalize" to victim models other than the one they were generated on. In a nutshell, this work makes the following contributions:
* We propose a transfer-based black-box attack from a new perspective, seeking AEs in a flat region of the loss landscape by penalizing the input gradient norm.
* We show that our method, input gradient norm penalty (GNP), can significantly boost adversarial transferability for a wide range of deep networks.
* We demonstrate that GNP can be easily integrated with existing transfer-based attacks to produce even better performance, indicating a highly desirable flexibility.
§ METHOD
Given a classification model f(x): x ∈𝒳→ y ∈𝒴 that outputs a label y as the prediction for an input x, we aim to craft an adversarial example x^* that is visually indistinguishable from x but is misclassified by the classifier, i.e., f(x^*) ≠ y. The generation of AEs can be formulated as the following optimization problem:
max _x^*ℓ(x^*, y), s.t. ‖ x^*-x‖_p ≤ϵ,
where the loss function ℓ(·, ·) is often the cross-entropy loss, and the ℓ_p-norm measures the discrepancy between x and x^*. In this work, we use p=∞, as commonly adopted in the literature. Optimizing Eq. (<ref>) requires the gradient of the loss function, which is not available in the black-box setting. Therefore, we aim to create transferable AEs on a source model that can attack many other target models.
We develop a new method to boost adversarial transferability from a perspective inspired by "flat optima" in optimization theory; see Fig. <ref>. If an AE is located at a sharp local maximum, it will be sensitive to differences in decision boundaries between the source model and target models. In contrast, if it is located in a flat maximum region, it is much more likely to produce a similarly high loss on other models (which is desired).
Thus, we seek to generate AEs in flat regions. To this end, we introduce a gradient norm penalty (GNP) term into the loss function, which penalizes the norm of the gradient of the loss with respect to the input. The reason is that flat regions are characterized by small gradient norms, so penalizing the gradient norm encourages the optimizer to find an AE that lies in a flat region. We thus enhance adversarial transferability, since a minor shift of the decision boundary will not significantly change the loss value (prior work has shown that different networks often share similar decision boundaries).
§.§ Baseline Attacks
GNP is a very flexible method in that it can be easily incorporated into any existing gradient-based method to boost its strength. We consider the following gradient-based attacks to demonstrate the effect of GNP.
Later, in Section <ref>, we will also show that GNP works effectively on state-of-the-art transfer-based attacks.
Fast Gradient Sign Method (FGSM). FGSM <cit.> is the first gradient-based attack; it crafts an AE x^adv with a one-step update that attempts to maximize the loss function ℓ(x^adv, y; θ):
x^adv=x+ϵ·sign(∇_x ℓ(x, y; θ)),
where ∇_x ℓ(x, y; θ) is the gradient of the loss function with respect to x, and sign(·) denotes the sign function.
Iterative Fast Gradient Sign Method (I-FGSM). I-FGSM extends FGSM to an iterative version:
x_t+1^adv = x_t^adv + α·sign(∇_x_t^advℓ(x_t^adv, y; θ)),
x_0^adv = x,
where α=ϵ / T is a small step size and T is the number of iterations.
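A minimal numpy sketch of this loop (ours): `grad_fn` is a generic oracle returning ∇_x ℓ on the surrogate model, and pixels are assumed scaled to [0, 1] as in the experiments below.

```python
import numpy as np

def i_fgsm(x, grad_fn, eps, T):
    alpha = eps / T
    x_adv = x.copy()
    for _ in range(T):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay inside the l_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay a valid image
    return x_adv
```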
Momentum Iterative Fast Gradient Sign Method (MI-FGSM). MI-FGSM <cit.> integrates a momentum term into I-FGSM and improves transferability by a large margin:
g_t+1 = μ· g_t + ∇_x_t^advℓ(x_t^adv, y; θ)/‖∇_x_t^advℓ(x_t^adv, y; θ)‖_1,
x_t+1^adv = x_t^adv + α·sign(g_t+1),
where g_0 = 0 and μ is a decay factor.
§.§ GNP Attack
As explained in Section <ref>, we aim to guide the loss-function optimization process into a flat region of local optima.
To this end, we introduce GNP to penalize a large gradient norm:
L(x, y) = ℓ(x, y)-λ‖∇_x ℓ(x, y)‖_2
where ℓ(·) is the original loss function of the source model and the regularization term is our GNP, which encourages a small gradient norm when finding local maxima.
For gradient-based attacks (e.g., FGSM, I-FGSM, MI-FGSM), we need to calculate the gradient
of the new loss (<ref>). To simplify notation, we omit y from the loss function since we differentiate with respect to x. Using the chain rule, we have
∇_x L(x)=∇_xℓ(x)-λ∇_x^2 ℓ(x) ∇_xℓ(x)/‖∇_xℓ(x)‖
This expression involves the Hessian matrix H = ∇_x^2 ℓ(x), which is often infeasible to compute because of the curse of dimensionality (such a Hessian in DNNs is too large due to the typically high input dimension). Therefore, we use a first-order Taylor expansion together with the finite difference method (FDM) to approximate the following gradient:
∇_xℓ(x+rΔx)≈∇_xℓ(x)+H rΔx
where Δx=∇_x ℓ(x)/‖∇_xℓ(x)‖, and r is the step length controlling the neighborhood size. We thus obtain the regularization term of (<ref>) as:
H∇_xℓ(x)/‖∇_xℓ(x)‖≈[∇_xℓ(x+r ∇_xℓ(x)/‖∇_xℓ(x)‖)-∇_xℓ(x)]/r
Inserting (<ref>) back into (<ref>), we obtain the gradient of the regularized loss function as:
∇_x L(x)=(1+β) ∇_xℓ(x) -β∇_xℓ(x+r ∇_xℓ(x)/‖∇_xℓ(x)‖)
where β=λ/r is the regularization coefficient. We summarize the integration of GNP into I-FGSM in Algorithm <ref>; I-FGSM can be replaced by any gradient-based attack.
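The resulting gradient costs two ordinary back-propagations per step, as the sketch below shows (ours, with the paper's default r and β; `grad_fn` is the same oracle as in the I-FGSM sketch). Passing `lambda z: gnp_gradient(z, grad_fn)` to that I-FGSM routine reproduces the GNP attack loop.

```python
import numpy as np

def gnp_gradient(x, grad_fn, r=0.01, beta=0.8):
    # nabla L(x) = (1 + beta) * g  -  beta * g(x + r * g / ||g||)
    g = grad_fn(x)
    g_unit = g / (np.linalg.norm(g) + 1e-12)   # small constant guards ||g|| = 0
    g_shift = grad_fn(x + r * g_unit)
    return (1.0 + beta) * g - beta * g_shift
```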
§ EXPERIMENTS
§.§ Experiment Setup
Dataset and models. We randomly sample 5,000 test images that are correctly classified by all models from the ImageNet <cit.> validation set. We consider 11 SOTA DNN-based image classifiers: ResNet50 <cit.>, VGG-19 <cit.>, ResNet-152 <cit.>, Inc v3 <cit.>, DenseNet <cit.>, MobileNet v2 <cit.>, SENet <cit.>, ResNeXt <cit.>, WRN <cit.>, PNASNet <cit.>, and MNASNet <cit.>. Following the work in <cit.>, we choose ResNet50 as the source model and the remaining 10 models as target models.
Implementation details. In the experiments, the pixel values of all images are scaled to [0, 1]. The adversarial perturbation is restricted to three scales, ϵ=4/255, 8/255, 16/255. The step length is set to r=0.01 and the regularization coefficient to β=0.8; we run 100 iterations for all attacks and evaluate model misclassification as the attack success rate.
§.§ Experimental Results
§.§.§ Integration with baseline attacks
We first evaluate the performance of GNP by integrating it with the baseline attacks I-FGSM and MI-FGSM. The results are shown in Table <ref>. We use a pre-trained ResNet50 as the source model and evaluate the attack success rate (ASR) of the generated AEs on a variety of target models under different perturbation scales ϵ. GNP achieves significant and consistent improvement in all cases. For instance, taking the average ASR over all 10 target models under perturbation ϵ = 8/255, GNP outperforms I-FGSM and MI-FGSM by 26.51% and 13.67%, respectively. In addition, the improvement in attack success rate on a single model can be as large as 33.06%.
§.§.§ Integration with existing transfer-based attacks
Here we also evaluate the effectiveness of GNP when incorporated into other transfer-based attacks such as DIM <cit.> and TIM <cit.>. The results, given in Table <ref>, show that DIM+GNP and TIM+GNP are clear winners over DIM and TIM alone, respectively. Specifically, DIM+GNP achieves an average success rate of 91.95% under ϵ = 16/255 over the 10 target models, and TIM+GNP outperforms TIM by a large margin of 16.28% under ϵ = 8/255. We note that we only present the integration of GNP with two typical methods here; our method also applies to other, more powerful gradient-based attacks.
§.§.§ Attacking “secured” models
For a more thorough evaluation, we also investigate how GNP performs when attacking DNN models that have been adversarially trained (and hence are much harder to attack). We choose three such advanced defense methods as targets: JPEG <cit.>, R&P <cit.> and NRP <cit.>. In addition, we attack three ensemble adversarially trained (AT) models, which are even harder than regular AT models: Inc-v3_ens3, Inc-v3_ens4 and IncRes-v2_ens1 <cit.>. We craft AEs on the ResNet50 surrogate model with ϵ=16/255 and use DIM+TIM as the "backbone" for GNP. The results are presented in Table <ref>: GNP again boosts ASR significantly against the six "secured" models, achieving consistent performance improvements of 11.46–14.37%.
§.§ Ablation Study
We conduct an ablation study on the hyper-parameters of the proposed GNP attack, i.e., the step length r and the regularization coefficient β. Since r represents the radius of the neighborhood that should be flat around the current AE, a larger r is preferred; on the other hand, setting it too large increases the approximation error of the Taylor expansion and thus misleads the AE update direction. The coefficient β balances fooling the surrogate model against finding flat optima. Figure <ref> reports the results of our ablation study, where ASR is averaged over the 10 target models (excluding the source ResNet50) attacked by I-FGSM + GNP with ϵ=8/255. We observe that adding the GNP regularization term clearly improves performance (compared with β=0), and the performance gain is rather consistent for β in the wide range 0.6–1.6. The step length r does not affect the performance gain much either, with r=0.01 appearing the most stable. Thus, the ablation study reveals that GNP is not hyper-parameter sensitive and works well under a variety of conditions.
§ CONCLUSION
In this paper, we have proposed a new method for improving the transferability of AEs from an optimization perspective, by seeking AEs located at flat optima. We achieve this by introducing an input gradient norm penalty (GNP) that guides the AE search toward flat regions of the loss function. The GNP method is very flexible, as it can be used with any gradient-based AE generation method. We conduct a comprehensive experimental study and demonstrate that our method can boost the transferability of AEs significantly.
This paper focuses on untargeted attacks, but GNP can be applied to targeted attacks rather easily by making a small change to the loss function. We plan a thorough investigation in future work.
|
http://arxiv.org/abs/2307.04982v1 | 20230711024151 | Ultra Electron Density Sensitivity for Surface Plasmons | [
"Wei Liu",
"Meng Li",
"Yu Niu",
"Ziren Luo"
] | physics.optics | [
"physics.optics",
"cond-mat.mes-hall"
] |
These authors contributed equally to this work.
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
These authors contributed equally to this work.
Key Laboratory of Analytical Chemistry for Life Science of Shaanxi Province, School of Chemistry and Chemical Engineering, Shaanxi Normal University, Xi'an, 710062, China.
[email protected]
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
[email protected]
Center for Gravitational Wave Experiment, Institute of Mechanics, Chinese Academy of Science, 15 Bei-si-huan West Road, Beijing, 100190, China.
We investigate surface plasmons from a solid-state standpoint and highlight their ultra electron density sensitivity. When a surface plasmon is excited on a planar gold film by an evanescent wave from 625 nm light, only a minute fraction of the surface electron density, approximately one thousandth, participates in the process. By introducing a noise-depressed surface potential modulation, we reduce the probed electron density to the order of 10 μm^-2, enabling an electron sensitivity on the order of 0.1 e. As a practical application, we develop a surface plasmon resonance imaging method capable of detecting single anions in solution at a concentration of 1.
Ultra Electron Density Sensitivity for Surface Plasmons
Ziren Luo
August 12, 2023
=======================================================
Label-free single-molecule imaging techniques have come into the spotlight because they can not only yield insights about dynamic molecular interactions that are fundamentally inaccessible to fluorescence-based methodologies but also enjoy multiplexing capability <cit.>. Several techniques based on light scattering have successfully crossed the barrier posed by the large mismatch between the size of a molecule and the diffraction limit of visible light. Interferometric scattering microscopy (iSCAT), a recently developed technique <cit.> that collects the photons scattered by a molecule and reduces the background scattering, can provide the molecular weight of a molecule in solution <cit.>. By measuring the inelastic light-scattering process, surface-enhanced Raman scattering (SERS) <cit.> and tip-enhanced Raman spectroscopy (TERS) <cit.> can read out the Raman "fingerprint" of a target molecule, since a high degree of structural information about the molecule can be extracted from the SERS/TERS vibrational spectrum. Although these scattering-based methods have achieved remarkable success, the scattering efficiency is restricted by the effective scattering cross section of the molecule. The optical resolution of iSCAT is claimed to be around 10 at room temperature in solution, while that of TERS is about 315 in vacuum. It thus remains a challenge to observe single molecules beyond the restriction of their size under conditions in which the target molecules are functionally active.
Besides molecular size, other properties enter the picture of single-molecule investigation, among which reactions occurring at the single-molecule level are at the core. In chemical reactions, molecules transfer or share electrons at specific electronic states: the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO). Holding electrons at the highest energy, the HOMO can transfer electrons to other molecules, while the LUMO remains empty and capable of receiving electrons. Therefore, a possible strategy to bypass the scattering cross-section restriction is to measure the electron density variations induced by the electronic states of molecules at the sensing surface.
Under the total internal reflection condition, the electric field of an evanescent wave can excite a collective oscillation of surface electrons, called surface plasmon resonance (SPR). Conventionally, a biosensor based on SPR is described by a four-layer model governed by Snell's law: glass substrate / plasmonic metal film / molecular film / buffer medium. Billions of molecules adsorbing at the sensing surface change the thickness of the molecular film, which can be measured by an SPR biosensor. However, when the concentration of the target molecules comes down to the single-molecule level, less than 1 <cit.>, the model fails because the sparsely adsorbed molecules can hardly be treated as a film. As a result, the detection limit of an SPR biosensor has been considered far from the requirement of single-molecule detection. A typical refractive index detection limit for a phase-sensitive SPR scheme is reported as the order of 10^-8 to date <cit.>, while the refractive index variation from 1 ions is estimated as 10^-14 in solution <cit.>. To exceed this limit, plasmon-enhanced nanomaterials <cit.>, metasurfaces <cit.>, and whispering-gallery modes <cit.> have been introduced to increase sensor sensitivity. Although these micro/nanostructure-based solutions show exciting sensitivity, their applications have some fundamental restrictions. For one thing, the signal amplitude of these methods crucially depends on the location where the molecule binds to the structure, while these binding events are outside experimental control. For another, the large-scale and reproducible production of the complicated sensing unit imposes a technical limitation. Here, we investigate surface plasmons from a solid-state perspective and demonstrate the ultra electron sensitivity of a planar SPR film for measuring surface electron density variations. As an application, we have developed an energy-level-aligned SPR imaging (ELA-SPRi) system to detect the adsorption and desorption of single anions in solution.
The sensitivity of SPR to surface charge has been reported since the 1970s <cit.> and used to detect charged particles at the sensing surface <cit.>. This sensitivity has been attributed to the fact that the dielectric function of the metal is determined by its electron density, and an applied bias potential can modulate the electron density at the surface <cit.>. However, this understanding overlooks the plasmonic side of the story. From a solid-state perspective, the collective oscillation of surface electrons in Fig. <ref>(a), with surface plasmon wave-vector k_sp, occurs near the Fermi surface of the metal. In other words, the electron density involved in the surface plasmon, n_sp, is only a fraction of the background surface electron density, n_s, in momentum space, Fig. <ref>(b), and is given by
n_sp = (k_sp/k_F) n_s
where k_F is the Fermi wave-vector of gold, around 1.21×10^4 μm^-1. The resonance condition, the match between the wave-vector of the evanescent wave, k_x, and that of the surface plasmon, k_sp, gives the estimate k_sp/k_F ≈ 10^-3, since k_sp = k_x≈ 2π/λ≈ 10 μm^-1 for visible light with wavelength λ = 0.625 μm.
Eq. <ref> suggests that the SPR signal is an indicator of the surface electron density. At a noble-metal surface, the surface electron density follows a two-dimensional electron gas (2DEG) model <cit.>:
n_s = n_2D = m/(πħ^2) k_B T ln(1+exp(E_f/k_B T))
where m is the electron mass, ħ the reduced Planck constant, k_B the Boltzmann constant, T the temperature, and E_f the Fermi level of the metal.
According to Eqs. <ref> and <ref>, the Fermi level of the metal can modulate the SPR signal. To modulate the Fermi level of a planar SPR film, we use a phase-sensitive SPR imaging system combined with an external signal generator that can apply a bias potential to the sensing surface as required, as illustrated in Fig. <ref>(c). The sensing surface of a 48 nm gold film is divided into two insulated cells: a working cell connected to the signal generator and a reference cell recording the light power fluctuation during the measurement. A region of interest (ROI) is selected in each cell to obtain the signal intensity of that region. In Fig. <ref>(c), ROI 1 is in the working cell and ROI 2 in the reference cell. We use the difference of the intensities from the two regions as the measured signal, in which noise from the light power fluctuation is depressed.
Fig. <ref>(d) shows the SPR signal under a linear bias potential scan from -0.5 to 1 V at a rate of 0.1 V s^-1 in a 30 mM potassium hydroxide (KOH) electrolyte, together with the theoretical calculation of n_sp at different Fermi levels according to Eqs. <ref> and <ref>. The agreement, especially from -0.5 to 0.4 V, supports our prediction that the SPR signal can be used to indicate the surface electron density. The deviation of the signal from the theoretical calculation arises from the oxidation of the gold film when the bias potential exceeds 0.5 V in solution. It should be noted that our previous conclusion <cit.>, supported by other studies <cit.>, that the SPR signal is proportional to the applied potential still holds, because ln(1+exp(E_f/k_B T)) ≈ E_f/k_B T when E_f ≫ k_B T. For T=298 K, k_B T is 25.7 meV.
Eq. <ref> also implies the ultra sensitivity of SPR to electron density variations at the sensing surface. SPR reduces the relevant background electron density by the factor k_sp/k_F, approximately 10^-3 in our configuration. As an illustration, the background electron density given by Eq. <ref> is about 4×10^5 μm^-2 at E_f=100 meV, and the corresponding n_sp is about 400 μm^-2. As a result, the relative signal from one electron within a square micrometre increases from the order of 10^-6 to 10^-3 in an SPR measurement.
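These figures follow from Eq. <ref> in a few lines (SI constants hardcoded; E_f = 100 meV, T = 298 K, and the k_sp/k_F ≈ 10^-3 ratio from above are the assumed inputs):

```python
import numpy as np

m_e, hbar, kB, e = 9.109e-31, 1.0546e-34, 1.381e-23, 1.602e-19   # SI units
T, Ef = 298.0, 0.100 * e                                         # E_f = 100 meV

n_s = (m_e / (np.pi * hbar ** 2)) * kB * T * np.log1p(np.exp(Ef / (kB * T)))
n_sp = 1e-3 * n_s                                                # times k_sp / k_F
print(n_s * 1e-12, n_sp * 1e-12)                                 # ~4.2e5 and ~4.2e2 per um^2
```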
Three technical factors block the observation of this ultra electron density sensitivity: (1) the size of the detected spot, (2) the noise, and (3) the condition under which individual molecules transfer their electrons to the sensing surface. In our prism-based SPR imaging system, the minimum detected spot is the area covered by one pixel on the sensing surface, approximately 300 μm^2 (25 μm × 12 μm). The number of surface-plasmon-related electrons in this area, ρ, is about 10^5 per pixel, and the expected signal from one electron in the area, 1/ρ, is 10^-6 per pixel, which is still difficult to detect. A method is needed to further reduce these surface plasmon background electrons.
An investigation of the surface plasmon electrons in momentum space under an external bias potential paves the way to this reduction, Fig. <ref>(a). On the one hand, as pointed out above, the surface plasmon occurs near the Fermi surface of the metal. On the other hand, an external potential applied to the metal modulates its Fermi level. In momentum space, the Fermi surface under a potential modulation, δ E_f, forms a ring containing a certain number of electrons, δ n_s. An estimate of δ n_s in vacuum is obtained from the derivative of Eq. <ref> with respect to E_f; at a solid-liquid interface, it can be estimated by
δ n_s = cδ E_f/e
where c is the capacitance density of the interface. Under the surface plasmon condition, this ring also oscillates with k_sp. Therefore, instead of measuring the total surface plasmon electrons, we focus on the electrons within the ring, δ n_sp. According to Eqs. <ref> and <ref>, we have:
δ n_sp = (k_sp/k_F) (c δ E_f/e)
For a gold electrode, the capacitance density is of the order of 10 mF m^-2 <cit.>, and the corresponding electron density is about 10^4 μm^-2 when δ E_f = 100 meV. The number of surface plasmon electrons responding to δ E_f is then estimated as 10 μm^-2. Therefore, under δ E_f, the number of measured surface plasmon electrons in a pixel, ρ, is about 10^3, and one electron induces a signal variation of 10^-4 via 1/ρ.
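The same estimate in code (the capacitance density is an assumed order of magnitude consistent with the text, not a measured value):

```python
e = 1.602e-19                 # elementary charge, C
c = 10e-3                     # capacitance density, ~10 mF m^-2 (assumed)
dEf = 0.100                   # 100 mV modulation
dn_s = c * dEf / e            # ~6e15 m^-2, i.e. ~1e4 per um^2
dn_sp = 1e-3 * dn_s           # ~6 per um^2, order 10
print(dn_s * 1e-12, dn_sp * 1e-12, dn_sp * 1e-12 * 300)  # last: electrons per 300-um^2 pixel
```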
To measure the signal variation under δ E_f, we apply a bias potential modulation function to the sensing surface. When excess electrons are introduced to the surface, the SPR signal in response to δ E_f changes. In particular, when the applied potential modulation is a trigonometric function, we can convert the SPR signal from the time domain into the frequency domain by the Fourier transform of δ E_f: δn̂_sp = (k_sp/k_F)(c/e) ℱ{δ E_f(t)}.
To measure electron density variations on the order of 10^-4 per pixel, noise is the second obstacle, among which light power fluctuation is dominant. The large field of view of the imaging scheme allows simultaneous recording of the reflections from both the working cell and the reference cell. Because the reflections from both regions experience similar power fluctuations during the measurement, as shown in Fig. <ref>(b), the differential signal between the two regions can significantly reduce the effect of these fluctuations. To further improve the signal-to-noise ratio, sinusoidal potential modulations with an AC amplitude of 200 mV around different DC components at a frequency of 1.1 Hz are applied to the working cell.
The comparison between the recorded signal in the working cell and the differential signal in Fig. <ref>(c) demonstrates a significant improvement in signal-to-noise ratio for the latter. The power fluctuations in the recorded signal are measured to be 4× 10^-3, whereas the differential signal fluctuates by less than 10^-3. The modulation signal is barely noticeable in the raw signal but can be clearly distinguished in the differential signal. The corresponding amplitude density spectrum in Fig. <ref>(d) provides a comprehensive noise analysis. Before the depression scheme, the noise is around 10^-3 Hz^-1/2 in the frequency band from 1 to 10 Hz, and approaches 10^-2 Hz^-1/2 in the low-frequency band from 10^-2 to 1 Hz. After the differential, the noise is near the signal level, 10^-4 Hz^-1/2, while integration of the modulated signal at 1.1 Hz for one minute amplifies the measured signal to above 10^-3 Hz^-1/2, giving the technique the ability to resolve electron density variations of less than one electron.
In the frequency-domain, we can estimate the SPR signal, I_AC, by
I_AC = KI_0δρ/ρ√(τ f/2)
where K is a constant determined by the optical setup, about 0.85 in our configuration, I_0 the background intensity of the surface, 157 grayscale levels during the experiment, τ the integration period, and f the sampling frequency of the CCD, optimized at 10 Hz.
According to Eq. <ref>, both the electron density variation and the corresponding integration time determine the frequency-domain signal. For typical electron density variations from 10^-5 per pixel per minute to 10^-2 per pixel per minute, the frequency-domain signal changes linearly from 0.1 grayscale levels to 100 grayscale levels, Fig. <ref>(a). For a one-electron variation in a pixel in our configuration, approximately 1/2500 per pixel, the frequency-domain signal after one minute reaches 4 grayscale levels. In Fig. <ref>(b), the signal grows toward 4 grayscale levels as the square root of the integration time, √(τ).
The condition under which individual molecules transfer their electrons to the sensing surface is determined by the DC components of the potential modulations, which set the Fermi level basis of the surface. An individual molecule transfers its electrons to the sensing surface only when the frontier molecular orbital levels of the molecule, the HOMO and LUMO, are aligned with the Fermi level of the surface, as illustrated in Fig. <ref>(a) <cit.>. This energy level alignment changes the density of states (DOS) of the surface significantly around the HOMO/LUMO levels and subtly at other levels after the adsorption of an individual molecule. As a result, the electron density of the surface changes because of the alignment.
To demonstrate the sensitivity of this ELA-SPRi imaging method, potassium chloride (KCl) and potassium sulfate (K_2SO_4) were diluted by 30 KOH separately to provide 1 chloride (Cl^-) and sulfate (SO_4^2-) anions. Both can be adsorbed on the gold surface <cit.>. In Fig. <ref>(b), (c) and (d), we measured the adsorption and desorption of both anions at three different bias potential modulations. Although it is difficult to detect the processes under the modulation around 50, we can distinguish the adsorption and desorption processes under the modulations around 250 and 450. For a surface Fermi level around 50, the adsorption of the anions on the surface is physical adsorption, in which molecules or atoms adhere to a surface through weak intermolecular forces such as van der Waals forces or dipole-dipole interactions. However, when the Fermi level is aligned with the frontier molecular orbital levels of the anions, fractional electron transfer occurs between the molecule and the surface, and chemical adsorption takes place. For alignment around 250, near the HOMO levels of both anions, electrons transfer to the surface and the adsorption signals decrease, while for alignment around the LUMO levels, 450, electron transfer from the surface to the molecule increases the adsorption signals. Interestingly, the opposite directions of the signal variations suggest opposite electron transfer directions.
It is also notable that chemical adsorption is traditionally considered irreversible. However, when the reaction occurs at the single-molecule level, we observe reversibility of the chemical adsorption. When we wash the working cell with the electrolyte solution, 30 KOH, at 1300 and 2500, the return of the signals in Fig. <ref>(c) and (d) indicates desorption of the anions. Both anions introduce similar intensity variations, about 0.4 grayscale levels, indicating that about 0.1 e is introduced to the ROI on average. Thus, we can adapt the conventional four-layer model to a three-layer model, glass substrate / plasmonic metal film / buffer medium, when the concentration of the detected molecules comes down to the single-molecule level. Instead of a molecular film thickness variation, the sparsely adsorbed molecules change the dielectric function of the plasmonic metal when the Fermi level is modulated around the frontier molecular orbital levels of the molecules.
In conclusion, we have demonstrated ultrahigh electron density sensitivity for surface plasmons by investigating the surface-plasmon-involved electrons from a solid-state perspective. The plasmonic nature can reduce the background electron density to the order of 10□ and be used to detect electron variations of about 0.1 e. Based on this principle, we have developed an ELA-SPRi technique to measure the adsorption and desorption processes of individual anions at a planar gold electrode. This optical method, bypassing the molecular size restriction, provides a strategy for observing the electronic states of individual molecules on a relatively large structure, and we plan to develop a single-molecule locating imaging technique based on it. Furthermore, considering its compatibility with conventional SPR-based technology and the reduced need for complicated sensing unit preparation, we believe ELA-SPRi will not only facilitate sensing in biophysical applications <cit.> but also be capable of analyzing individual chemical reactions in the fields of catalysis engineering and energy <cit.>.
W. L. is grateful to Dr. Sixing Xu for his insightful suggestion at the very beginning of this work that the detection response arises from the electron density variation, and to Miss Yu Miao for her constant encouragement to polish this work. This work was financed by the National Key R&D Program of China, the Director Fund of the Institute of Mechanics, and the Fundamental Research Funds for the Central Universities.
|
http://arxiv.org/abs/2307.03996v1 | 20230708153748 | ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation | [
"Saifullah Mahbub",
"Md. Easin Arafat",
"Chowdhury Rafeed Rahman",
"Zannatul Ferdows",
"Masum Hasan"
] | cs.SE | [
"cs.SE"
] |
[email protected]
Code review is considered a key process in the software industry for minimizing bugs and improving code quality. Inspection of review process effectiveness and continuous improvement can boost development productivity. Such inspection is a time-consuming and human-bias-prone task. We propose a semi-supervised learning based system, ReviewRanker, which is aimed at assigning each code review a confidence score that is expected to resonate with the quality of the review. Our proposed method is trained on simple and well-defined labels provided by developers. The labeling task requires little to no effort from the developers and has an indirect relation to the end goal (assignment of a review confidence score). ReviewRanker is expected to improve industry-wide code review quality inspection by reducing the human bias and effort required for such a task. The system has the potential of minimizing the back-and-forth cycle existing in the development and review process. Usable code and dataset for this research can be found at: https://github.com/saifarnab/code_review
<ccs2012>
<concept>
<concept_id>10011007.10011074.10011081</concept_id>
<concept_desc>Software and its engineering Software development process management</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Software and its engineering Software development process management
ReviewRanker: A Semi-Supervised Learning Based Approach for Code Review Quality Estimation
Masum Hasan
August 12, 2023
==========================================================================================
§ INTRODUCTION
The editorial world has been using peer review since 1731 <cit.>.
Modern software development industries have given it a more common name: Code Review. Since then, Modern Code Review (MCR) <cit.> has become an essential part of software development. MCR is a software quality control process in which one person or a group of people evaluates the system by examining and analyzing different parts of the source code, either during or after the implementation phase. The purpose of code review is to find bugs, correct mistakes, and improve the consistency of code by improving performance and reducing security vulnerabilities.
Figure <ref> outlines a typical code review process. A developer or a set of developers prepares the code and submits it for review. A reviewer or a subgroup of reviewers then checks the changes and makes sure that the author's code causes no system failures in other parts of the codebase. They also ensure a consistent coding style and design pattern. Following all these checks and evaluations, the reviewer, or the subgroup of reviewers with a higher role, either approves or rejects the changes. Developers then make changes to the code, revise their work based on the feedback, or provide appropriate explanations against the approved review until both parties are satisfied.
Sometimes a reviewer figures out the problematic part of the reviewed code but fails to submit an appropriate explanation of the problem. In such cases, the changes made by the developers will probably not satisfy the reviewer, and several more develop-review cycles will follow. Such cycles can lead to a substantial decrease in productivity in the software industry.
It is possible to minimize such situations if we can somehow assign each review a quality score. Such scoring will help us in (a) gaining a deeper understanding of quality reviews, (b) identifying quality reviewers in the company and (c) estimating provided review quality before sending it off to the developers. Essentially, if after going through a particular review a developer feels confident about the changes they have to make in the codebase, then that review is probably of good quality. In this paper, we focus on modeling the developer confidence in a review.
One way is to simply frame this task as a supervised learning task where the input is a review and the output is the confidence score for that review. The output labeling would be performed by the developer to whom the review had been sent for making changes in the codebase. Figure <ref> shows the problem behind such labeling. We can see a review in the figure which has been marked as good, average, below average, and poor by a significant set of developers from three different software companies. We performed this experiment on 25 reviews in total and got more or less similar results. Let us understand what this means. Some developers are broad-minded and will give a good score even when the review is not that good. The opposite spectrum is also equally visible in the industry. The score assigned by a developer also depends on the mood they are in at that particular moment. In short, this labeling process is highly dependent on human perception, which can vary widely from person to person.
We propose an alternative labeling scheme in this paper which indirectly trains a set of three models and enables them to predict the confidence scores for a particular set of reviews. We call this semi-supervised learning approach ReviewRanker. The labeling concerns three simple multiple-choice questions (for the three models) regarding: (a) the understanding of the type of change to perform in the code, (b) the understanding of what to insert and (c) what to delete from the code based on the review of interest. We performed a similar experiment (as in Figure <ref>) with these three multiple-choice questions and found that the choices made by the developers from different companies are similar unless the review is largely vague. Thus we conclude that the answers to these questions are not biased by the human perception side of the developers.
During inference (after training is done with a set of labeled reviews), we provide a code review as input to the three models for predicting the answers to the three questions (see Figure <ref>). We get three confidence scores from these three models corresponding to the ground-truth answers to these questions (labeled by a developer in advance). We obtain the final confidence score from these three scores. Thus we model the confidence of the developer in understanding the review given to them.
Mainly three types of related studies have been performed regarding code review analysis: (1) theoretical studies on different aspects of code reviewing <cit.>, (2) assisting reviewers by problematic code snippet identification <cit.> and (3) reviewer recommendation <cit.>. Although RevHelper <cit.> was developed to measure code review usefulness, it is actually a binary classification tool (useful vs. not useful) and does not assign a quality score to the review of interest. This method also suffers from the human bias issue that we discussed in detail in Figure <ref>.
§ PROBLEM DEFINITION
The input of ReviewRanker is a large set of code reviews R. The output is a confidence score C_i for each review R_i ∈ R, where C_i ∈ [0, 1]. Higher confidence score denotes higher review quality.
C_i is the combination of three different confidence scores coming from three different questions related to review R_i. The answer to each question Q_ij is predicted by a model M_j that frames the question answering as a classification task. We get a confidence score C_ij (associated with the ground-truth label answer) from each model M_j for each question Q_ij for the review of interest R_i. The final confidence score C_i of review R_i is the geometric mean of all C_ij's, where j ∈{1,2,3}.
The three questions are as follows:
* What type of operation (change in code) did the code review suggest (multi-class classification)?
* Did you understand what to insert in the code from the review (binary classification)?
* Did you understand what to delete from the code reading the review (binary classification)?
Unlike questions related to directly assigning a quality score to a review, these three questions are straightforward and have little to no human bias.
§ RELATED WORKS
Researches have been undertaken to automate the process of reviewing code by using static checks such as standard violation, and common structure defects; while other researchers have focused on automating the process of reviewer recommendation and problematic code detection.
§.§ Studies on Code Review
Semi-structured individual interviews were conducted with seven developers from Microsoft in <cit.>. They concluded that prior knowledge of files leads to useful comments and tends to increase efficiency. The contemporary code review process at Microsoft was examined in <cit.>. The research shows that Microsoft developers spend on average four hours per week on code review, while open-source developers spend five. Microsoft developers also pay more attention to their reviewing relationships with other developers compared to open-source developers.
An observational survey of Mozilla's 88 core developers was conducted in <cit.>. The authors found that approximately 57-69% of developers reviewed fewer than 5 patch files, 10% reviewed 11 to 20 such files, and 4% reviewed more than 21 patch files each week. By analyzing 300,000 code reviews from open-source projects, another study described why code review is responsible for evaluating the reliability of test code and what professional developers do when reviewing test code <cit.>.
§.§ Code Review Automation Empirical Studies
A prototype tool named Code Distance Visualiser was proposed in <cit.> to detect problematic code such as string overflows, memory leaks, null pointer references, and incorrect API usages. The ReviewBot model was proposed in <cit.>, where the authors automated source code checking using a static analyzer and recommended reviewers based on the belief that every line of code has a past history. The cHRev model used three metrics to measure the expertise of reviewers based on their review comments: 1) a higher review count, 2) the reviewer's effort in the workday, and 3) higher weights assigned to the latest reviews <cit.>. RevFinder, a recommendation model for reviewers based on file location, was developed in <cit.>. According to their heuristics, files with identical paths should be reviewed by the same reviewers. To compare file paths, they used four string comparison techniques: 1) longest common prefix, 2) longest common suffix, 3) longest common subsequence and 4) longest common substring. RevRec, developed in <cit.>, consists of two models: the reviewer expertise model (RevRecRE) and the reviewer collaboration model (RevRecRC). The authors evaluated three open-source projects - Android, OpenStack, and Qt. A comparative study on code review usefulness based on textual features and reviewer expertise was conducted in <cit.>. The authors proposed a machine learning model named RevHelper to predict the usefulness of a review comment. Their comparative study was based on two heuristics - 1) differences between useful and non-useful reviews and 2) how reviewers' experience helps them provide appropriate reviews.
§ DATASET DESCRIPTION
The steps regarding the dataset creation process for this research has been briefly shown in the leftmost box of Figure <ref>. We shall describe each of these steps in detail in this section.
§.§ Data Source
We have collected our data from multiple open-source projects hosted in Gerrit [https://www.gerritcodereview.com/]. Gerrit is a popular tool for code review in both open-source and commercial code repositories. Gerrit provides an easily accessible REST API [https://gerrit-review.googlesource.com/Documentation/rest-api.html] for collecting code reviews and their related codes. We have created a Gerrit Miner using Java that mines code reviews from open source code repositories such as Android & Iotivity and stores them in a MySQL database. We later query the database and label the reviews with different criteria described in detail in the upcoming subsections.
§.§ Data Labeling
We have created a labeling application with the Django framework in Python <cit.>.
The labeling app was designed to be user-friendly and intuitive. On entry, the web app asks for the login credentials of the user. Once it is provided, it directly goes to the labeling page and displays a code review comment to the user. The user is asked what type of operation (change type in code) the code review suggests (see Figure <ref>). Four options are provided in the form of a drop-down menu: Insert, Delete, Replace, and Not Enough Information. The web app provides the private URLs to the source code, and by clicking the link the user can view the source code, where the code review was submitted, and the later modification (accepted by reviewer) in the source code side by side (see Figure <ref>).
When the user selects one of the four operations from the drop-down menu, he/she is also asked to provide the code snippet that is impacted by the operation. If the operation is an Insert operation, the user is supposed to provide the code snippet that was to be inserted in a text field named Add Code (only if it is understood from the review what was to be inserted). If the operation is a Delete operation, the user puts the code that was to be removed from the original code in the text box named Remove Code (only if it is understood from the review what was to be removed). If the operation is a Replace operation, the user puts the part of the code that changed in the Remove Code text box, and the part that it changed into in the Add Code text box (only if both these parts can be understood from the code review alone). We also took a human-centric design approach for the labeling app. Each time a sample was submitted, the web page changed its background color so that the labeling process would not become monotonous and would give a sense of progress to the user.
§.§ Label Validation
The reviews were labeled by a team of five independent volunteers who possess substantial experience in programming. All the labelers are from a Computer Science background and have more than two years of working experience with programming languages such as C and Java, specifically in the areas of Android and Iotivity. To ensure consistency in the labeling process, 10% of the reviews were given to all the participants for labeling. The remaining 90% of the samples were unique to each labeler. The admin frequently examined the shared 10% of the data labels to check for any discrepancies among the labelers. If there was considerable variation in the labeling, appropriate measures were taken to make the data labels more consistent. Later on, the entire dataset was manually labeled and reviewed by senior software developers to ensure proper validation of the assigned labels. The final confirmation of the labeling was obtained from the admin and considered conclusive for this dataset.
§ MATERIALS AND METHODS
Figure <ref> provides an overview of the steps in developing ReviewRanker. We have already described the dataset creation step in the previous section. In this section, we are going to elaborate the next four steps which are more related to ReviewRanker training and inference phase.
§.§ Data Preprocessing
§.§.§ Data Labeling:
Our initial dataset consisted of 2052 review comments. After the elimination of redundant samples, we are left with 1483 sample reviews in our final dataset. Let us discuss the ground-truth label assignment process for the three multiple-choice questions asked for each review (the three questions can be found in Section <ref>). In a real-life scenario, the ground-truth labels associated with a particular review are expected to be assigned by the developer or developers to whom the review is directed during the development process. Observing the questions, it is evident that it takes little to no effort from the developers to perform this labeling.
We start with the operation (code change) related question. We define four types of operations: (1) replace (class label 0), (2) delete (label 1), (3) insert (label 2) and (4) not enough information (no label assigned). If a review operation is assigned as "not enough information", then we simply assign that review a confidence score of 0 and exclude that review from ReviewRanker training and inference.
The next two questions are about understanding of what to insert and what to remove from the current code base (both are binary classification tasks). If it is clear from the review what to insert, then the insertion related question receives ground truth label of 1, else the label is 0. The exact same aspect goes for the deletion related question.
If the operation is labeled as "replace" (first question), then it is expected that the labels of both the insertion and deletion related questions will be 1 (this does not always hold in non-ideal cases). Similarly, if the operation is labeled as "delete", then the label of the deletion related question is expected to be 1, while the insertion related question will have a label of 0 in the ideal case; the opposite holds if the operation is labeled as "insert".
Let us now look at an example review - “outer parens not needed”. The labels for this review are as follows:
Operation Type: delete (label 1)
Understanding of something to be added: nothing to add (label 0)
Understanding of something to be deleted: parentheses need to be deleted (label 1)
§.§.§ Similar Word Handling
Our corpus contains more than 3000 unique words, which is a large number considering the small corpus size (fewer than 1500 reviews). So, by replacing all semantically identical words with a single word, we minimize the word list, which helps our model find acceptable relationships between words. While doing so, we use both word stemming and lemmatization. Using word stemming, we can reduce a word's plural form to the singular, normalize its grammatical state, and so on.
Consider, for example, morphological variants such as programs, programming, and programmed: these words are generated from the word "program", and through the word-stemming process we replace all of them with the word program in our unique word list. Using word lemmatization, we can likewise map a set of related word forms back to a single word; for example, inflected forms of the word minor (such as minors) are verbally similar to it, so we replace them with the word minor in our unique word list as well. By doing so, our corpus now contains around 1700 unique words.
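As a sketch of this normalization step, standard NLTK tools can be used; the word lists here are illustrative, and the exact stemmer/lemmatizer used by the authors is not specified:

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer  # pip install nltk

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()   # needs nltk.download('wordnet') once

# Stemming collapses morphological variants toward a common stem.
for w in ["programs", "programming", "programmed"]:
    print(w, "->", stemmer.stem(w))

# Lemmatization maps inflected forms back to a dictionary word.
print(lemmatizer.lemmatize("minors"))   # 'minor'
```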
§.§.§ Special Word Handling:
Our dataset contains code reviews that include a significant number of special words specific to C code; these have no natural-language meaning but play a very important role in review comments. Our proposed model works on the textual relationship between normal words and these special words, so we replace them with common placeholder words based on their operational characteristics. First, we lowercase the starting letter of every word in our corpus. After that, for each word:
* If the word contains any uppercase letter, we replace it with keywordvariable, since variables are usually written in camel case.
* Otherwise, if the word contains .h or #, we replace it with keyworddoth. The presence of such characters denotes header files in C programming.
* Otherwise, if the word contains _, we replace it with keywordunderscore. A word containing an underscore is ambiguous: it may denote a function or a variable, which is why we treat it with a separate keyword.
* Otherwise, if the word contains parentheses, we replace it with keywordfunction, since all function calls include a pair of parentheses.
After such special keyword handling, our corpus contains 1368 unique words, down from the initial 3000.
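A minimal sketch of these replacement rules (the example review is hypothetical) might be:

```python
def normalize_token(word):
    """Replace C-code-specific tokens with placeholder keywords,
    applying the four rules above in order."""
    if any(c.isupper() for c in word):   # camelCase -> variable
        return "keywordvariable"
    if ".h" in word or "#" in word:      # headers / preprocessor
        return "keyworddoth"
    if "_" in word:                      # snake_case: function or variable
        return "keywordunderscore"
    if "(" in word or ")" in word:       # call syntax -> function
        return "keywordfunction"
    return word

def preprocess(review):
    # Lowercase the first letter of each word, then apply the rules.
    words = [w[0].lower() + w[1:] if w else w for w in review.split()]
    return [normalize_token(w) for w in words]

print(preprocess("Replace myBuffer with max_len in stdio.h using sort()"))
# ['replace', 'keywordvariable', 'with', 'keywordunderscore', 'in',
#  'keyworddoth', 'using', 'keywordfunction']
```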
§.§ Feature Extraction
In order to feed a review to a model as input, we need a mathematical representation of that review. We have 1368 unique words in our preprocessed dataset (see Section <ref>). Each review contains a subset of these words. So, we represent each review with a vector V of size 1368, where V_i is the total count of word_i in the review. Let us look at two examples:
Review sample 1: line over fifty characters you should reduce it to twenty characters.
Review sample 2: provide line level comment to line.
If we create a unique word list from this corpus (in order of first appearance), it would be: line, over, fifty, characters, you, should, reduce, it, to, twenty, provide, level, comment.
We can index these words from 0 to 12. The feature vectors for the two sample reviews are then: Review 1: [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0] and Review 2: [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1].
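A short script reproducing this worked example (vocabulary built in order of first appearance) could be:

```python
review1 = "line over fifty characters you should reduce it to twenty characters"
review2 = "provide line level comment to line"

# Build the unique word list in order of first appearance (13 words, 0..12).
vocab = []
for review in (review1, review2):
    for w in review.split():
        if w not in vocab:
            vocab.append(w)

def to_vector(review):
    counts = [0] * len(vocab)
    for w in review.split():
        counts[vocab.index(w)] += 1
    return counts

print(to_vector(review1))  # [1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0]
print(to_vector(review2))  # [2, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1]
```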
Instead of utilizing word-embedding-based approaches such as Word2Vec <cit.> and FastText <cit.>, we have opted for a bag-of-words approach <cit.>. Word embedding produces a semantic vector for each word and is typically employed with recurrent neural networks (RNNs) <cit.>. However, due to our small dataset and straightforward classification tasks, we have observed through five-fold cross-validation that a basic shallow neural network with bag-of-words features outperforms RNNs with word embeddings.
§.§ Model Details
Our proposed algorithm combines three models, as shown in Table <ref>. Details of the classes present under each model can be found in Section <ref>. Each model is a fully connected vanilla neural network, but with a different set of parameter values. The input layer is of size 1368 (the word frequency vector over the 1368 unique words). M_1 and M_2 are used for binary classification, while M_3 is used for multi-class classification (three classes). The ReLU activation function <cit.> is used for the intermediate layers, while Softmax is used for the output layer. A dropout of 20% is applied between consecutive hidden layers to prevent overfitting <cit.>. Categorical cross-entropy <cit.> is used as the loss function, while the Adam (Adaptive Moment Estimation) optimizer <cit.> is used for weight updates.
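A sketch of one such classifier in PyTorch is given below; the hidden-layer width is an assumption for illustration, since the exact sizes are listed in the paper's table:

```python
import torch.nn as nn

def make_model(n_classes, vocab_size=1368, hidden=256, dropout=0.2):
    """One of the three fully connected classifiers. The hidden width
    here is an assumption; the paper's exact sizes differ per model.
    With torch's CrossEntropyLoss, the final Softmax would normally
    be dropped from the module during training."""
    return nn.Sequential(
        nn.Linear(vocab_size, hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
        nn.Linear(hidden, n_classes), nn.Softmax(dim=-1),
    )

m_insert = make_model(2)   # "understood what to insert?" (binary)
m_delete = make_model(2)   # "understood what to delete?" (binary)
m_optype = make_model(3)   # operation type: replace / delete / insert
```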
§.§ Review Confidence Score Generation
Table <ref> illustrates the entire process of confidence score generation for two sample reviews (We assume that the three task specific models M_1, M_2 and M_3 are already trained).
The feature vector of each review is passed through all three models separately. Each model provides a discrete probability distribution over the task-specific classes. For example, model M_3 always provides three probability values (summing to 1) for the three operation-type classes. From each model, we only take the probability score associated with the ground-truth class label (expected to be available for all reviews). Thus, for one review, we get three confidence scores in total from the three models. The final confidence score is the geometric mean ((C_1 × C_2 × C_3)^1/3) of these three confidence scores. A higher confidence score denotes higher review quality, as developer confidence in such reviews is expected to be high.
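A minimal sketch of this combination step (with made-up probability vectors) is:

```python
def review_confidence(p1, p2, p3, y1, y2, y3):
    """Take each model's predicted probability at the ground-truth
    label, then combine the three values by geometric mean."""
    c1, c2, c3 = p1[y1], p2[y2], p3[y3]
    return (c1 * c2 * c3) ** (1 / 3)

# e.g. insert-model p=[0.1, 0.9] with label 1, delete-model p=[0.8, 0.2]
# with label 0, operation-model p=[0.05, 0.85, 0.10] with label 1:
print(review_confidence([0.1, 0.9], [0.8, 0.2], [0.05, 0.85, 0.10], 1, 0, 1))
# ~0.85
```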
§.§ Confidence Score Generation for the Entire Review Set
The expected input to the ReviewRanker system is not a single review, but an entire set of reviews labeled for the three questions/tasks. The three models that are part of ReviewRanker are trained on a fraction of this labeled review set. The confidence scores for the reviews are obtained in a 10-fold cross-validation style. Let us walk through the process. Given a large set of labeled reviews S, we first randomly divide the set into 10 small disjoint subsets S_1, S_2, … S_10 of reviews. For fold no. i of the 10-fold cross-validation, we use all subsets S_j (j ≠ i) to train the three models (from randomly assigned initial weights) and finally use the trained models to predict the final confidence scores of the validation subset S_i. After doing this for all 10 folds, we have confidence scores for all the reviews in the entire set S. The important thing to note is that the confidence score of each review is obtained only when that review is part of the validation subset. This is done to avoid obtaining overfitted scores on training data (many of the confidence scores on training data are close to 1).
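A sketch of this scheme with scikit-learn's KFold follows; train_fn and confidence_fn are hypothetical helpers standing in for model training and per-review confidence prediction:

```python
import numpy as np
from sklearn.model_selection import KFold

def score_all_reviews(X, labels, train_fn, confidence_fn, n_folds=10):
    """Each review is scored only while it sits in the held-out
    validation fold, never by the models trained on it."""
    scores = np.empty(len(X))
    for tr, va in KFold(n_splits=n_folds, shuffle=True).split(X):
        models = train_fn(X[tr], labels[tr])           # 3 fresh models per fold
        scores[va] = confidence_fn(models, X[va], labels[va])
    return scores
```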
§ RESULTS AND DISCUSSION
§.§ Manual Inspection of Assigned Review Quality
We examine both the review text and its corresponding confidence score to gain insight into the behavior of the proposed ReviewRanker system. Our goal is to understand why certain reviews receive higher scores than others. To this end, we randomly selected several reviews with high, average, and low confidence scores and analyzed their content (shown in Table <ref>). Through our analysis, we discovered that reviews with higher confidence scores are generally easy to understand, provide clear suggestions for changes to the code, and use specific variable and function names. Reviews with average confidence scores are sometimes easy to understand but lack substantive information, are excessively long, or contain lengthy blocks of code. Reviews with very low confidence scores are often too short to understand, lack meaningful information, and include asterisks and other special characters. Since ReviewRanker is composed of three training-based neural network models, it is a data-hungry system: the larger the provided review set, the better ReviewRanker can model the developer confidence in a particular review.
§.§ Model Performance
Table <ref> shows the dataset size and performance of the three ReviewRanker models across the 10 folds. The high mean validation accuracy shows that the models can learn to answer the three simple questions associated with review confidence score generation effectively and can generalize well to validation data. The reported performance has implications for the usage of ReviewRanker. If, for some particular set of code reviews, the 10-fold cross-validation performance is not up to the mark, it means that the three models have not been able to learn how to answer the three questions for the provided reviews. In that case, the final confidence score provided by ReviewRanker will not be a reliable metric of review quality.
§.§ ReviewRanker Validation
ReviewRanker has not been validated at industry-wide scale. We made an effort to validate ReviewRanker at small scale in three different software companies. But just as mentioned in the Introduction section, there is high human bias when it comes to manually assigning a quality score to a review as part of the labeling process. Hence, our effort was unsuccessful. Nevertheless, this system has the potential of providing effective review quality scores at industry scale. The system works end-to-end: the input is a set of reviews (with no limitation on the number of reviews provided) and the output is a CSV file containing a confidence score for each provided review. These scores can be used to find out the characteristics of high, average, and poor quality reviews, which in turn can aid software industries in coming up with proper guidelines for providing code reviews. This can save considerable time and cost by minimizing the occurrence of develop-review-develop cycles. Designing an effective industry-wide validation study can be an immediate next research step for ReviewRanker.
§.§ Limitations
ReviewRanker asks three questions regarding change type, code addition, and code deletion while producing the confidence score for a particular review. It does not use the context of the code on which the review was provided, though we firmly believe that using code context in the models while answering the three questions could greatly benefit the confidence score generation process. In such a case, sequence modeling approaches such as Long Short-Term Memory (LSTM) <cit.> or the Transformer <cit.> can be used as the three models of ReviewRanker. One also has to note, however, that these sequence models are extremely data-hungry; so, if a particular review set has fewer than 10K reviews (which is our case as well), it is better to use the simple feature extraction method and model architecture that we propose. The three questions that we ask the developers to label for each sample are not based on any large-scale study. We believe that a more optimal set of questions for review quality estimation could be found through a well-designed, large-scale study. The reviews in the experimental dataset for ReviewRanker are line-level code reviews; we have not tested the method on block-level code reviews, although we expect similar results for such cases as well. Finally, because of the human bias factor, proper validation of the proposed ReviewRanker method could not be performed.
§ CONCLUSION
In this paper, we propose ReviewRanker with the goal of enabling effective inspection of code review quality. We identify the human bias problem of a directly supervised approach and thus resort to a human-bias-free multiple-choice question scheme in order to indirectly obtain the confidence score for each review in a semi-supervised fashion. We ensure that the labeling process requires little to no effort from the developers. ReviewRanker can handle a large number of reviews (theoretically, there is no limitation on the number of reviews provided) and can produce the confidence score for each review in an end-to-end manner with zero external effort required. The proposed system can be implemented easily at industry level to consistently identify the best reviewers and promote the best review practices with minimal time and effort. The adoption of this system is expected to enhance code quality and reduce the back-and-forth cycle of the review process. Some immediate future research directions are: (a) well-designed, industry-scale evaluation of ReviewRanker's effectiveness in review quality estimation, (b) incorporation of code context in the ReviewRanker models and (c) replacement of the current set of questions with a more suitable set through a large-scale study. We plan to make ReviewRanker publicly available in the form of a Python package upon acceptance.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.05077v1 | 20230711072330 | Wide dynamic range charge sensor operation by high-speed feedback control of radio-frequency reflectometry | [
"Yoshihiro Fujiwara",
"Motoya Shinozaki",
"Kazuma Matsumura",
"Kosuke Noro",
"Riku Tataka",
"Shoichi Sato",
"Takeshi Kumasaka",
"Tomohiro Otsuka"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Graduate School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-0845, Japan
WPI-Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980–8577, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Graduate School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-0845, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Graduate School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-0845, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Graduate School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-0845, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
[][email protected]
WPI-Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980–8577, Japan
Research Institute of Electrical Communication, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Graduate School of Engineering, Tohoku University, 6-6 Aramaki Aza Aoba, Aoba-ku, Sendai 980-0845, Japan
Center for Science and Innovation in Spintronics, Tohoku University, 2-1-1 Katahira, Aoba-ku, Sendai 980-8577, Japan
Center for Emergent Matter Science, RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
Semiconductor quantum dots are useful for controlling and observing quantum states and can also be used as sensors for reading out quantum bits and exploring local electronic states in nanostructures.
However, challenges remain for the sensor applications, such as the trade-off between sensitivity and dynamic range and the issue of instability due to external disturbances.
In this study, we demonstrate proportional-integral-differential feedback control of the radio-frequency reflectometry in GaN nanodevices using a field-programmable gate array.
This technique can maintain the operating point of the charge sensor with high sensitivity.
The system also realizes a wide dynamic range and high sensor sensitivity through the monitoring of the feedback signal.
This method has potential applications in exploring dynamics and instability of electronic and quantum states in nanostructures.
Wide dynamic range charge sensor operation by high-speed feedback control of radio-frequency reflectometry
Tomohiro Otsuka
August 12, 2023
==========================================================================================================
Semiconductor quantum dots have been widely studied due to their ability to artificially control and observe quantum states <cit.>.
They can also be used as charge sensors by coupling them to a target system, allowing observation of the quantum state of the target quantum dots <cit.>.
This technique is also demonstrated by using quantum point contact and is useful for reading out quantum bits <cit.>.
Furthermore, such sensors are useful for exploring local electronic states in nanostructures and are an important tool for investigating material properties <cit.>.
It is always necessary to set the operating point of the charge sensor for high sensitivity, and there is an issue of instability of the operating point due to external disturbances, such as fluctuations of the charge states around the potential that forms the quantum dots <cit.>.
Feedback control that continuously monitors the state of the device is considered adequate to address this issue.
As an example of feedback control in quantum devices, the reduction of charge fluctuation in GaAs quantum dots has been reported using proportional-integral-differential (PID) feedback control <cit.>.
In addition to such feedback control, real-time processing of states in quantum dots has recently been demonstrated and is garnering attention for applications such as quantum bit operations <cit.>.
In such cases, field-programmable gate arrays (FPGAs) are used because central processing units (CPUs) are slow and cannot sufficiently compensate for fast fluctuations of the states.
FPGAs can operate at significantly faster speeds and allow for flexible and immediate changes to digital signal processing circuits through hardware programming.
Therefore, they are useful in measurement systems that require flexible specifications.
In order to advance the development of quantum information processing and sensors, it is important to construct measurement systems that combine the charge sensors with FPGAs.
The high-speed feedback control with FPGA is also useful to solve the existing challenges for the sensor applications of semiconductor quantum dots.
For example, the dynamic range of sensing is limited by the Coulomb peak width, resulting in a trade-off between sensor sensitivity and the dynamic range.
Here, we demonstrate the radio-frequency (rf) reflectometry in GaN nanodevices, which technique realizes high-speed readout to explore quantum dynamics <cit.>, and its PID feedback control is implemented by an FPGA.
We analyze the detailed behavior of the PID controllers for GaN nanodevices, including response speed and noise.
We also utilize the derivative term of the PID parameter enabling the fast feedback operation and show the fast response, which was not used in the previous report.
Analysis of the noise in the feedback signal reveals that it reflects the original noise behavior of the device, and we can detect the fluctuation by monitoring the feedback signal with keeping the optimal operation point.
From this insight, we demonstrate that the PID controller can achieve a wide dynamic range and high sensor sensitivity, which was previously a challenge.
We treat GaN/AlGaN heterostructures, which exhibit high electron mobility in two-dimensional electron gases, making them attractive materials not only for electronics applications such as high electron mobility transistors <cit.>, but also from the perspective of quantum devices <cit.>.
The wide and direct band gap in GaN offers the potential for the development of new quantum devices that operate at higher temperatures and can be coupled with light.
Despite the fact that GaN is attractive for quantum device applications, important techniques such as charge state readout using rf-reflectometry have not yet been reported.
Here, we demonstrate the rf-reflectometry in GaN nanodevices.
The schematic of the device structure is shown in Fig. <ref>(a) and (b).
A stack structure of GaN/Al_0.25Ga_0.75N (10 nm)/SiN (30 nm)/SiO_2 (50 nm) is grown by chemical vapor deposition on a silicon substrate.
The source and drain contacts are Ti/Al electrodes, and a gate is TiN with a length of 0.6 μ m.
The two-dimensional electron gas is formed at the interface between the GaN and AlGaN layers.
Fig. <ref>(c) shows a gate voltage V_ G and a source-drain bias voltage V_ SD dependence of a source-drain current I_ SD.
We note that all measurements in this letter are carried out at a temperature of 4.2 K.
We focus on the near pinch-off region, where I_SD is significantly suppressed.
This corresponds to Coulomb blockade of quantum dots formed by defects and/or impurities near the channel of the field-effect transistor <cit.>.
Figure <ref>(d) shows the numerical derivative of I_ SD.
Coulomb diamonds are observed, indicating the formation of quantum dots in our device.
The diamonds are not completely closed at V_SD=0 in Fig. <ref>(a), indicating that multiple quantum dots are formed in this device.
A broad measurement bandwidth is required for the feedback system to perform well.
In the case of a typical direct-current measurement, such as the one we used to check the basic properties of the device, the bandwidth is limited by stray capacitances in the measurement setup.
In order to improve the bandwidth, rf-reflectometry is one of the most powerful techniques.
This technique has been demonstrated in quantum point contacts and dots in gallium arsenide <cit.>, Si/Ge <cit.>, and graphene devices <cit.>.
We construct the rf-reflectometry setup with the feedback system as shown in Fig. <ref>(a).
The input rf signal is applied to the resonator through the phase shifter and the directional coupler.
The resonator is constructed by an inductor L=1.2 μH and stray capacitance C formed in the measurement board and the device.
The reflected rf signal is amplified and demodulated using the local signal, and the rectified voltage V_ rf is sampled by a Keysight M3300A digitizer.
The sampling rate is 100 MS/s.
We check the resonator characteristics as shown in Fig. <ref>(b), and there is a clear sensitive point of V_ rf to V_ G.
To achieve maximum sensitivity, we set the rf frequency to 109.5 MHz and optimize the phase shift of the input rf signal.
In this resonance circuit, the resonator is sensitive around a conductance of G ∼ 70 μS, which satisfies the impedance matching condition.
By designing the device structure, it is possible to adjust the impedance matching condition to around G ∼ 20 μS, which is observed in quantum dots.
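As a consistency check, the stray capacitance implied by the quoted inductance and operating frequency can be estimated from the LC resonance condition:

```python
import numpy as np

L = 1.2e-6     # inductance, H
f0 = 109.5e6   # operating (resonance) frequency, Hz

# f0 = 1 / (2*pi*sqrt(L*C))  =>  C = 1 / (L * (2*pi*f0)**2)
C = 1 / (L * (2 * np.pi * f0) ** 2)
print(C * 1e12)   # ~1.8 pF of stray capacitance
```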
In this case, we focus on a simple pinch-off state showing large conductance changes, as shown in Fig. <ref>(c).
This behavior is similar to that of a quantum point contact.
We set the measurement integration time at 10 μ s to reduce the noise in Fig. <ref>(c).
An impedance-matching condition is satisfied at V_ G=-16 V where V_ rf=0.
The phase between the reflected and local signal is inverted here, resulting in the inversion of the sign of V_ rf.
We describe our feedback system using the PID controller.
Figure <ref>(a) shows the block diagram of the feedback loop.
The output of the PID controller (control signal u(t)) is expressed as
u(t) = K_ P e(t)+K_ I∫ e(t) dt + K_ D de(t)/ dt,
e(t) = V_ s-V_ rf(t),
where, t, V_ s, K_ P, K_ I, and K_ D are the continuous-time, the set point voltage, coefficients for the proportional, integral, and derivative terms, respectively.
In order to treat this system in the discrete-time n domain, the Laplace transformation and the bilinear transformation are applied to Eq. <ref>.
The transfer function of the PID controller H(z) is described as
H(z) = G_ P + G_ I1+z^-1/1-z^-1 + G_ D1-z^-1/1+D z^-1,
where, G_ P = K_ P, G_ I = K_ IT_ S/2, G_ D=2K_ D/T_ S+2T_ F, D=T_ S-2T_ F/T_ S+2T_ F, T_ S is the sampling time, and T_ F is the time constant of a differentiator, respectively.
From H(z), the output of the PID controller in n domain u(n) can be obtained as
u(n) = u_ P(n) + u_ I(n) + u_ D(n) + u(n-1),
u_ P(n) = G_ P e(n),
u_ I(n) = G_ I[e(n)+e(n-1)] + u_ I(n-1),
u_ D(n) = G_ D[e(n)-e(n-1)] - Du_ D(n-1),
e(n) = V_ s-V_ rf(n).
The proportional, integral, and derivative terms are processed in parallel.
In addition, we use a digital low-pass filter and an averaging process before the input of the PID controller in order to reduce aliasing.
The fastest I/O latency of the M3300A is 100 ns.
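A minimal software sketch of the update equations above (omitting the FPGA-side anti-aliasing filter and averaging) is:

```python
class DiscretePID:
    """Discrete-time PID following the recursions above: parallel
    P, I, and filtered-D terms accumulated into u(n)."""

    def __init__(self, G_P, G_I, G_D, D, V_s=0.0):
        self.G_P, self.G_I, self.G_D, self.D = G_P, G_I, G_D, D
        self.V_s = V_s
        self.e_prev = self.uI_prev = self.uD_prev = self.u_prev = 0.0

    def step(self, V_rf):
        e = self.V_s - V_rf
        uP = self.G_P * e
        uI = self.G_I * (e + self.e_prev) + self.uI_prev
        uD = self.G_D * (e - self.e_prev) - self.D * self.uD_prev
        u = uP + uI + uD + self.u_prev
        self.e_prev, self.uI_prev, self.uD_prev, self.u_prev = e, uI, uD, u
        return u

# Parameters quoted in the text for the step-response measurement:
pid = DiscretePID(G_P=0.10, G_I=0.0, G_D=0.65, D=0.80)
```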
Figure <ref>(b) shows the response of our feedback system.
A synthesized step disturbance V_ n is applied to V_ G, where V_ G=-16.03 V and V_ s=0.
This emulates an electrostatic potential fluctuation caused by defects and/or impurities near the channel, which acts as an effective gate voltage.
Then, the PID controller operates to compensate for V_n, resulting in the stabilization of V_rf.
Here, we set the PID parameters G_ P=0.10, G_ I=0, G_ D=0.65, D = 0.80, T_ F=5 ns and T_ S=90 ns, respectively.
These time traces are averaged by 1000 trials.
When V_ G is shifted by V_ n, V_ rf is rapidly returned and stabilized at V_ s by the PID output u(n).
The PID controller operates successfully.
One of the differences from the previous report is the setting of parameters.
We use mainly proportional and derivative terms, while the previous study adopted proportional and integral terms <cit.>.
The derivative term suppresses an overshoot induced by K_ P and contributes to the fast response.
In the case of our system, we do not use the integral term because u oscillates due to this term when the PID controller runs with a high-speed clock of over 10 MHz.
We define the fall time as the time between the V_n input and the moment V_rf reaches 90% of the maximum change, and the rise time τ as the time it takes to return to within 10% of the maximum change.
The fall time is mainly due to the system I/O latency.
Under this condition, we achieve τ = 1.97 μs, faster than in the previous study.
Figure <ref>(c) shows the disturbance amplitude Δ V dependence of τ at V_ s=0 and 16 mV.
We use the same PID parameters in all measurement conditions, and the feedback control operates well on the microsecond scale even when the conditions change.
τ increases with increasing Δ V for V_ s=0 while it decreases for V_ s = 16 mV.
Because the PID parameters are optimized at V_ s=0 and Δ V=10 mV, τ increases when the condition changes away from this.
As demonstrated, the feedback control might not always perform optimally, indicating that it would be better to choose optimum parameters for each V_ s in order to improve performance.
The parameters of our PID controller are easy to tune because only the proportional and derivative processes are used.
We also investigate the noise power spectral density (PSD) of V_ rf with the PID controller and its output u.
Figure <ref>(a) shows the PSD of V_ rf with the PID on/off and u at V_ s=0.
The bandwidth is limited by a digital low-pass filter of 5 MHz.
At a glance, the low-frequency noise including the flicker noise below 100 kHz is significantly suppressed by the PID control.
This cut-off frequency of the PID controller corresponds to 1/τ we observed in Fig. <ref>.
We also focus on the PSD of u, which shows behavior similar to that of V_rf with the PID off; this means that we can probe and track noise slower than the feedback control by monitoring u while keeping V_rf stabilized.
Figure <ref>(b) shows the readout deviation σ of V_ rf and u.
While σ_V rf is almost constant as V_s is changed, σ_u depends on V_s.
The flicker noise contributes to σ_u and is proportional to | dV_ rf/ dV_ G|.
Therefore, the V_ s dependence of σ_u reflects | dV_ rf/ dV_ G|.
Note that the PID parameters are optimized at V_ s=0, and its response deteriorates away from this optimal point as shown in Fig. <ref>(c), which may also affect σ_u.
This behavior agrees well with previous findings <cit.>.
On the other hand, the background of σ_V rf is larger than that of σ_u.
The PSD of V_ rf exhibits a slight increase in a frequency domain higher than the feedback control frequency of 100 kHz, which appears to increase σ_V rf independent of V_ s.
Finally, we demonstrate the wide-range charge sensing by the PID controller.
Quantum dots can be utilized as charge detection sensors, and they can probe local electronic states in nanostructures <cit.>.
Due to the electrostatic coupling between the quantum dots and local electronic states, the changes in the charge states act as effective V_ G.
For the charge sensor application, the sensor state should be stabilized.
In addition, the saturated regions of the V_G dependence of V_rf have no sensitivity to the charge state, indicating that the available sensing range is limited by the width of the region where V_rf has a finite slope with respect to V_G.
While a large electrostatic coupling leads to a large signal that detects the charge state, the signal could exceed the sensitive region of V_ rf.
The PID controller addresses these concerns by monitoring the signal through the feedback control.
We applied a sinusoidal signal as a simulated charge-state change, with V_s = 0.
The schematic of the measurement is shown in Fig. <ref>(a).
At first, we monitor V_ rf with PID off as shown in Fig. <ref>(b).
It can be seen that the signal saturates around V_ rf∼± 20 mV, and we cannot probe any charge states in these regions.
Next, we monitor u with the PID on.
As clearly seen in the result, the charge state is perfectly tracked by monitoring u while keeping the operating point stabilized.
The PID controller serves as a sensitive and wide-dynamic-range probe for the local electronic states in nanostructures.
The tracking range is limited by the maximum range of u, which is 1.5 V.
This value is greater than the detection range with the PID off.
The PID controller is expected not only to stabilize the quantum dots but also to be a high-performance probing tool for local electronic states.
In conclusion, we demonstrate the rf-reflectometry in GaN nanodevices, which enables high-speed readout for investigating quantum dynamics.
We implement PID feedback control by FPGAs and analyze the behavior of the PID controller, including response speed and noise PSD.
By utilizing the derivative term in the PID, we achieve a fast response.
The PID control significantly suppresses low-frequency noise, including flicker noise below 100 kHz, and PSDs of V_ rf with the PID off and u show similar behavior.
This result allows us to detect the change in charge states by monitoring u while keeping the optimal operating point.
Consequently, we demonstrate that PID controllers successfully enable both a wide dynamic range and high sensor sensitivity.
The present system and operation are useful for applications in quantum dots and exploration of the electronic and quantum states in nanostructures.
The authors thank N. Ito, T. Tanaka, K. Nakahara, the RIEC Fundamental Technology Center, and the Laboratory for Nanoelectronics and
Spintronics for fruitful discussions and technical support.
Part of this work is supported by MEXT Leading Initiative for Excellent Young Researchers,
Grants-in-Aid for Scientific Research (21K18592, 23H01789, 23H04490),
Rohm Collaboration Project,
Fujikura Foundation Research Grant,
Tanigawa Foundation Research Grant,
Maekawa Foundation Research Grant,
The Foundation for Technology Promotion of Electronic Circuit Board,
Iketani Science and Technology Foundation Research Grant,
and FRiD Tohoku University.
|
http://arxiv.org/abs/2307.07288v1 | 20230714115947 | Implicit Neural Feature Fusion Function for Multispectral and Hyperspectral Image Fusion | [
"ShangQi Deng",
"RuoCheng Wu",
"Liang-Jian Deng",
"Ran Ran",
"Tai-Xiang Jiang"
] | cs.CV | [
"cs.CV"
] |
Both authors contributed equally to this research.
shangqideng0124@gmail
[1]
School of Mathematical Sciences, University of Electronic Science and Technology of China
Chengdu
China
611731
Corresponding author
[email protected]
School of Mathematical Sciences, University of Electronic Science and Technology of China
Chengdu
China
611731
[email protected]
School of Mathematical Sciences, University of Electronic Science and Technology of China
Chengdu
China
611731
[email protected]
Southwestern University of Finance and Economics
Chengdu
China
611130
Multispectral and Hyperspectral Image Fusion (MHIF) is a practical task that aims to fuse a high-resolution multispectral image (HR-MSI) and a low-resolution hyperspectral image (LR-HSI) of the same scene to obtain a high-resolution hyperspectral image (HR-HSI). Benefiting from a powerful inductive bias capability, CNN-based methods have achieved great success in the MHIF task. However, they lack interpretability and require stacked convolution structures to enhance performance. Recently, Implicit Neural Representation (INR) has achieved good performance and interpretability in 2D tasks due to its ability to locally interpolate samples and utilize multimodal content such as pixels and coordinates. Although INR-based approaches show promise, they require extra construction of high-frequency information (e.g., positional encoding). In this paper, inspired by previous work on the MHIF task, we realize that the HR-MSI can serve as a high-frequency detail auxiliary input, leading us to propose a novel INR-based hyperspectral fusion function named Implicit Neural Feature Fusion Function (INF³). As an elaborate structure, it solves the MHIF task and addresses deficiencies in INR-based approaches. Specifically, our INF³ designs a Dual High-Frequency Fusion (DHFF) structure that obtains high-frequency information twice, from the HR-MSI and the LR-HSI, and then subtly fuses it with coordinate information. Moreover, the proposed INF³ incorporates a parameter-free method named INR with cosine similarity (INR-CS) that uses cosine similarity to generate local weights from feature vectors. Based on INF³, we construct an Implicit Neural Fusion Network (INFN) that achieves state-of-the-art performance on two public MHIF datasets, i.e., CAVE and Harvard. The code will soon be made available on GitHub.
INF³: Implicit Neural Feature Fusion Function for Multispectral and Hyperspectral Image Fusion
Tai-Xiang Jiang
August 12, 2023
==============================================================================================
§ INTRODUCTION
Hyperspectral imaging involves capturing a scene in various contiguous spectral bands. Compared to traditional single or few-band images (such as those with RGB channels), hyperspectral (HS) images provide finer information about real observations and thus better characterize image scenes. As a result, HSIs have found wide application in different areas of computer vision and have improved the accuracy of several tasks, such as object recognition, classification, tracking, and segmentation <cit.>. However, practical optical sensor systems face limitations in incident energy, necessitating tradeoffs between spatial resolution and spectral refinement. In particular, hyperspectral (HS) images with more than 100 bands often have a relatively low spatial resolution, while multispectral (MS) images with a limited number of bands have a relatively high spatial resolution. Therefore, exploring the fusion of a high spatial resolution multispectral image (HR-MSI) and a low spatial resolution hyperspectral image (LR-HSI) of the same scenario into a high spatial resolution hyperspectral image (HR-HSI) has attracted increasing attention. The aim is to obtain as rich and precise HR and HS data as possible.
In recent times, CNN-based methods have achieved considerable success in multispectral and hyperspectral image fusion thanks to their remarkable ability to extract high-level features. A common design is the two-stream fusion network, which processes the HR-MSI and LR-HSI in separate branches before merging them. To preserve both spatial and spectral information, existing work attempts to design attention modules that produce high-quality spatial details. However, most existing networks are built on generic CNN frameworks, which lack interpretability for MHIF tasks.
Motivated by recent advances in Implicit Neural Representation (INR) for 3D object/scene representation <cit.> and image super-resolution <cit.>, we propose to re-examine the fusion process from a different perspective. INR maps continuous spatial coordinates to signals in a target domain through an implicit function. To obtain prior information from different scenes and integrate it with the implicit function, an encoder is typically employed to extract latent codes from the scene or imagery. For 2D tasks, the implicit function usually takes a weighted average of a fixed number of neighboring latent codes to ensure continuity of the output values. However, due to the lack of sufficient prior information across neighboring coordinates, the weights of such implicit interpolation commonly depend on areas <cit.> or network parameters <cit.>, which limits performance or interpretability. We therefore generate fusion weights with a parameter-free cosine-similarity computation over the latent codes. Additionally, the MLP-ReLU structure used by INR has an inherent bias against high-frequency information <cit.> that is not easily eliminated during training. We therefore propose to align the HR-MSI and LR-HSI images and extract high-frequency information in a multiscale and multimodal manner. Finally, we integrate the weight generation and image fusion into a unified implicit function, called the Implicit Neural Feature Fusion Function (INF³).
The contribution of this paper is listed as follows:
* We propose the Implicit Neural Feature Fusion Function (INF³), the first attempt to apply Implicit Neural Representation (INR) to the Multispectral and Hyperspectral Image Fusion (MHIF) task. In the fusion stage, we only utilize an MLP layer, which reduces the burden of stacking many convolutions.
* To enrich the network's input, our INF³ adopts a Dual High-Frequency Fusion (DHFF) structure across three modalities that combines high-frequency spatial information at different resolutions. Concretely, this allows the MLP layer in INF³ to access more high-frequency information for detail recovery.
* The proposed INR with cosine similarity (INR-CS) method uses cosine similarity to generate weights, making better use of the information inside the pixels rather than distances or areas. The method does not depend on any extra parameters or network structures; instead, it generates the weights from the cosine similarity between feature vectors and fuses local information.
* Based upon INF³, we construct an Implicit Neural Fusion Network (INFN) with an encoder-decoder architecture. The proposed INFN achieves state-of-the-art performance on two public datasets, i.e., CAVE and Harvard. In particular, the proposed decoder has a lightweight structure yet prevents overfitting of INR structures on MHIF tasks.
§ RELATED WORK
§.§ CNNs in MHIF
Recently, CNN-based techniques have shown significant success in multispectral and hyperspectral image fusion (MHIF) due to their capacity to learn high-level features from input data through end-to-end training. Among these methods, SSRNet <cit.> uses three convolution modules (fusion, spatial edge, and spectral edge) to restructure the image, with a loss function connected to the spatial and spectral edges ensuring training reliability. Similarly, ResTFNet <cit.> utilizes residual structures and a two-stream fusion network to learn input data from different modalities, inspired by the widespread application of ResNet <cit.> in image super-resolution. MHF-net <cit.>, on the other hand, was specifically designed for the MS/HS fusion task, incorporating a well-researched linear mapping that links the HR-HSI image to the HR-MSI and LR-HSI images, as well as clear interpretability. Meanwhile, MoG-DCN <cit.> builds a dedicated sub-network for approximating the degradation matrix and leverages DCN-based image regularization <cit.> for HISR, fully exploiting prior HSI knowledge. For simultaneous extraction of spatial and spectral information and production of high-quality details, HSRnet <cit.> employs channel and spatial attention modules. To ensure bidirectional data consistency and improve accuracy in both spatial and spectral domains, DBIN <cit.> proposes a deep learning-based approach that optimizes the observation model and fusion procedures repeatedly and alternately during reconstruction. While CNNs provide strong structural priors, INR-based approaches have also shown tremendous potential for both 3D and 2D tasks.
§.§ Implicit Neural Representation
Recently, implicit representations of 3D objects, scenes and shapes have gained significant momentum in research. Traditional discrete explicit representations have been partly replaced by implicit neural representations (INR), which use parameterized MLPs to map coordinate information into signals (coor-MLP) in the target domain. For example, NeRF <cit.> expanded the input 3D coordinate to a continuous 5D scene representation with a 2D viewing direction, resulting in better renderings of high-frequency scene content than explicit 3D representations such as voxel methods, point clouds, and meshes. DeepSDF <cit.> takes a 3D coordinate and a categorical latent code as input and outputs the signed distance (SDF) at this coordinate to determine whether it is inside the target shape. Related works have enhanced INR's ability to model 3D surfaces and shapes <cit.>. This approach has also been extended to the 2D domain: for example, the Local Implicit Image Function (LIIF) <cit.> extracts a set of latent codes distributed in the LR domain to interpolate the HR target image. Based on LIIF, UltraSR <cit.> applies a residual structure to the 2D INR process and adds multiple injections of coordinate information. Furthermore, LTE <cit.> proposes a local texture estimator to characterize the image information in the Fourier domain and incorporate it with the coordinate information as input to the MLP. SIREN <cit.> proposes an overall implicit neural representation framework that adapts to complex natural signals and their derivatives using a periodic activation function. CRM <cit.> performs image segmentation refinement using implicit neural representations. For multimodal data, JIIF <cit.> proposes using INR to reconstruct HR depth maps from noisy low-resolution depth images under the guidance of RGB images. This work strongly inspired us to use INR for multispectral and hyperspectral image fusion. However, previous work has demonstrated the limitations and biases of the MLP-ReLU structure in learning high-frequency information <cit.>. Therefore, we focus on designing strategies for the fusion process across different modalities to improve the representation of high frequencies, and we add a decoder after the MLP layer to correct this bias.
§.§ Motivation
LR-HSI and HR-MSI provide abundant spectral and spatial information, respectively, making them a valuable resource for image analysis. However, fully utilizing the local content of these images and fusing information from different modalities, such as spatial, spectral, and coordinate, are challenging. To address this issue, we propose an implicit neural fusion network (INFN) that relies on the implicit neural representation (INR) of the image. The INR-based approaches have demonstrated exceptional performance in arbitrary-scaled image super-resolution tasks <cit.>, frequently employing a multilayer perceptron (MLP) as the fusion component. However, MLPs tend to acquire low-frequency information, necessitating additional input of high-frequency data, such as position or frequency encoding <cit.>. To overcome this limitation, we introduce the implicit neural feature fusion function (INF³). Inspired by the multiscale injection branch of SSconv <cit.>, in INF³ we inject detailed high-frequency information in dual scales, specifically using MLPs to learn high-frequency data for MHIF task. Additionally, we address the challenge of identifying feature vectors that are close in distance but different in angle by proposing that our INF³ utilizes cosine similarity between feature vectors to compute coefficients. In detail, we utilize full-size and reduced-size HR-MSI to generate interpolated weights, eliminating the need for network learning or additional parameters. As a result, our fusion framework has demonstrated state-of-the-art performance on two publicly available datasets.
§ METHODOLOGY
In this section, we present our INF³ representation designed for the MHIF task. We first introduce the overall architecture of our implicit neural fusion network (INFN) in Sec. <ref>. Subsequently, we review recent implicit neural representations (INR) for 2D tasks in Sec. <ref>. Finally, we describe the design of INF³ in Sec. <ref> for the fusion process.
§.§ The Overall Architecture
As shown in Fig. <ref>, the INFN is generally divided into two segments: encoder and decoder. In practice, it is evident that directly applying an INR-based approach to address MHIF tasks often leads to overfitting. To overcome this challenge and ensure network stability during training, we have opted for an encoder-decoder architecture. Supplementary materials will include relevant ablation experiments. Specifically, the encoder stage can be formulated as follows:
ℰ= Encoder(𝒳,𝒴,𝒞),
where ℰ∈ℝ^H× W× C represents the fusion result of the encoder, 𝒳∈ℝ^h× w× S denotes the LR-HSI, 𝒴∈ℝ^H× W× s denotes the HR-MSI, and 𝒞∈ℝ^H × W × 2 is the normalized 2D coordinate map in the high-resolution (HR) domain. In detail, we represent a pixel by its center position and scale the H× W coordinate map into the square grid [-1,1]× [-1,1], which makes it convenient to share coordinates between the HR and LR domains. The normalization in the HR domain can be formulated as:
𝒞(i,j)=[-1+(2i+1)/H, -1+(2j+1)/W],
where i∈[0,H-1], j∈[0,W-1]. To deal with the different input modalities, i.e., LR-HSI and HR-MSI, we utilize the functions F_ψ and F_ϕ to extract the spatial and spectral information, respectively. The spectral function can be formulated as follows:
𝒮_pe= F_ϕ(𝒳),
where 𝒮_pe∈ℝ^h× w× D_1 is the feature map of the spectral modality, ϕ denotes the learnable parameters of the spectral function, and D_1 is its number of output channels. To extract information from the spatial modality, we concatenate the bicubic-interpolated LR-HSI 𝒳^U ∈ℝ^H× W× S with the HR-MSI 𝒴∈ℝ^H× W× s and feed the result into the spatial function F_ψ. Specifically, this process can be expressed as:
𝒮_pa= F_ψ( Concat(𝒳^U,𝒴)),
where 𝒮_pa∈ℝ^H× W× D_2 is the feature map of the spatial modality, ψ denotes the learnable parameters of the spatial function, and D_2 is its number of output channels. In addition, Concat(·) denotes concatenation along the channel dimension. We regard the INF³ framework as the core of the encoder, which can be formulated as:
ℰ = INF^3(𝒮_pe,𝒮_pa, 𝒞).
For the decoding process, we feed the encoder output ℰ∈ℝ^H× W × C into a two-layer convolution structure to generate the decoding result 𝒟∈ℝ^H× W × S. The parameters of the decoder are shared by all training patches. In general, neural networks tend to predict frequencies in the low-frequency region. However, past work has shown that a long skip connection in a local implicit representation enriches the high-frequency components of the residuals and stabilizes convergence <cit.>. We therefore add the bicubic-interpolated LR-HSI 𝒳^U as a long skip connection to ameliorate this problem. The final signal thus takes the form:
𝒳̃ = Decoder(ℰ) + 𝒳^U.
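To make the overall pipeline concrete, the following PyTorch sketch assembles the encoder-decoder flow of the equations above. It is a minimal illustration under our own assumptions: the two branch extractors are stand-in single convolutions (the actual F_ϕ and F_ψ may be deeper), `inf3` is a placeholder for the INF³ module of Sec. <ref>, and all names are ours rather than from a released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_coord(H, W):
    # Normalized pixel-center coordinates in [-1, 1], following the
    # normalization equation above.
    ys = -1 + (2 * torch.arange(H, dtype=torch.float32) + 1) / H
    xs = -1 + (2 * torch.arange(W, dtype=torch.float32) + 1) / W
    return torch.stack(torch.meshgrid(ys, xs, indexing="ij"), dim=-1)  # (H, W, 2)

class INFNSketch(nn.Module):
    def __init__(self, inf3, S=31, s=3, D1=64, D2=64, C=64):
        super().__init__()
        self.f_spe = nn.Conv2d(S, D1, 3, padding=1)      # spectral branch F_phi
        self.f_spa = nn.Conv2d(S + s, D2, 3, padding=1)  # spatial branch F_psi
        self.inf3 = inf3                                 # INF^3 fusion module (Sec. 3.3)
        self.decoder = nn.Sequential(                    # lightweight two-layer decoder
            nn.Conv2d(C, C, 3, padding=1), nn.ReLU(),
            nn.Conv2d(C, S, 3, padding=1),
        )

    def forward(self, X, Y):
        # X: (B, S, h, w) LR-HSI;  Y: (B, s, H, W) HR-MSI
        H, W = Y.shape[-2:]
        X_up = F.interpolate(X, size=(H, W), mode="bicubic", align_corners=False)
        S_pe = self.f_spe(X)                             # LR spectral features
        S_pa = self.f_spa(torch.cat([X_up, Y], dim=1))   # HR spatial features
        coord = make_coord(H, W).to(X.device)
        E = self.inf3(S_pe, S_pa, coord)                 # fused feature map, (B, C, H, W)
        return self.decoder(E) + X_up                    # long skip connection
```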
§.§ Implicit Neural Representation
In this section, we introduce implicit neural representation (INR) from the perspective of interpolation. Given a low-resolution image x ∈ℝ^h× w× 3 and the corresponding high-resolution (HR) interpolated image x̂∈ℝ^H× W× 3 as an example, the INR up-sampling process at position C_q can be expressed as:
x̂(C_q)=∑_i∈𝒩_qw_q,iv_q,i,
where C_q∈ℝ^2 is the normalized coordinate of the query pixel in the HR domain, 𝒩_q is the set of the four neighboring pixel coordinates of C_q in the LR domain, w_q,i∈ℝ is the interpolation weight, and v_q,i∈ℝ^1× 1× 3 is the predicted value vector derived from x. The interpolation weights are usually normalized so that ∑_i∈𝒩_qw_q,i=1. Previous work usually sets 𝒩_q to the pixels at the four centers nearest to C_q in the LR domain. The calculation of the interpolation weights varies from article to article; the simplest formulation, the area-weight interpolation used by LIIF <cit.>, is as follows:
w_q,i=A_i/A,
where A_i is the partial area diagonally opposite to the i-th corner pixel and A = ∑_i∈𝒩_qA_i is the total area, serving as the denominator. In detail, LIIF fuses LR pixel information with HR relative coordinate information through an MLP to generate the interpolated value v_q,i, which takes the following form:
v_q,i=MLP_Θ(x(C_i),C_q-C_i),
where MLP_Θ(·) is an MLP layer with learnable parameters Θ that takes a local feature vector x(C_i) in the LR domain and a relative coordinate C_q-C_i as inputs. From the above equations, the interpolated vector can be represented by a set of local feature vectors in the LR domain, which store the low-resolution information of the local region. In general, INR-based methods implement up-sampling in the arbitrary super-resolution task by querying the latent codes x(C_i) with the relative query coordinate C_q-C_i.
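The following sketch illustrates this generic local implicit interpolation, assuming the four neighboring latent codes and their centers have already been gathered in the order (top-left, top-right, bottom-left, bottom-right); the helper name and shapes are our own.

```python
import torch

def inr_interpolate(values, centers, coord_q, mlp):
    """
    A minimal sketch of the equations above (names are ours).
    values:  (Q, 4, C)  latent codes v of the 4 LR neighbors of each query,
             ordered (top-left, top-right, bottom-left, bottom-right)
    centers: (Q, 4, 2)  their normalized pixel-center coordinates C_i
    coord_q: (Q, 2)     query coordinates C_q in the HR domain
    mlp:     callable mapping feature dim (C + 2) to the output channels
    """
    rel = coord_q[:, None, :] - centers              # relative coordinates C_q - C_i
    v = mlp(torch.cat([values, rel], dim=-1))        # v_q,i = MLP(x(C_i), C_q - C_i)
    # Area weights: each neighbor is weighted by the rectangle area
    # diagonally opposite to it (w_q,i = A_i / A), i.e. bilinear weights.
    a = rel[..., 0].abs() * rel[..., 1].abs()        # (Q, 4) local rectangle areas
    a = a.flip(dims=[1])                             # pair each corner with its opposite
    w = a / a.sum(dim=1, keepdim=True).clamp_min(1e-9)
    return (w[..., None] * v).sum(dim=1)             # weighted sum over the 4 neighbors
```

A plain `torch.nn.Linear(C + 2, out_channels)` can serve as `mlp` here, since it operates on the last dimension.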
§.§ Implicit Neural Feature Fusion Function
The objective of the MHIF task is to fuse the different modal inputs of LR-HSI and HR-MSI, resulting in the generation of an HR-HSI with high spectral and spatial resolution. Previous fusion techniques usually construct two separate CNN branches for the LR-HSI and HR-MSI <cit.>, and then extract sets of CNN features <cit.>. However, CNN-based fusion methods depend heavily on stacking convolution structures and lack interpretability. Inspired by recent developments in INR <cit.>, we propose a multimodal and multiscale fusion function based on the INR framework and use a parameter-free weight-generation method to facilitate the mining of high-frequency information in the fusion process. In summary, we create the Implicit Neural Feature Fusion Function (INF³) to guide the fusion process. Unlike the LIIF <cit.> representation, which directly generates the predicted signal, our INF³ is designed to generate the fused feature map ℰ∈ℝ^H× W× C, followed by a decoder structure that produces our final output 𝒳̃. Specifically, the fused feature map ℰ at position C_q can be represented as follows:
ℰ_q=∑_i∈𝒩_qw_q,iℱ_q,i,
where 𝒩_q indicates the set of the four query coordinates nearest to C_q in the normalized HR domain, and w_q,i and ℱ_q,i are the weights and multimodal fusion information of query coordinate C_q at position C_i, respectively. Typically, we regard ℱ_q,i∈ℝ^1× 1× C as the fused feature vector at position C_i when querying coordinate C_q. In the following, we introduce how to generate ℱ_q,i and w_q,i.
Dual high-frequency fusion:
We observe that information at different resolutions plays an important role in MHIF tasks. To this end, we design the dual high-frequency fusion (DHFF) structure, which combines high-frequency spatial information at different resolutions. Firstly, we concatenate the LR-domain spectral information 𝒮_pe∈ℝ^h× w× D_1 and spatial information 𝒮_pa^D ∈ℝ^h× w× D_2 at position C_i, which can be formulated as follows:
ℱ_q,i^1=Concat(𝒮_pe(C_i),𝒮_pa^D(C_i)),
where ℱ_q,i^1 ∈ℝ^1× 1× (D_1+D_2) is the fusion of spectral and spatial information at the same resolution. Specifically, we generate the LR-domain high-frequency spatial information 𝒮_pa^D by the following formula:
𝒮_pa^D=Mean(𝒮_pa).
Given an up-sampling ratio r, we apply a mean operation over r × r regions of 𝒮_pa∈ℝ^H× W× D_2 to obtain 𝒮_pa^D ∈ℝ^h× w× D_2. This design incorporates LR-domain high-frequency information to better serve the MLP layer. In Sec. <ref>, we design an ablation study to verify the effectiveness of this design. Secondly, we combine the HR-domain information 𝒮_pa∈ℝ^H× W× D_2 at position C_q with the LR-domain fusion information ℱ_q,i^1. This process can be expressed as:
ℱ_q,i^2=Concat(ℱ_q,i^1,𝒮_pa(C_q)),
where ℱ_q,i^2 ∈ℝ^1× 1× (D_1+D_2+D_2) serves as the result of DHFF. In general, our DHFF naturally combines feature vectors of different modalities and different scales so that the MLP can acquire both spatial and spectral information. Similar to previous INR-based work, we add the coordinate modality by appending the relative position of C_q and C_i to the fusion process, which can be represented as:
ℱ_q,i^3=Concat(ℱ_q,i^2,C_q-C_i),
where ℱ_q,i^3 ∈ℝ^1× 1× (D_1+D_2+D_2+2) is the result of adding the relative interpolation distance to ℱ_q,i^2. Finally, we utilize an MLP layer to learn the information in ℱ_q,i^3:
ℱ_q,i=MLP_Θ(ℱ_q,i^3),
where ℱ_q,i∈ℝ^1× 1× C is the multimodal fusion information at position C_i when querying coordinate C_q, MLP_Θ(·) is a fully connected layer, and Θ denotes its learnable parameters.
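The assembly of ℱ_q,i^1, ℱ_q,i^2 and ℱ_q,i^3 can be sketched as below. The helper is hypothetical and assumes the neighbor features have already been gathered per query, with 𝒮_pa^D obtained by average pooling (e.g., `torch.nn.functional.avg_pool2d(S_pa, r)`).

```python
import torch

def dhff_features(S_pe_i, S_pa_D_i, S_pa_q, rel_coord, mlp):
    """
    A sketch of the DHFF equations above; all names are ours.
    S_pe_i:    (Q, 4, D1)  LR spectral features S_pe at the neighbors C_i
    S_pa_D_i:  (Q, 4, D2)  mean-pooled LR spatial features S_pa^D at C_i
    S_pa_q:    (Q, D2)     HR spatial features S_pa at the query C_q
    rel_coord: (Q, 4, 2)   relative coordinates C_q - C_i
    """
    f1 = torch.cat([S_pe_i, S_pa_D_i], dim=-1)                       # F^1: LR-domain fusion
    f2 = torch.cat([f1, S_pa_q[:, None].expand(-1, 4, -1)], dim=-1)  # F^2: inject HR detail
    f3 = torch.cat([f2, rel_coord], dim=-1)                          # F^3: add coordinates
    return f1, mlp(f3)                                               # F^1 is reused for the weights
```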
Cosine similarity: The proposed INR with cosine similarity (INR-CS) method generates weights based on cosine similarity. In Eq. (<ref>), w_q,i∈ℝ is the weight at position C_i when querying coordinate C_q. Part of the previous work viewed the generation of this weight simply as a solution to an interpolation problem, using area-based methods to generate the target weights <cit.>, which ignores local texture and the information in the data itself. Other work proposes to learn the weights through network parameters, e.g., via graph attention mechanisms <cit.>, which lacks interpretability. In order to utilize the information in ℱ_q,i while keeping interpretability, we propose a parameter-free approach named INR-CS, as follows:
w_q,i=exp(‖ℱ_q,q^1‖·‖ℱ_q,i^1‖⟨ℱ_q,q^1, ℱ_q,i^1⟩)/w_q,
where
w_q = ∑_i∈𝒩_qexp(‖ℱ_q,q^1‖·‖ℱ_q,i^1‖⟨ℱ_q,q^1, ℱ_q,i^1⟩).
ℱ_q,i^1 is given by Eq. (<ref>), where q denotes the point closest to C_q in the LR domain; in detail, ℱ_q,q^1=Concat(𝒮_pe(q),𝒮_pa^D(q)). Here ⟨·,·⟩ denotes the cosine of the angle between two vectors, so the exponent ‖ a‖·‖ b‖⟨ a,b⟩ is simply the inner product of the feature vectors. The similarities between feature vectors are normalized by the softmax function.
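A minimal sketch of the INR-CS weights follows (variable names are ours); since the exponent reduces to an inner product, the weights are a softmax over dot products.

```python
import torch

def inr_cs_weights(f1_q, f1_i):
    """
    f1_q: (Q, D) feature F^1 at the LR pixel q nearest to C_q
    f1_i: (Q, 4, D) features F^1 at the four neighbors C_i
    Since ||a||*||b||*cos<a,b> = a.b, the softmax can be computed
    directly over the inner products.
    """
    logits = (f1_q[:, None, :] * f1_i).sum(dim=-1)   # (Q, 4) inner products
    return torch.softmax(logits, dim=1)              # normalized weights w_q,i
```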
§ EXPERIMENT
Datasets:
Following previous studies, we conduct experiments to evaluate our model on the CAVE [<https://www.cs.columbia.edu/CAVE/databases/multispectral/>] and Harvard[<http://vision.seas.harvard.edu/hyperspec/index.html>] datasets. In detail, the CAVE dataset contains 32 HSIs with 31 spectral bands ranging from 400 nm to 700 nm in increments of 10 nm. We randomly select 20 images for training, and the remaining 11 images make up the testing dataset. In addition, the Harvard dataset includes 77 HSIs of both indoor and outdoor scenes, with each HSI having a size of 1392 × 1040 × 31 and spanning the 420 nm to 720 nm spectral range. We choose 20 of them and crop the upper-left 1000 × 1000 portion; 10 images are used for testing and the remaining 10 for training.
Data Simulation:
We input the LR-HSI and HR-MSI pairs (𝒳, 𝒴) into the end-to-end network and use the HR-HSI 𝒳̅ for training. Because the ground truth (GT) 𝒳̅ is not available in real life, a simulation process is required. For the CAVE dataset, we crop the 20 selected training images to generate 3920 overlapping patches of dimension 64× 64× 31, which serve as the GT 𝒳̅. To generate the corresponding LR-HSIs, we blur the original HR-HSIs with a 3× 3 Gaussian kernel with a standard deviation of 0.5 and downsample the blurred patches with a scaling factor of 4. Additionally, we use the common spectral response function of the Nikon D700[<https://www.maxmax.com/nikon_d700_study.htm>] camera together with the HR-HSIs to create the HR-MSI patches. We thus generate 3920 LR-HSIs of size 16 × 16 × 31 and HR-MSIs of size 64 × 64 × 3 to form the input pairs (𝒳, 𝒴). The input pairs and associated GTs are then divided at random into training data (80%) and testing data (20%). The same procedure is applied to the Harvard dataset to create the input LR-HSI and HR-MSI products as well as the GTs.
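A sketch of this simulation pipeline is given below, assuming a per-band Gaussian blur followed by decimation and a linear spectral response matrix; the exact blur and downsampling implementations in practice may differ in details.

```python
import torch
import torch.nn.functional as F

def simulate_pair(hr_hsi, srf, scale=4, ksize=3, sigma=0.5):
    """
    hr_hsi: (1, 31, 64, 64) GT patch; srf: (3, 31) spectral response
    function, e.g., of the Nikon D700 (names and shapes are ours).
    """
    # Per-band Gaussian blur with a 3x3 kernel, std 0.5.
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-ax ** 2 / (2 * sigma ** 2))
    k = g[:, None] * g[None, :]
    k = k / k.sum()
    bands = hr_hsi.shape[1]
    kernel = k[None, None].repeat(bands, 1, 1, 1)            # (31, 1, 3, 3)
    blurred = F.conv2d(hr_hsi, kernel, padding=ksize // 2, groups=bands)
    lr_hsi = blurred[..., ::scale, ::scale]                  # (1, 31, 16, 16)
    hr_msi = torch.einsum("sc,bchw->bshw", srf, hr_hsi)      # (1, 3, 64, 64)
    return lr_hsi, hr_msi
```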
Benchmark: To verify the superiority of the proposed INF³, we compare it with various state-of-the-art methods, including MTF-GLP-HS <cit.>, CSTF-FUS <cit.>, LTTR<cit.>, LTMR<cit.>, IR-TenSR<cit.>, DBIN <cit.>, SSRNet <cit.>, ResTFNet <cit.>, HSRNet <cit.>, MoG-DCN <cit.>, Fusformer <cit.> and the DHIF <cit.> network. In addition, the upsampled LR-HSI in Fig. <ref> is the bicubic-interpolated result, which is included in the experiments as a baseline. All deep learning approaches are trained with the same input pairs for a fair comparison, and the related hyperparameters are set as in the original papers.
Implementation Details:
The proposed network is implemented in PyTorch 1.11.0 and Python 3.8.0 on a Linux system with an NVIDIA RTX 3080 GPU (12 GB). We use the Adam optimizer <cit.> with a learning rate of 0.0001 to minimize the sum of absolute differences (ℒ_1) over 1000 epochs.
Results on CAVE Dataset: In this section, we evaluate the effectiveness of our proposed INF³ on the CAVE dataset (scaling factor of 4) and compare it with existing MHIF methods. As shown in the left part of Tab. <ref>, our INF³ outperforms other state-of-the-art deep learning models by a large margin. For instance, our INF³ improves PSNR by 1.31 dB, 2.40 dB, 0.75 dB, and 2.00 dB compared with DHIF <cit.>, Fusformer <cit.>, MoG-DCN <cit.>, and HSRNet <cit.>, respectively. The proposed INF³ achieves significant improvements in two QIs, i.e., SAM and ERGAS. In particular, our INF³ improves ERGAS by 11.71% and 18.33% compared with the second- and third-best models. In addition, our INF³ outperforms MoG-DCN <cit.> and DHIF <cit.> on SAM with only two-fifths and one-seventh of their parameters, respectively. Moreover, to aid visual verification, we provide pseudo-color depictions of the fused products and some error maps in Fig. <ref>. The results generated by our INF³ are very close to the ground truth and maintain better reconstruction quality with more accurate textures. In the absolute error maps in Fig. <ref>, the closer the reconstruction is to the original image, the bluer the error map. It is evident that INF³ restores texture details better than the other compared techniques, which is consistent with the analysis in Tab. <ref>.
Results on Harvard Dataset: Fig. <ref> displays the 10 test images from the Harvard dataset. The right-hand portion of Tab. <ref> presents the comparison of five indices obtained by all compared methods on the Harvard dataset for a scaling factor of 4. The average PSNR of our proposed INF³ is higher by 0.17 dB and 0.51 dB than the second-best and third-best methods, respectively. Although our model is slightly inferior to the second-best MoG-DCN <cit.> in terms of SAM, its parameter count is only two-fifths of MoG-DCN's. Moreover, our model achieves the best results on ERGAS and SSIM, indicating the best structural recovery. Furthermore, Fig. <ref> illustrates that our proposed INF³ is capable of reconstructing the detailed structure of the original image; notably, it restores the finest details of the bike, the metallic sheen, and the texture of the backpack. The error maps also demonstrate that our INF³ achieves the best fidelity in terms of texture details, and the fact that our residuals are closer to blue indicates that our recovery is better than that of the other methods.
§.§ Ablation Study
In this section, we discuss the effectiveness of dual high-frequency fusion (DHFF), which combines the LR and HR domains in INF³. Our primary concern is whether injecting relative location information can aid the network in image recovery, so we conduct an ablation study to assess this. Furthermore, we include the proposed weight-generation method in the ablation study. For brevity and without loss of generality, the analysis is conducted on the CAVE dataset.
1) Dual high-frequency fusion: To evaluate the effectiveness of dual high-frequency information injection, we conduct several experiments. As shown in Tab. <ref>, removing the high-frequency injection in the HR domain results in a significant decline in the performance of INF³, indicating that high-resolution, high-frequency information provides more detail during the fusion process. Moreover, the performance of INF³ decreases slightly when the LR-domain high-frequency injection is removed, suggesting that LR-domain high-frequency information plays a supporting role in the fusion process. Using information at both resolutions yields the best performance for our INF³. The importance of information at various resolutions for MHIF tasks inspired this design, and the experiments support the rationale behind it.
2) Relative coordinate: In this section, we analyze the effectiveness of the relative coordinate C_q-C_i in INF³. The relative coordinate and the pixels belong to different modalities: the former represents the interpolation distance, the latter the interpolated values. We are curious whether information from these different modalities can aid the MLP in understanding the fusion and interpolation processes in INF³, and we conduct an ablation experiment to answer this. Specifically, we remove the relative coordinate from INF³ while keeping the rest unchanged. Tab. <ref> presents our results, showing that including the relative coordinate improves the network's understanding of the MHIF task and has a positive impact on its performance.
3) Weight generation method: To assess the superiority of our cosine-similarity method, we compare it with area-based and network-based weight-generation methods on the CAVE dataset, with INF³ serving as the backbone. As shown in Tab. <ref>, our approach significantly outperforms the other methods on certain images, such as `chart and stuffed toy'. To further illustrate the possible spectral distortions in the fused products, we visualize the spectral vectors. Fig. <ref> shows the spectral vectors for the 31 bands at position (276, 260) in the `chart and stuffed toy' image. For clarity, we zoom in on the spectral vectors of the 18th-22nd bands, as indicated by the rectangular boxes in Fig. <ref>. In both figures, it is evident that the spectral vectors of the proposed method (the red lines) are the closest to the ground truth (GT).
4) Upsampling methods: In this section, we present experiments that compare INF³ with other upsampling methods. Intuitively, INF³ can be regarded as an interpolation algorithm. Unlike traditional interpolation algorithms, it provides each interpolated point with additional relative position information via the MLP layer, which incorporates multi-modal information. Specifically, we compared INF³ with pixel-shuffle <cit.> and traditional interpolation methods that are commonly used in convolutional neural networks. As shown in Tab. <ref>, our INF³ outperforms other methods in terms of MHIF tasks with fewer parameters.
§ CONCLUSION
In this paper, we propose the Implicit Neural Feature Fusion Function (INF³) and design an Implicit Neural Fusion Network (INFN) based on it for the multispectral and hyperspectral image fusion task. Unlike previous CNN-based approaches, we fuse multimodal information, including coordinate, spatial, and spectral data, multiple times, and we modify the previous implicit neural representation of upsampling interpolation to make better use of high-frequency information. By training the two branches of the encoder, the input information is fused in two stages and fed into the INR framework, whose effectiveness in utilizing high-frequency information has been verified. The INF³-based process also provides a generalized paradigm for other multimodal fusion tasks. Experimental results demonstrate that our method achieves state-of-the-art performance on two different datasets. In future work, we will continue to explore reliable network-based interpolation fusion methods and stable weight-generation techniques.
|
http://arxiv.org/abs/2307.07635v1 | 20230714211304 | CoTracker: It is Better to Track Together | ["Nikita Karaev", "Ignacio Rocco", "Benjamin Graham", "Natalia Neverova", "Andrea Vedaldi", "Christian Rupprecht"] | cs.CV | ["cs.CV"] |
CoTracker: It is Better to Track Together
Nikita Karaev, Ignacio Rocco, Benjamin Graham, Natalia Neverova, Andrea Vedaldi, Christian Rupprecht
August 12, 2023
===================
Methods for video motion prediction either estimate jointly the instantaneous motion of all points in a given video frame using optical flow or independently track the motion of individual points throughout the video.
The latter is true even for powerful deep-learning methods that can track points through occlusions.
Tracking points individually ignores the strong correlation that can exist between the points, for instance, because they belong to the same physical object, potentially harming performance.
In this paper, we thus propose CoTracker, an architecture that jointly tracks multiple points throughout an entire video.
This architecture combines several ideas from the optical flow and tracking literature in a new, flexible and powerful design.
It is based on a transformer network that models the correlation of different points in time via specialised attention layers.
The transformer iteratively updates an estimate of several trajectories.
It can be applied in a sliding-window manner to very long videos, for which we engineer an unrolled training loop.
It can track from one to several points jointly and supports adding new points to track at any time.
The result is a flexible and powerful tracking algorithm that outperforms state-of-the-art methods in almost all benchmarks.
Project page: <https://co-tracker.github.io>
§ INTRODUCTION
Establishing point correspondences is a fundamental problem in computer vision, necessary for many downstream tasks.
Here, we are interested in estimating
point correspondences in a video containing dynamic objects and a moving camera.
This task is often referred to as motion estimation and is interpreted as follows.
Given one or more 2D points, which are the projections of certain physical points of the underlying 3D scene, the goal is to find the locations of the same physical points in all other frames of the video.
There are two variants of the video motion estimation problem.
In optical flow, the objective is to estimate the velocity of all points within a video frame.
This estimation is performed jointly for all points, but the motion is only predicted at an infinitesimal distance.
In tracking, the goal is to estimate the motion of points over an extended period.
For efficiency and modelling simplicity, tracking methods typically focus on a sparse selection of points and treat them as statistically independent.
Even recent techniques such as TAP-Vid <cit.> and Particle Video Revisited <cit.>, which employ modern deep networks and can track points even in the presence of occlusions, model tracks independently.
This approximation is crude because points are often strongly correlated (e.g., because they belong to the same physical object).
In this paper, we hypothesise that accounting for the correlation between tracked points can significantly improve tracking accuracy.
Thus, we propose a new neural tracker that supports joint tracking of several points in long video sequences.
Our design is inspired by prior work on deep networks for optical flow and point tracking, but with several changes to support joint estimation of multiple tracks and windowed application for processing longer videos.
Our neural network takes as input a video and a variable number of starting track locations and outputs the full tracks (<ref>).
The design is flexible because it allows to track arbitrary points, selected at any spatial location and time in the video.
The network works by taking an initial, approximate version of the tracks and then incrementally refining them to better match the content of the video.
Tracks are initialized trivially: given a track point or a track fragment, the rest of the track is initialized by assuming that the point remains stationary.
In this manner, tracks can be initialized starting from any point, even in the middle of a video, or from the output of the tracker itself when operated in a sliding-window fashion.
All these cases are supported seamlessly by the same architecture.
The network itself is a transformer operating on a 2D grid of tokens: the first grid dimension represents time, and the second is the set of tracked points.
Via suitable self-attention operators, the transformer can consider each track as a whole for the duration of a window and can exchange information between tracks, exploiting their correlation.
We train our network on synthetic TAP-Vid-Kubric <cit.>, which we show to be particularly effective for this task.
We evaluate our tracker on three different benchmarks.
We pay particular attention to the design of the evaluation protocol to ensure its fairness.
Specifically, we note that existing benchmarks contain ground truth tracks which are concentrated on a few foreground objects, potentially revealing the presence of such objects to a joint tracker like ours.
In order to ensure that no ground-truth information is leaked to our tracker, we test different distributions of tracked points.
Specifically, we track one benchmark track at a time but add additional points on a grid to allow the model to perform joint tracking.
In this way, our comparison is fair to single-point trackers.
Our architecture works well for tracking single points and excels for groups of points, obtaining state-of-the-art tracking performance in several benchmarks.
In particular, we improve performance on the TAP-Vid <cit.> and FastCapture <cit.> datasets by a large margin, even for long tracks.
§ RELATED WORK
Optical flow.
Optical flow estimates dense instantaneous motion; it is one of the longest studied problems in computer vision and was initially approached by solving simple differential equations arising from color constancy <cit.>.
Nowadays, optical flow is approached by using deep learning.
Dosovitskiy et al. <cit.> introduced FlowNet, the first end-to-end convolutional network for optical flow, which was later improved in FlowNet2 <cit.> by applying a stacked architecture with warping.
DCFlow <cit.> proposed to construct a 4D cost volume, which captures the relationship between the features of the source and target images. This approach has since become a critical building block in subsequent flow estimation algorithms <cit.>. PWC-Net <cit.> adopted this cost volume but introduced pyramidal processing and warping to reduce the computational cost.
Teed and Deng proposed RAFT <cit.>, which preserves high-resolution flow and incrementally updates the flow field. The success of RAFT inspired a wave of subsequent works <cit.> that attempted to improve model efficiency or flow accuracy.
Transformers <cit.> have also been applied to the optical flow problem <cit.>. Flowformer <cit.> drew inspiration from RAFT and proposed a transformer-based approach that tokenizes the 4D cost volume. GMFlow <cit.> replaced the update network with a softmax with self-attention for refinement.
Perceiver IO <cit.> proposed a unified transformer for several tasks including optical flow.
Whereas traditional optical flow algorithms predict motion between two frames, our focus is long-term tracking, which optical flow estimators can only approximate by temporal integration.
Multi-frame optical flow.
There have been several attempts <cit.> to extend optical flow to multiple frames.
Traditionally, Kalman filtering <cit.> was a staple mechanism to ensure temporal consistency.
Modern multi-frame methods produce dense flow.
RAFT <cit.> can be applied in a warm-start manner <cit.> for multi-frame optical flow estimation.
VideoFlow <cit.> extends optical flow to three and five consecutive frames by integrating forward and backward motion features during flow refinement.
However, these methods are not designed for long-term tracking and do not consider points occluded for a long time.
Tracking Groups.
Before the emergence of deep learning, some authors proposed handcrafted joint trackers <cit.>, but there is less work on doing so with modern deep network architectures like ours.
Our work is weakly related to multiple object tracking <cit.> where tracking through occlusions <cit.>, with changes in appearance <cit.>, and with temporal priors <cit.> have been extensively studied.
However, our focus is the tracking of points, not objects.
Tracking any (physical) point.
TAP-Vid <cit.> introduced the problem of tracking any physical point in a video and proposed a benchmark and a simple baseline method for it.
This method computes a cost volume for each pair of frames independently, feeding it to occlusion and coordinate regression branches.
The regression branch is unable to locate occluded points.
Particle Video Revisited <cit.> revisits the classic Particle Video <cit.> problem by introducing a model for point tracking through occlusions. The method tracks selected points with a fixed sliding window and restarts from the last frame where the point is visible.
However, the model loses the target if it stays occluded beyond the size of the window.
Furthermore, while the original Particle Video does track points jointly, Particle Video Revisited and TAP-Vid track points independently.
Several concurrent works have been developed roughly at the same time as our work <cit.>. OmniMotion <cit.> optimizes a volumetric representation for each video during test-time, refining estimated correspondences in a canonical space. MFT <cit.> conducts optical flow estimation between distant frames and chooses the most reliable chain of optical flows. TAPIR <cit.> is a feed-forward point tracker with a matching stage inspired by TAP-Vid <cit.> and a refinement stage inspired by PIPs <cit.>.
Modern trackers and optical flow estimators are commonly trained using synthetic datasets <cit.>, as annotating real data can be challenging.
Synthetic datasets provide accurate annotations, and training on them has demonstrated the ability to generalize to real-world data <cit.>.
§ COTRACKER
Our goal is to track 2D points throughout the duration of a video.
We formalise the problem as follows.
A video V = (I_t)_t=1^T is a sequence of T RGB frames I_t ∈ℝ^3 × H × W.
Tracking amounts to producing a track
P^i_t = (x^i_t, y^i_t) ∈ℝ^2,
t =t^i,…,T,
i=1,…,N,
for each of the N points, where
t^i ∈{1, …, T}
denotes the time when the track starts.
Furthermore, the visibility flag v^i_t ∈{0,1} tells whether a point is visible or occluded in a given frame.
We assume that a track's starting point identifies the corresponding physical point unambiguously <cit.>, which requires that the point is visible at the start (i.e., v^i_t^i = 1).
Note that tracks can start at any time 1 ≤ t^i ≤ T during the video.
A tracker is an algorithm that takes as input the video V as well as the starting locations and times (P^i_t^i, t^i)_i=1^N of N tracks, and outputs an estimate
P̂^i_t = (x̂^i_t, ŷ^i_t)
of the tracks for all valid (t ≥ t^i) times.
We also task the tracker with predicting an estimate v̂_i^t of the visibility flags.
Of these values, only the initial ones P̂^i_t^i = P^i_t^i and v̂^i_t^i = v^i_t^i = 1 are known to the tracker, and the others must be predicted.
§.§ Transformer formulation
We approach this prediction problem using a transformer network
Ψ : G ↦ O, which we call .
The goal of the transformer is to improve a given estimate of the tracks.
Tracks are encoded as a grid of input tokens G^i_t, one for each track i=1,…,N, and time t=1,…, T.
The updated tracks are expressed by a corresponding grid of output tokens O^i_t.
These tokens are built as follows, using a design inspired by RAFT <cit.> for optical flow and PIPs <cit.> for tracking.
Please see <ref> for a general overview and <ref> the model components.
Image features.
We extract dense d-dimensional appearance features
ϕ(I_t) ∈ℝ^d×H/k×W/k
from each video frame I_t by using a convolutional neural network that we train with the transformer end-to-end.
Here, k=8 is a downsampling factor, utilised for efficiency.
We consider several scaled versions
ϕ(I_t;s)∈ℝ^d×H/sk×W/sk,
of these features with strides s=1,…,S.
These downscaled features are obtained by applying average pooling to the base features in s× s neighbourhoods.
We use S=4 scales.
Track features.
The appearance of the tracks is captured by feature vectors Q_t^i ∈ℝ^d (these are time-dependent to accommodate changes in the track appearance). They are initialized at first by sampling image features, and then they are updated by the neural network, as explained below.
Correlation features.
In order to facilitate matching tracks to images, we introduce the correlation features C^i_t ∈ℝ^S.
These features are obtained by comparing the track features Q and the image features ϕ(I_t;s) around the current track locations P̂_i^t.
Specifically, the vector C_i^t is obtained by stacking the inner products
⟨
Q^i_t, ϕ(I_t;s)[
P̂_t^i / ks + δ
]
⟩,
s = 1,…, S,
δ∈ℤ^2,
δ_1 ≤Δ,
Δ∈ℕ,
where the offsets δ define a neighborhood of point P̂_i^t.
The image features ϕ(I_t;s) are sampled at non-integer locations by using bilinear interpolation and zero padding.
The dimension of C^i_t is (2(Δ^2+Δ)+1 )S=164 for our choice S = Δ = 4.
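A sketch of this computation in PyTorch is shown below (names are ours); it gathers the L1-ball of offsets, samples each feature pyramid level bilinearly with zero padding, and stacks the inner products.

```python
import torch
import torch.nn.functional as F

def correlation_features(track_feat, fmaps, xy, k=8, S=4, Delta=4):
    """
    A sketch of the correlation features C_t^i for one frame.
    track_feat: (N, d) track features Q_t^i
    fmaps: list of S feature maps phi(I_t; s), each of shape (d, H/(k*s), W/(k*s))
    xy: (N, 2) current track positions P_t^i in image pixels, (x, y) order
    """
    dy, dx = torch.meshgrid(torch.arange(-Delta, Delta + 1),
                            torch.arange(-Delta, Delta + 1), indexing="ij")
    mask = (dy.abs() + dx.abs()) <= Delta                # offsets with ||delta||_1 <= Delta
    offs = torch.stack([dx[mask], dy[mask]], dim=-1).float()  # (P, 2), P = 2(Delta^2+Delta)+1
    feats = []
    for s, fmap in enumerate(fmaps, start=1):
        _, h, w = fmap.shape
        pts = xy[:, None, :] / (k * s) + offs[None]      # (N, P, 2): P_t^i/(ks) + delta
        grid = torch.stack([2 * pts[..., 0] / (w - 1) - 1,
                            2 * pts[..., 1] / (h - 1) - 1], dim=-1)
        nb = F.grid_sample(fmap[None], grid[None], align_corners=True,
                           padding_mode="zeros")[0]      # (d, N, P), bilinear + zero padding
        feats.append(torch.einsum("nd,dnp->np", track_feat, nb))  # inner products
    return torch.cat(feats, dim=-1)                      # (N, P*S) = (N, 164) for S=Delta=4
```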
Tokens.
The input tokens G(P̂, v̂, Q) code for position, visibility, appearance, and correlation of the tracks.
This information is represented by stacking corresponding features:
G^i_t = (
P̂^i_t, logit(v̂^i_t),
Q^i_t,
C^i_t, η (P̂^i_t - P̂^i_1)
).
All the components except the last one have been introduced above.
The last component is derived from the estimated position: it is the sinusoidal positional encoding η of the track location with respect to the initial location at time t=1.
The latter information could be inferred by the transformer by observing P̂_t^i alone, but we found it beneficial to pass it directly as well.
The output tokens O(P̂', Q') contain the updated locations and appearance features only, O^i_t =
(
P̂^i '_t,
Q^i '_t
).
Iterated transformer applications.
We apply the transformer M times in order to progressively improve the track estimates.
Let m=0,1,…,M index the estimate, with m=0 denoting initialization.
Then:
O(P̂^(m+1), Q^(m+1))
= Ψ(G(
P̂^(m), v̂^(0), Q^(m)
)).
Note that the visibility mask v̂ is not updated by the transformer; instead, it is updated once at the end of the M applications of the transformer as
v̂^(M) = σ(W Q^(M)),
where σ is the sigmoid activation function and W is a learned matrix of weights. We found iterative updates for the visibility did not further improve the performance, likely due to the fact that visibility highly depends on predicting an accurate location first.
The quantities P̂^(0), v^(0) and Q^(0) are initialized from the starting location and time of the tracks.
For all tracks i=1,…,N and times t=1,…,T, we simply set
P̂^i,(0)_t ← P^i_t^i,
v̂^i,(0)_t ← 1,
[Q^i,(0)_t]_s ←ϕ(I_t^i;s)[P^i_t^i/ks],
effectively broadcasting t^i to all other times t=1,…,T (both before and after t^i).
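This initialization can be sketched as follows (hypothetical helper; the sampling of the appearance features Q^(0) from the feature pyramid is omitted).

```python
import torch

def init_tracks(queries, T):
    """
    A sketch of the initialization above: every track starts as a
    stationary, everywhere-visible point; the appearance features Q^(0)
    are sampled from phi(I_{t^i}; s) at the same location (omitted here).
    """
    N = len(queries)
    P_hat = torch.zeros(T, N, 2)                 # P^(0)
    v_hat = torch.ones(T, N)                     # v^(0)
    for i, (t_i, x, y) in enumerate(queries):    # queries: list of (t_i, x, y)
        P_hat[:, i, 0] = x                       # broadcast P^i_{t^i} to all t,
        P_hat[:, i, 1] = y                       # both before and after t^i
    return P_hat, v_hat
```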
§.§ Windowed inference
An advantage of our transformer design is that it can easily support windowed applications in order to process very long videos. Consider in particular a video V of length T' > T longer than the maximum window length T supported by the architecture.
To track points throughout the entire video V, we split the video in
J = ⌈ 2T'/T - 1 ⌉ windows of length T, with an overlap of T/2 frames.[We assume that T is even. The last window is shorter if T/2 does not divide T'.]
In order to process a video, we apply the transformer MJ times: the output of the first window is used as the input to the second window and so on.
Let the superscript (m,j) denote the m-th update of the transformer applied to the j-th window.
We thus have a M × J grid of quantities
(
P̂^(m,j), v̂^(m,j),
Q^(m,j)
),
spanning transformer iterations and windows.
Starting from m=0 and j=1, we initialize these quantities using <ref> as before.
We then apply the transformer M times to obtain the state
(
P̂^(M,1), v̂^(M,1),
Q^(M,1)
).
From this, we initialize
(
P̂^(0,2), v̂^(0,2),
Q^(0,2)
)
for the second window, in a manner similar to <ref>.
Specifically, the first T/2 components of P̂^(0,2) are copies of the last T/2 components of P̂^(M,1); the last T/2 components of P̂^(0,2) are instead copies of the last time t=T/2-1 from P̂^(M,1).
The same update rule is used for
v̂^(0,2),
while
Q^(0,j)
is always initialized with initial track features Q.
After initializing
(
P̂^(0,2), v̂^(0,2),
Q^(0,2)
),
the transformer is applied M more times to the second window, and the process is repeated for the next window and so on.
Finally, any new track is added by extending the token grids using initialization (<ref>).
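The hand-off between consecutive windows can be sketched as below (our names): the first half of the new window copies the refined overlap, the second half freezes the last refined state, and Q^(0,j) is reset to the initial track features Q.

```python
import torch

def advance_window(P_prev, v_prev, T):
    """
    A sketch of the hand-off from window j to window j+1.
    P_prev: (T, N, 2), v_prev: (T, N): estimates after M refinements in window j.
    """
    P0 = torch.empty_like(P_prev)
    P0[: T // 2] = P_prev[T // 2:]                           # copy the overlapping half
    P0[T // 2:] = P_prev[-1:].expand(T - T // 2, -1, -1)     # freeze at the last frame
    v0 = torch.empty_like(v_prev)
    v0[: T // 2] = v_prev[T // 2:]
    v0[T // 2:] = v_prev[-1:].expand(T - T // 2, -1)
    return P0, v0
```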
§.§ Unrolled learning
We found it important to learn the windowed transformer in an unrolled fashion in order to properly handle semi-overlapping windows.
The primary loss is for track regression, summed over iterated transformer applications and windows:
ℒ_1(P̂, P)
=
∑_j=1^J∑_m=1^Mγ^M-m‖P̂^(m,j) - P^(j)‖,
where γ=0.8 discounts early transformer updates.
Here P^(j) contains the ground-truth trajectories restricted to window j (trajectories which start in the middle of the window are padded backwards).
The second loss is the cross entropy of the visibility flags
ℒ_2(v̂, v)
=
∑_j=1^JCE(
v̂^(M,j),
v^(j)
).
While only a moderate number of windows are used in the loss during training due to the computational cost, at test time we can unroll the windowed transformer applications arbitrarily, thus in principle handling any video length.
Unrolled inference allows tracking points that appear later in the video. We start tracking a point only from the sliding window where it appears first.
We also make sure that such points are present in the training data by sampling visible points from the middle frame of a sequence.
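A sketch of the combined training objective is given below; we assume an elementwise L1 distance for the track norm ‖·‖ and binary cross-entropy on logits for the visibility term.

```python
import torch
import torch.nn.functional as F

def unrolled_loss(pred_windows, gt_windows, vis_logits, vis_gt, gamma=0.8):
    """
    A sketch of the two training losses (our names). pred_windows[j][m] is the
    (T, N, 2) track estimate after update m in window j; gt_windows[j] the
    matching (padded) ground truth; vis_logits[j], vis_gt[j] are (T, N).
    """
    M = len(pred_windows[0])
    l1 = sum(gamma ** (M - 1 - m) * (p - gt).abs().sum()   # gamma^(M-m), 0-indexed here
             for gt, preds in zip(gt_windows, pred_windows)
             for m, p in enumerate(preds))
    l2 = sum(F.binary_cross_entropy_with_logits(v, g)      # CE on visibility flags
             for v, g in zip(vis_logits, vis_gt))
    return l1 + l2
```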
§.§ Transformer
[Figure: Number of points. We track queried points from TAP-Vid-DAVIS together with N additional points sampled on the first frame. Adding 16-25 points is best, while more points overwhelm tracking the interest points.]
Our transformer consists of interleaving time and group attention blocks with two linear layers applied to the input and to the output.
Factorising the attention <cit.> across time and point tracks makes the model computationally tractable: the complexity is reduced from O(N^2T^2) to O(N^2+T^2).
In addition to position encodings η for estimated trajectories that the transformer takes as input, we add standard sinusoidal position encodings: 1-dimensional for time and 2-dimensional for space.
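The factorized attention can be sketched as interleaved multi-head attention along the two axes of the token grid; this is a simplified block, as a full transformer block would also include MLPs and normalization layers.

```python
import torch
import torch.nn as nn

class FactorizedAttentionBlock(nn.Module):
    """A sketch of interleaved time / group attention over the token grid."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.group_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        # tokens: (N, T, dim) grid of track tokens
        x = tokens
        x = x + self.time_attn(x, x, x)[0]       # attend along time within each track
        x = x.transpose(0, 1)                    # (T, N, dim)
        x = x + self.group_attn(x, x, x)[0]      # attend across tracks within each frame
        return x.transpose(0, 1)                 # back to (N, T, dim)
```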
§.§ Point Selection
One of the key insights of our method is that tracking multiple points simultaneously allows the model to better reason about the motion in the video and the correlation between tracks.
However, care needs to be applied when the method is evaluated on benchmark datasets.
In these datasets, points annotated by humans often lie in salient locations of moving objects.
In order to ensure that our performance figures are robust with respect to point selection, and to ensure a rigorously fair comparison with existing methods, we evaluate the model using only a single target ground truth point at a time.
This decouples the performance of the model from the point distribution in the dataset.
We experiment with two point selection strategies that are visualized in <ref>.
With the “global” strategy, we simply select additional points on a regular grid across the whole image.
With the “local” strategy, we select additional points close to the target point, using a regular grid around the target point, thus allowing the model to focus on a neighbourhood of it.
Point selection is only used at inference time.
§ EXPERIMENTS
Our model is trained with ground truth trajectories on synthetic data.
Evaluation is then performed on four benchmark datasets containing manually annotated trajectories in real videos: TAP-Vid-DAVIS <cit.>, TAP-Vid-Kinetics <cit.>, BADJA <cit.>, and FastCapture <cit.>.
§.§ Datasets and evaluation protocol
We consider several datasets for training and evaluation.
FlyingThings++ is a version of FlyingThings3D <cit.> that Particle Video Revisited <cit.> has re-engineered to learn their tracker PIPs.
It contains videos 8 frames long with flying objects and point tracks randomly selected on these objects.
Occlusion annotations are created by overlaying additional objects on top of the rendered video, so they are only approximate.
TAP-Vid contains four benchmark datasets for evaluation and one synthetic dataset for training.
The latter, TAP-Vid-Kubric, consists of 24-frame sequences rendered from 3D scenes with objects dropped into them, using the Kubric engine <cit.>.
Point tracks are selected randomly, primarily on objects, and some on the background.
Occlusions appear naturally when a point is occluded by another object or moves out of the scene.
We found that training on TAP-Vid-Kubric improves PIPs on all the benchmarks (<ref>), and thus chose this dataset to train our model as well.
We evaluate our models on the TAP-Vid benchmark datasets TAP-Vid-DAVIS (30 videos of ∼100 frames) and TAP-Vid-Kinetics (1144 videos of ∼250 frames from Kinetics <cit.>). In these benchmarks, points are queried on objects at random frames and the goal is to predict positions and occlusion labels of queried points.
In the TAP-Vid “queried first” evaluation protocol, each point is queried only once in the video, at the first frame where it becomes visible, so the model should predict positions only for future frames. In the “queried strided” protocol, points are queried every five frames and tracking should be done in both directions. Since ours (and PIPs) are online methods that track points only forward, we run them forward and backwards starting from each queried point.
As “queried first” requires estimating the longest tracks, it is a more difficult setting than “strided”.
Moreover, “strided” demands estimating the same track from multiple starting locations and is thus much more computationally expensive. For these two reasons, we focus on the “strided” setting for the evaluation.
We evaluate the trackers using the standard metrics for these datasets:
Occlusion Accuracy (OA), < δ^x_avg (position accuracy for visible points), and Average Jaccard (AJ), averaged across different thresholds.
BADJA <cit.> is a benchmark dataset of animals in motion with keypoint annotations.
These are annotated in the reference frame and the goal is to propagate them to future frames.
A keypoint coordinate is predicted accurately if it is within the distance of 0.2 √(A) from the ground truth coordinate, where A is the area of the ground-truth segmentation mask on the frame.
Accuracy is measured only for visible points. Both BADJA and TAP-Vid-DAVIS are based on the DAVIS dataset <cit.>.
FastCapture is a benchmark of dynamic human reconstructions rendered on a black background.
We sample twenty points on humans in the first frame and evaluate the 3px accuracy.
§.§ Implementation details
In this section, we describe the most important implementation details and refer to the appendix for a complete description.
Training.
We render 11,000 pre-generated 24-frame sequences for TAP-Vid-Kubric with annotations for 2,000 tracked points per sequence. Points are sampled preferentially on objects.
During training, for each sequence, we randomly sample 256 points that are visible either in the first or in the middle frames of the sequence.
We train on TAP-Vid-Kubric sequences of size T'=24 with sliding windows of size T=8 for 50,000 iterations using 32 NVIDIA TESLA Volta V100 32GB GPUs.
It is possible to train the model on GPUs with less memory by decreasing the number of sampled points per batch.
Evaluation.
We evaluate with 6 global and 6 local grid points (<ref>).
All the models are evaluated on videos with a maximum resolution 384× 512.
For the DINOv2 baseline, we compute features using DINOv2 at a resolution of 378× 504. We then bilinearly sample from the queried feature map and compute the dot product between the queried feature and all other feature maps. Finally, we apply a softmax over the similarities to predict the coordinates of the queried point.
PIPs is trained at a resolution of 384× 512 but evaluated with 320 × 512. Using the released code, we found that 384 × 512 slightly improves PIPs performance on all the benchmarks except BADJA.
For the TAP-Vid benchmarks, we follow the standard protocol and downsample videos to 256× 256 before passing them to the model.
As PIPs and CoTracker are trained on 384 × 512 videos, we resize downsampled frames back to 384 × 512 for them, and to 378 × 504 for DINOv2. All the TAP-Vid metrics are then computed in 256× 256.
§.§ Results
CoTracker: joint tracking and support grids.
First, in <ref> we compare different support grids (<ref>).
Using the uncorrelated `single target point' protocol, reasoning about tracks jointly (group attn.) in addition to along tracks (time attn.) improves results provided that the correct contextual points are considered (combining local and global grids performs best).
Performance increases even further for the `all target points' protocol, likely because the different target points are indeed highly correlated.
We do not use this protocol when comparing to prior work for fairness.
It is, however, a realistic scenario because a segmentation model can be used to pick such correlated points automatically.
In Tables (<ref>, <ref>) we compare our model (single target point evaluation) to the prior state of the art on the four benchmark datasets. Our approach tracks points and their visibility more accurately than previous work, supporting the intuition that long-term tracking of points in groups is beneficial compared to single-point models (such as TAP-Net and PIPs) and to dense but short-term optical flow methods (such as RAFT), which tend to accumulate drift.
Training Data.
We evaluate the influence of the training data on the model in <ref>.
Kubric is better for training than FlyingThings++ for the prior state of the art.
FlyingThings++ sequences are too short to train our model which relies on training with sliding windows.
Additionally, Kubric is overall more realistic, as it is created by rendering 3D scenes with naturally occluded objects and physics simulation.
Sliding Window.
In <ref>, we evaluate the importance of unrolling the sliding window scheme during training.
As the benchmark sequences during evaluation are often much longer (>10×), our model greatly benefits from learning to propagate information between windows.
§.§ Limitations
While CoTracker improves over the prior state of the art, several limitations remain.
As a sliding-window-based method, it cannot track points through occlusions that are longer than the size of a single window.
Additionally, the transformer complexity is quadratic in the number of tracked points, which prevents a trivial application of this technique to dense prediction.
§ CONCLUSIONS
We have presented CoTracker, a new paradigm for long-term tracking of points in videos.
Our method simultaneously tracks groups of points and thus learns to account for their correlation.
Additionally, we show that training through an unrolled temporally sliding window mechanism allows the model to generalise to long video sequences.
Our method outperforms the current state of the art in point tracking on various benchmarks and produces tracks which are qualitatively cleaner.
§ ACKNOWLEDGMENTS
We would like to thank Luke Melas-Kyriazi for his paper comments, Jianyuan Wang, Roman Shapovalov, Luke Melas-Kyriazi and Adam W. Harley for the insightful discussions. Christian Rupprecht is supported by ERC-CoG UNION101001212 and VisualAI EP/T028572/1.
§ IMPLEMENTATION DETAILS
In this section, we complete the description of implementation details from the main paper. Code and models are available on the project webpage: <https://co-tracker.github.io>.
Feature CNN
Given a sequence of T' frames with a resolution 384×512, we compute features for each frame with a 2-dimensional CNN that downsamples the input image by a factor of 8 and outputs features with 128 channels. Our CNN is the same as in PIPs <cit.>: it consists of one 7× 7 convolution with stride 2, eight residual blocks with 3× 3 kernels and instance normalization, and two final convolutions with 3× 3 and 1 × 1 kernels.
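For illustration, a minimal PyTorch sketch of such an encoder is given below; the intermediate channel widths are our own assumptions, while the stride-8 downsampling, instance normalization, and 128-channel output follow the description above.

```python
# Illustrative sketch of the feature extractor (widths are assumptions).
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride, 1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, 1, 1)
        self.norm1 = nn.InstanceNorm2d(c_out)
        self.norm2 = nn.InstanceNorm2d(c_out)
        self.relu = nn.ReLU(inplace=True)
        self.skip = (nn.Conv2d(c_in, c_out, 1, stride)
                     if stride != 1 or c_in != c_out else nn.Identity())

    def forward(self, x):
        y = self.relu(self.norm1(self.conv1(x)))
        y = self.norm2(self.conv2(y))
        return self.relu(y + self.skip(x))

class FeatureCNN(nn.Module):
    def __init__(self, out_ch=128):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 64, 7, 2, 3),   # 7x7, stride 2
                                  nn.InstanceNorm2d(64),
                                  nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(            # eight residual blocks;
            ResBlock(64, 64), ResBlock(64, 64),  # two downsample, so the
            ResBlock(64, 96, 2), ResBlock(96, 96),      # total stride is 8
            ResBlock(96, 128, 2), ResBlock(128, 128),
            ResBlock(128, 128), ResBlock(128, 128))
        self.head = nn.Sequential(nn.Conv2d(128, 128, 3, 1, 1),
                                  nn.Conv2d(128, out_ch, 1))

    def forward(self, x):                        # x: (B, 3, H, W)
        return self.head(self.blocks(self.stem(x)))  # (B, 128, H/8, W/8)
```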
Sliding windows
When passing information from one sliding window to the next, we concatenate binary masks of shape (N, T) with visibility logits that indicate where the model needs to make predictions. For example, masks in the first sliding window would be equal to 1 from the frame where we start tracking the point, and 0 before that.
Masks for all the subsequent sliding windows will be equal to 1 for the first overlapping T/2 frames, and 0 for the remaining T/2. During training, tracking starts either from the first or from a random frame where the point is visible. If a point is not visible in the first sliding window, it will be added when it becomes visible first.
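A sketch of this mask bookkeeping (tensor shapes and function names are our own assumptions):

```python
# Per-window prediction masks: 1 where the model must predict, 0 before a
# point's first visible frame; in every window after the first, only the
# overlapping first T/2 frames are set to 1.
import torch

def window_masks(first_frame, T, is_first_window):
    """first_frame: (N,) start index of each track within this window."""
    N = first_frame.shape[0]
    t = torch.arange(T).unsqueeze(0).expand(N, T)        # (N, T)
    if is_first_window:
        return (t >= first_frame.unsqueeze(1)).float()   # 0 before the start
    mask = torch.zeros(N, T)
    mask[:, : T // 2] = 1.0      # frames shared with the previous window
    return mask
```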
Iterative updates
We train the model with M=4 iterative updates, and evaluate with M=6.
This setting is a reasonable trade-off between speed and accuracy as evaluated in <ref>.
Interestingly, the performance is stable across a factor of 2 (2-8) around the training choice M=4.
Training
CoTracker is trained with a batch size of 32, distributed across 32 GPUs. After applying data augmentations, we randomly sample 256 trajectories for each batch, with points visible either in the first or in the middle frame. We train the model for 50,000 iterations with a learning rate of 5 × 10^-4 and a linear 1-cycle <cit.> learning rate schedule, using the AdamW <cit.> optimizer.
Augmentations
During training, we employ Color Jitter and Gaussian Blur to introduce color and blur variations. We augment occlusions by either coloring a randomly chosen rectangular patch with its mean color or replacing it with another patch from the same image. Random scaling across height and width is applied to each frame to add diversity.
§ ADDITIONAL ABLATIONS
Here we provide additional ablation experiments that supplement the model choices from the main paper.
Feature network
To understand the impact of learning image features, we replace our simple feature CNN with a frozen
DINOv2 <cit.> ViT-s encoder. We take the output of an intermediate layer with 384 channels and linearly project it to 128 channels, as in our Feature CNN. The model is then trained with a resolution of 378 × 504, ensuring that both height and width are divisible by 14, which corresponds to the DINO patch size.
<Ref> shows that generic features are not precise enough for long-term tracking of points. Additionally, DINO features have been shown to contain strong semantic components which might hinder tracking performance.
Model stride ablation.
In <ref>, we train with two different feature downsampling factors (strides). The model with stride 4 has features with twice the resolution as the model with stride 8, resulting in better performance.
Training sliding window size.
We evaluate the impact of the size of the sliding window (<ref>) during training.
A longer sliding window length can improve the model's ability to handle occluded points.
However, we are limited by the training data which only contains sequences of 24 frames, so learning is less effective for larger window sizes.
Overall, a sliding window length of 8 yields the best results in our setting.
Inference sliding window size.
In <ref> we inspect how the sliding window size of a model trained with sliding window T=16 affects model performance. We find that the model prefers training and evaluation with the same size across two different benchmarks.
If a varying window size is needed, the model could potentially benefit from also training with a variable size.
§ EFFICIENCY
In <ref> we evaluate the efficiency of PIPs <cit.>, RAFT <cit.> and CoTracker with stride 8 on TAP-Vid-DAVIS <cit.> by running them on a V100 GPU. Points in this dataset can be queried on any frame. Since PIPs estimates the trajectories of every target independently and can only share computation between points when they are queried on the same frame, we run it separately for every point. RAFT estimates dense optical flow between every pair of consecutive frames; we obtain final tracks by composing the estimated flows for each queried point starting from the corresponding frame. For CoTracker, we add points on a regular grid and track them all jointly. We show that inference time increases quadratically with the number of tracked points due to the group attention layers. Nevertheless, in this setting CoTracker is faster than PIPs and RAFT when tracking <800 points.
CoTracker has fewer parameters than PIPs (24M vs 29M) but more than RAFT (5M).
§ BROADER SOCIETAL IMPACT
Motion estimation, whether in the guise of point tracking or optical flow, is a fundamental, low-level computer vision task.
Many other tasks in computer vision build on it, from 3D reconstruction to video object segmentation.
Ultimately, motion estimation algorithms are an important component of a very large number of applications of computer vision in many different areas.
Point tracking has no direct societal impact; however, positive or negative effects on society can materialise through its use in other algorithms, depending on the final application.
|
http://arxiv.org/abs/2307.05878v2 | 20230712024455 | A novel approach for regularized inversion of noisy Laplace transforms using real-axis data points | [
"Vladimir V Kryzhniy"
] | math.NA | [
"math.NA",
"cs.NA"
] |
This paper presents a new approach to construct regularizing operators for the inversion of noisy Laplace transforms using a set of data points on the real axis. The effectiveness of the proposed approach is demonstrated through examples of noisy Laplace transform inversions and the deconvolution of nuclear magnetic resonance relaxation data, including experimentally measured data. The software implementation of this method allows for enforcing the positivity of the solution without requiring any additional information.
§ INTRODUCTION
The analysis of exponential relaxation data poses a significant challenge in various fields such as experimental physics, chemistry, electrochemistry, and biophysics <cit.>. This problem involves determining the distribution function f(λ) from experimentally measured function f̂(t) by performing the inversion of the Laplace transformation (<ref>):
f̂(t) = ∫_0^∞e^-tλf(λ)dλ + b,
where the data may contain a constant baseline offset b.
It is evident that only real methods of inverse Laplace transformation can be used for inverting (<ref>) from experimental data. However, seemingly all known real methods of inverse Laplace transformation assume that the Laplace transform is computable at any point required by the method.
Applying such a method to a Laplace transform known only at a predefined set of points would therefore require approximating the experimental data, which is practically impossible to do without corrupting the inverse in question. Thus, a new method that uses only the given data points is needed for practical applications.
The most straightforward way to achieve that is to discretize the equation (<ref>) using an appropriate quadrature formula and obtain the corresponding matrix equation:
K 𝐟 = 𝐟̂,
where K is an m × n matrix, and 𝐟̂ and 𝐟 are column vectors of length m and n, respectively.
The matrix equation (<ref>) is inherently ill-posed, and regularization techniques are required to obtain stable solutions <cit.>. However, conventional regularization methods have proven inadequate for accurately inverting Laplace transforms by solving the matrix equation (<ref>) <cit.>.
As a result, the inversion of Laplace transforms is considered severely ill-posed, making it challenging to obtain satisfactory results through regularization. This challenge is not unique to Laplace transform inversions and extends to the solution of Fredholm integral equations of the first kind with smooth, square integrable kernels <cit.>.
The classification of Laplace transform inversion as a severely ill-posed problem may be attributed to two main factors. Firstly, a noisy real-valued Laplace transform may lack sufficient information for computing an accurate inverse. Secondly, imperfections in the numerical implementation of the theory can hinder the achievement of more precise results.
The information content of a noisy real-valued Laplace transform can be assessed by considering alternative methods for inverting real Laplace transforms. In cases where the Laplace transform is computable at any point on the real axis, the author has derived an integral form of the regularizing operator for inverse Laplace transformation <cit.>:
f_r(λ) = ∫_0^∞f̂(u) Π(r,a,α; λ u) d u,
where f_r(λ) represents the regularized solution, and f_r(λ) → f(λ) as r →∞. The parameters a and α are additional parameters of the method.
The exact formula for the kernel Π can be found in the referenced article. By employing software available on GitHub <cit.>, one can invert Laplace transforms of interest and confirm that the integral inversion formula yields satisfactory results.
The information content of real-valued Laplace transforms has been analyzed using the inversion formula (<ref>). The limitations can be summarized as follows:
A Laplace transform can be inverted from the real axis if its theoretical inverse, plotted on a semilogarithmic scale, is monotonic or has few non-sharp extrema.
Additionally, for experimentally measured Laplace transforms, there is an extra limitation: f̂(0) = const, which excludes non-integrable functions.
Typically, a numerical quadrature formula for computing integral (<ref>) uses a few hundred points. It is challenging to believe that the points selected by the quadrature formula provide significantly more information than a similar number of uniformly distributed points known from an experiment.
Thus, with high confidence, we can conclude that a noisy Laplace transform f̂(t) contains more information about the pre-image f(λ) than can be extracted using conventional regularization techniques for solving integral equations.
A similar problem arises in nuclear magnetic resonance relaxometry <cit.>, where the desired distribution is obtained by deconvolving the following integral equation:
f̂(t) =∫_0^∞e^-t/μf(μ)dμ + b.
Although the latter equation can be written as a Laplace transform of a function ϕ(λ) = f(1/λ) / λ^2, a direct regularized solution of equation (<ref>) is advantageous when f̂(t) is noisy.
As observed, the kernel of the integral equation (<ref>) suppresses information about the function f(μ) for small μ. Consequently, it is expected that the accuracy of the restored function will be reduced for small μ, and it may not be restored at all for some μ < μ_min.
§ TOWARDS THE INVERSION OF NOISY LAPLACE TRANSFORMS
Let us provide a brief discussion on why the inversion of real-valued Laplace transforms is classified as a severely ill-posed problem.
By representing matrix K in equation (<ref>) using singular value decomposition (SVD):
K = U D V^T,
where U and V are orthogonal matrices, D = diag(s_i), and s_1 ≥ s_2 ≥…≥ 0, we obtain the formal solution of Eq. (<ref>):
𝐟 = K^-1𝐟̂ = ∑_i=1^n s_i^-1(𝐮_i^T𝐟̂)𝐯_i,
which is unstable due to division by decreasing singular values s_i.
Truncated SVD solution regularizes the problem by limiting the number of terms in equation (<ref>) by a certain number k:
𝐟_r = K_r^-1𝐟̂ = ∑_i=1^k s_i^-1(𝐮_i^T𝐟̂)𝐯_i,
where 𝐟_r and K_r^-1 represent the regularized solution and regularizing operator, respectively.
The number of terms in equation (<ref>) depends on the noise level in the data ϵ and the rate of decrease of singular values s_i <cit.>. For an exponential kernel, the singular values decrease rapidly, resulting in only a few terms in the sum of equation (<ref>). Consequently, the regularized solution 𝐟_𝐫 is represented as a linear combination of a small number of vectors 𝐯_i:
𝐟_r = β_1/s_1𝐯_1 + β_2/s_2𝐯_2 + …,
where β = U^T 𝐟̂.
However, it is evident that such a computed regularized solution is generally inaccurate. This conclusion holds true for all known regularization techniques.
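For concreteness, a minimal numpy sketch of the truncated-SVD regularized solution sketched above:

```python
# Truncated-SVD regularization of K f = f_hat, keeping k expansion terms.
import numpy as np

def tsvd_solve(K, f_hat, k):
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    beta = U.T @ f_hat                               # coefficients u_i^T f_hat
    return (Vt[:k].T * (beta[:k] / s[:k])).sum(axis=1)
```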
All terms in equation (<ref>) depend on the SVD of the kernel matrix K, which, in turn, depends on the quadrature formula, selected nodes, and the number of points. Although we cannot change the fact that the singular values of the exponential kernel rapidly tend to zero, we can anticipate that a few terms in equation (<ref>) will yield more accurate results with a carefully tailored discretization. Thus, for inverting Laplace transforms by solving the matrix equation (<ref>), we need to find an appropriate discretization and value of the regularization parameter.
This situation bears similarities to computing the inverse f_r by evaluating the integral in equation (<ref>). The accuracy of the computed regularized inverse significantly depends not only on the optimal value of the regularization parameter but also on the optimal values of additional method parameters. The author has proposed a heuristic criterion for determining acceptable values of all method parameters by minimizing the difference between two closely related regularized solutions f_r^(1) and f_r^(2) <cit.>:
min_r, a, α∑_i=1^n[f_r^(1)(a, α, r; λ_i) - f_r^(2)(a, α, r; λ_i)]^2.
In cases where the additional parameters a and α are fixed, the comparison of two regularized solutions (<ref>) becomes a criterion for finding the optimal regularization parameter.
Based on the aforementioned limitations of inverting real-valued Laplace transforms, it can be expected that using semilogarithmic coordinates is more suitable for discretizing the integral (<ref>). In this case, we have three discretization parameters n, λ_min, λ_max, representing the number of points n logarithmically distributed on an integration interval (λ_min, λ_max). Consequently, we encounter a four-dimensional minimization problem involving the regularization parameter.
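A sketch of this parametrized discretization follows, with TSVD (reusing tsvd_solve from the sketch above) as the regularization and the difference of two neighbouring truncation levels as a crude stand-in for the criterion; the paper's actual pair of closely related regularized solutions differs.

```python
# Log-spaced discretization of the integral equation plus a stand-in
# parameter-search criterion.
import numpy as np

def build_kernel(t, lam_min, lam_max, n):
    lam = np.geomspace(lam_min, lam_max, n)        # log-spaced nodes
    w = np.gradient(np.log(lam)) * lam             # quadrature weights
    return np.exp(-np.outer(t, lam)) * w, lam      # K[i,j] ~ e^{-t_i lam_j} dlam_j

def criterion(K, f_hat, k):
    # Difference of two closely related regularized solutions (here TSVD
    # with k and k+1 terms).
    f1, f2 = tsvd_solve(K, f_hat, k), tsvd_solve(K, f_hat, k + 1)
    return np.sum((f1 - f2) ** 2)

# The optimal (n, lam_min, lam_max, k) then jointly minimize criterion(...).
```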
The above reasoning emphasizes the direction of constructing regularizing operators for solving severely ill-posed problems. It is possible to consider other approaches for parameterizing discretization and combine them with criteria for searching the regularization parameter <cit.>.
§ NUMERICAL EXAMPLES
In order to illustrate the effectiveness of the approach described above, we present a few examples implemented using Python software. These examples aim to demonstrate the validity of the proposed method. The experimental data have been simulated by adding normally distributed noise with a certain standard deviation σ to the exact Laplace transform.
We construct Laplace transform pairs that adhere to the limitations of inverting real-valued Laplace transforms by using a sum of gamma distributions:
f̂(t; α, θ) = (1 + t θ / α)^-α,
f(λ; α, θ) = 1/λΓ(α) (λα/θ)^αexp(-λα/θ),
where Γ(α) represents the gamma function. For α≫ 1, the gamma distribution reaches its maximum at λ≈θ, and as α→∞, f(λ; α, θ) tends to δ(λ - θ).
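Such test data are straightforward to generate; a short sketch (the node placement and random seed are arbitrary choices):

```python
# Synthetic test data: a sum of gamma-distribution pre-images with the
# exact Laplace transforms given above, plus normally distributed noise.
import numpy as np

def f_hat_exact(t, amps, alphas, thetas):
    return sum(a * (1.0 + t * th / al) ** (-al)
               for a, al, th in zip(amps, alphas, thetas))

rng = np.random.default_rng(0)
t = np.geomspace(1e-3, 1e2, 200)                     # measurement times
data = f_hat_exact(t, amps=[1, 2, 6], alphas=[3, 3, 5], thetas=[0.1, 1, 10])
data += rng.normal(0.0, 0.01, size=t.size)           # sigma = 0.01
```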
Figure <ref> demonstrates the restoration of a three-peak sum of gamma distributions. The parameters used in this example are σ = 0.01, a = [1, 2, 6], θ = [0.1, 1, 10], and α = [10, 3, 5]. As shown in the figure, the most significant errors are observed in the restoration of the sharpest peak.
Figure <ref> presents the restoration of the same three-peak sum of gamma distributions, but with reduced sharpness in the first peak. The parameters used for this example are σ = 0.01, a = [1, 2, 6], θ = [0.1, 1, 10], and α = [3, 3, 5]. It can be observed that the errors in restoring the first peak have been reduced.
To further analyze the impact of noise, figure <ref> repeats the previous example but with the noise reduced by a factor of 10. The parameters remain the same as in figure <ref> (σ = 0.001, a = [1, 2, 6], θ = [0.1, 1, 10], and α = [3, 3, 5]). The results shown in the figure are highly satisfactory.
Moving on to the application of the method in nuclear magnetic resonance (NMR) relaxometry, figure <ref> demonstrates the deconvolution of NMR relaxation data. We simulate a three-peak sum of gamma distributions with parameters a = [2, 15, 50], α = [3, 3, 3], θ = [1, 10, 100], and σ = 0.01. The results of deconvolving equation (<ref>) are shown in the figure, indicating successful restoration of all peaks. However, it should be noted that the results for small λ are less reliable, as expected.
Finally, figure <ref> showcases the deconvolution of experimentally measured NMR relaxation data. The noise-like residues depicted in the lower frame of the figure validate that the software performs as expected, providing a good fit to the data.
For additional testing results, as well as the software interface, Jupyter notebooks are available on Google Drive <cit.>. Testing results for inverting noisy Laplace transforms with the computation of derivatives and/or integrals of the pre-image function directly from the noisy Laplace transform (by solving the corresponding integral equation) can also be found <cit.>.
§ CONCLUSION
In this paper, we have presented quite satisfactory results for the regularized numerical inversion of noisy Laplace transforms known on a set of points. In order to invert such transforms, it is essential to introduce an appropriate parametrized discretization and determine the optimal values for all parameters. The optimal values for the regularization and discretization parameters can be computed simultaneously by solving a minimization problem based on a regularization parameter search criterion. This approach has also proven successful in solving the inverse problem of NMR relaxometry.
Hence, it is reasonable to expect that other Fredholm integral equations of the first kind can be resolved more accurately by introducing an appropriate parametrized discretization.
Acknowledgments
The completion of this work would not have been possible without the unwavering support of the author's family. The author expresses special gratitude to his wife, Helen Kryzhnyaya, for supporting his decades-long voluntary research on the inversion of real-valued Laplace transforms.
The author is also thankful to Green Imaging Technology, Inc. for providing experimental data for testing purposes and granting permission to use it in this paper.
References
[Istratov] Istratov A A and Vyvenko O F 1999, Exponential analysis in physical phenomena, Rev. Sci. Instrum. 70 1233–57
[Kroeker] Kroeker R M and Henkelman R M 1986, Analysis of biological NMR relaxation data with continuous distributions of relaxation times, J. Magn. Reson. 69 218–35
[Tikhonov] Tikhonov A N and Arsenin V Yu 1977, Solutions of Ill-Posed Problems (Washington, DC: Winston and Sons)
[Varah] Varah J M 1983, Pitfalls in the numerical solution of linear ill-posed problems, SIAM J. Sci. Stat. Comput. 4 164–76
[Hansen] Hansen P C 1992, Numerical tools for analysis and solution of Fredholm integral equations of the first kind, Inverse Probl. 8 849–72
[Hansen2] Hansen P C 1998, Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion (Philadelphia: SIAM)
[kr1] Kryzhniy V V 2006, Numerical inversion of the Laplace transform: analysis via regularized analytic continuation, Inverse Probl. 22 579–97
[kr2] Kryzhniy V V 2010, On regularization method for numerical inversion of the Laplace transforms computable at any point on the real axis, J. Inverse Ill-Posed Probl. 18 409–19
[github] <https://github.com/Vladimir-Kryzhniy/inversion-of-real-valued-Laplace-transforms>
[gd] <https://drive.google.com/drive/folders/1W_SQjEe-xzFSWRcZY-wLyUt85DbJfzGI?usp=sharing>
|
http://arxiv.org/abs/2307.04436v1 | 20230710092159 | Full event simulation of Photoproduction at NLO QCD in Sherpa | [
"Peter Meinzinger"
] | hep-ph | [
"hep-ph"
] |
Full event simulation of Photoproduction at NLO QCD in Sherpa
Peter Meinzinger
Institute for Particle Physics Phenomenology,
Durham University, Durham DH1 3LE, UK
Photoproduction is an important mode for the production of jets and electroweak particles at lepton–lepton and lepton–hadron colliders and allows for interesting studies of exclusive production at hadron–hadron colliders. In this talk, I will review recent efforts to extend the Sherpa event generator to include the calculation of photoproduction cross sections for electron and proton beams, including the simulation of underlying events. The framework is validated using data on jet production at the HERA and LEP experiments and on lepton production at the LHC. I will discuss advances towards achieving matched NLO accuracy and fully capturing the dynamics of inclusive and exclusive photoproduction at different colliders.
DIS2023: XXX International Workshop on Deep-Inelastic Scattering and
Related Subjects,
Michigan State University, USA, 27-31 March 2023
§ INTRODUCTION
The cross section of jet production at lepton–hadron or lepton–lepton collider experiments is dominated by the exchange of a virtual photon. While, in particular at the latter, this is well understood at large photon virtualities, the descriptive power of the theoretical calculations deteriorates with decreasing virtuality <cit.>. This has been reflected in decomposing the full cross section into electro- and photoproduction where the latter is identified with a regime where the photon is quasi-real and has to be seen as the incoming particle.
Simulating these events needs a different approach than the typical DIS processes. Here we report on the implementation of the relevant physics and its validation in Sherpa.
§ SIMULATION IN
§.§ Photon flux
As the electron decouples from the hard interaction in the scattering, the flux of the quasi-real photons has to be calculated. In the Weizsäcker-Williams approximation <cit.> the cross section is calculated as
dσ_e p → e^' + 2j + X = σ_γ p → 2j + X(x, s)|_Q^2=0 dn(x) ,
where the electron momentum can be reconstructed from the photon and the photon virtuality is integrated out in the equivalent photon flux dn, leaving only the maximum virtuality Q^2_max as a free parameter, which has to be determined by the experimental setup and by the considered process.
For the measurements considered in this study, the photon flux for electron beams includes a mass-dependent correction as proposed in <cit.>:
dn(x) = α_em/2 πdx/x[ [ 1 + (1 - x)^2 ] log( Q^2_max/Q^2_min) +
2 m_e^2 x^2 ( 1/Q^2_min - 1/Q^2_max) ]
Here, x is the fraction of the photon momentum with respect to the electron momentum, m_e is the electron mass and Q_min/max are the minimum and maximum photon virtualities, where the former is given by kinematic constraints as Q^2_min = m_e^2 x^2/1 - x.
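Transcribed directly, the flux dn(x) above can be sketched as follows (GeV units; the prefactors and the sign convention of the mass correction follow the text, not the Sherpa source):

```python
# Equivalent-photon flux dn/dx for an electron beam (illustrative sketch).
import numpy as np

ALPHA_EM = 1.0 / 137.035999
M_E = 0.51099895e-3            # electron mass [GeV]

def dn_dx(x, q2_max):
    """Photon number density dn/dx at momentum fraction x."""
    q2_min = M_E**2 * x**2 / (1.0 - x)        # kinematic limit
    return (ALPHA_EM / (2.0 * np.pi) / x *
            ((1.0 + (1.0 - x)**2) * np.log(q2_max / q2_min)
             + 2.0 * M_E**2 * x**2 * (1.0 / q2_min - 1.0 / q2_max)))
```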
§.§ Parton distributions in the photon
Initial State Radiation off the photon cannot be neglected in photoproduction of jets, necessitating the inclusion of the resolved photon component in the calculation, i.e. its hadronic structure. Hence <cit.>:
dσ_γ p → 2 j + X = dσ_γ p → 2 j + X^(hl) + dσ_γ p → 2 j + X^(pl) , with
dσ_γ p → 2 j + X^(hl) = ∑_ij∫dx f_i/γ(x, μ_F^') f_j/p(x, μ_F) dσ̂_ij (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^')
dσ_γ p → 2 j + X^(pl) = ∑_j ∫dx f_j/p(x, μ_F) dσ̂_γ j (p_γ, x p_p, α_S, μ_R, μ_F, μ_F^') ,
where the superscripts stand for the hadron- and point-like photon respectively, the f_i/A are the parton distribution functions (PDFs) related to finding parton i in particle A, the μ_F, R are the factorisation and renormalisation scales, and p are the momenta.
The photon PDF obeys an evolution slightly different to hadronic PDFs, due to the presence of a QED splitting kernel, leading to
∂ f_i/γ/∂logμ^2 = α_S/2 π∑_j P_ij⊗ f_j/γ + α_em/2 π P_iγ
with P the splitting kernels and where the first term is the usual QCD evolution and the latter the QED evolution stemming from a photon splitting into two quarks.
Photon PDFs from Glück-Reya-Vogt <cit.>, Glück-Reya-Schienbein <cit.>, Slominski-Abramowicz-Levy <cit.>, and Schuler-Sjöstrand <cit.> have been included in .
As exemplified in a comparison between two PDF sets (the SAS1D paramterisation by Schuler-Sjöstrand and the set by Slominski-Abramowicz-Levy) in Fig. <ref>, there are large deviations, especially in the gluon distribution function.
The distinction of direct and resolved processes cannot be maintained at Next-to-Leading-Order (NLO) due to the ambiguity of real emissions. While the resolved-photon processes can be computed at NLO analogously to jet production in p-p collisions, the direct-photon processes show divergences stemming from the photon splittings P_iγ. However, in <cit.> it was shown that these divergences cancel against the resolved-photon cross-section, as these splittings are re-absorbed into the PDF by means of the inhomogeneous term proportional to P_iγ in the evolution equation. Hence, these divergences can be subtracted from dσ_γ p → 2 j + X^(pl), and care only has to be taken to use a photon PDF with the correct evolution and the same factorisation scheme as in the matrix element generation. The calculation can then be matched to the parton shower with the MC@NLO prescription. The main difference lies in the fact that momentum fractions have to be calculated with respect to the variable photon energies instead of fixed beam energies.
§ VALIDATION
For validation, the simulation has been compared to data from the HERA and LEP colliders, namely photoproduction of one or two jets at the ZEUS, OPAL and L3 experiments. Typical observables in these analyses are the (average) jet transverse energy E_T, pseudo-rapidity η, cosΘ^*, which approximates the angle between two jets, and x_γ^±, which is defined as
x_γ^± = ( ∑_j=1,2 E^(j)± p_z^(j)) / ( ∑_i∈ hfs E^(i)± p_z^(i) )
and works as a proxy to experimentally distinguish the direct from the resolved modes.
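As a sketch of how this observable is computed from reconstructed four-momenta (conventions assumed, not taken from the Sherpa implementation):

```python
# x_gamma^± from the two leading jets and the hadronic final state (hfs);
# four-momenta are (E, px, py, pz) tuples.
def x_gamma(jets, hfs, sign=+1):
    num = sum(E + sign * pz for (E, _px, _py, pz) in jets[:2])
    den = sum(E + sign * pz for (E, _px, _py, pz) in hfs)
    return num / den
```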
In Fig. <ref>, we studied at LO, where all PDF sets could be used, the uncertainties arising from the different PDF parametrisations, and found significant deviations, in agreement with the large discrepancies in the parton distributions. This underlines the need for a new fit to the available data and a more thorough study of the parton distribution of the real and quasi-real photon. Overall, the simulation shows good agreement with the data within the uncertainties. The results at NLO, cf. Figs. <ref> and <ref>, were generated as an average over the SAS1M and SAS2M PDF sets, which use the MSbar scheme.
§ OUTLOOK
§.§ Minimum Bias photoproduction for the LHC
Multiple-parton interactions are non-negligible in photoproduction <cit.> and the implementation based on <cit.> has been extended to also cover parametrisations of γ p and γγ interactions.
One object of study could be the simulation of Minimum Bias events where interactions are allowed not only between the two proton beams, but also in the photon–proton and photon–photon systems, to examine systems with rapidity gaps at the LHC.
When studying semi-diffractive processes, e.g. at the LHC, the LUXqed PDF can be used to access both the elastic and the dissociative contributions to the photoproduction processes.
§.§ Diffractive photoproduction and pomeron exchange
The diffractive production of jets is often understood in terms of a pomeron exchange, which is factorized into a pomeron flux and a pomeron parton distribution. At HERA this factorisation was observed to break down, so there is ongoing interest in understanding this phenomenon <cit.>, especially in view of the upcoming Electron-Ion Collider.
The implementation of the pomeron flux is work in progress in Sherpa.
§ SUMMARY
We showed progress in Sherpa towards including photoproduction at various colliders and achieving matched NLO accuracy in QCD. The validation has been done at LO and is ongoing for NLO. We also discussed several ideas for how to extend the framework further and use it in experimental studies at the LHC and the EIC.
|
http://arxiv.org/abs/2307.04237v2 | 20230709175337 | Study of exponential wormhole metric in $f(R)$ gravity | [
"Partha Pratim Nath",
"Debojit Sarma"
] | gr-qc | [
"gr-qc"
] |
Study of exponential wormhole metric in f(R) gravity

Partha Pratim Nath^1 and Debojit Sarma^2
Department of Physics, Cotton University
^1 [email protected]
^2 [email protected]
In this work, we have studied an "exponential form" of spacetime metric:
ds^2 = -e^-2m/rdt^2 +e^2m/rdr^2 + e^2m/r[r^2 dθ^2 + r^2 sin^2θ dϕ^2]
in some of the viable f(R) gravity models, viz. exponential gravity model, Starobinsky gravity model, Tsujikawa model and Gogoi-Goswami f(R) gravity model. Here we have calculated the parameters including energy density, tangential and radial pressure for these corresponding models of f(R) gravity. Subsequently we have investigated the energy conditions viz. null energy condition(NEC), weak energy condition(WEC) and strong energy condition(SEC) for the considered models. We have also explained the suitable conditions of energy for these models by related plots.
Keywords. Wormhole Geometry, Modified gravity, Energy Conditions.
§ INTRODUCTION
Wormholes in General Relativity belong to a special class of solutions to Einstein's field equations: tube-like or bridge-like structures that connect two distinct points of the same spacetime, or of two different universes. The tubular structure is considered to be asymptotically flat on both sides. One of the main features of a wormhole is its throat, which can be defined as a two-dimensional hypersurface of minimal area, or the point where the radius is minimum <cit.>. The concept of this bridge-like structure was first constructed by Einstein and Rosen and is known as the Einstein-Rosen bridge <cit.>. They inspected the exact solution that describes the geometry of the bridge. Their solution is linked with the work of Ludwig Flamm <cit.>, who first constructed the isometric embedding of the Schwarzschild solution, though his solutions suffered from stability problems. In 1928, Hermann Weyl <cit.> proposed a wormhole hypothesis of matter in connection with the mass analysis of electromagnetic field theory; however, he used the term "one-dimensional tube" instead of "wormhole". Later, Ellis <cit.> introduced another term for the wormhole, the "drainhole". Wheeler <cit.> named such objects "geons" and predicted the shape of a wormhole that offers a twofold space. Wheeler and Misner <cit.> coined the term "wormhole", and their solutions were later transformed into Euclidean wormholes by Hawking <cit.> and others. These theoretical objects lead to various static and non-static wormholes, according to whether the radius of the wormhole throat is fixed or variable. Shortly afterward, Kar <cit.> discussed static wormholes and inquired into their properties with examples, and Kar and Sahdev <cit.> explored evolving Lorentzian wormholes. Such wormholes have a quantum structure and connect different points of space on the Planck scale. None of these wormholes, however, were stable or traversable.
Traversability is also an important feature of a wormhole: if anything that enters through one side of the wormhole can exit through the other, the wormhole is traversable. To be traversable, the wormhole must not contain a horizon, because the presence of a horizon would prevent two-way travel through it. Morris and Thorne <cit.> introduced the idea of the traversable wormhole, with some new concepts such as the throat. They examined static spherically symmetric wormholes using the principles of General Relativity and introduced the fundamental theory of traversable wormholes. The energy-momentum tensor of the matter supporting such geometries necessitates the introduction of exotic matter at the wormhole throat <cit.>, which leads to the violation of the null energy condition (NEC) and the averaged null energy condition (ANEC) <cit.> near the throat region. Exotic matter is a form of dark energy (having an EoS with ω<-1/3) and produces a repulsion. Recent observations have shown that dark energy is solely responsible for the accelerated expansion of the universe. Since then, wormholes have been studied from various aspects and under various conditions <cit.>. Since exotic matter is a troublesome issue, many justifications have been presented in favor of the violation of the energy conditions, such as invoking quantum fields in curved spacetime, scalar-tensor theories <cit.>, and so on. Many efforts have been made to reduce the use of exotic matter. The "volume integral quantifier" is one of the most famous propositions, which quantifies the total amount of energy-condition-violating matter <cit.>. Nandi and others <cit.> further improved this formulation to determine the exact quantity of exotic matter present in a given spacetime. Additionally, there have been proposals for confining the exotic matter at the throat of the wormhole, viz. the cut-and-paste method <cit.>. To avoid the energy violations, thin-shell wormholes <cit.> were studied, where ordinary matter is concentrated on the throat of the wormhole. In recent years, wormhole solutions have been developed in the background of modified gravity theories such as Kaluza-Klein gravity <cit.>, Born-Infeld theory <cit.>, Brans-Dicke theory <cit.>, mimetic theories <cit.>, f(R) gravity <cit.>, Einstein-Gauss-Bonnet theory <cit.>, and Einstein-Cartan theory <cit.>. It has been shown in modified theories of gravity that the matter inside the wormhole may satisfy the necessary energy conditions, while the effective stress-energy tensor <cit.> containing higher-order derivatives is responsible for the violation of the NEC.
Modified or extended theories of gravity offer logical explanations for observational phenomena that can hardly be explained within the General Theory of Relativity. For example, dark energy <cit.>, dark matter <cit.>, massive pulsars <cit.>, super-Chandrasekhar white dwarfs <cit.>, etc., can be explained with the help of gravity theories such as f(R) <cit.> and f(τ) <cit.>, where R and τ are, respectively, the Ricci and torsion scalars. One of the simplest modifications of the Einstein-Hilbert action is the f(R) theory of gravity, in which the curvature (Ricci) scalar R in the gravitational action is replaced by f(R), an arbitrary function of R <cit.>. Buchdahl <cit.> first proposed the f(R) gravity model in 1970.
The field equations obtained in f(R) theory are very complicated and admit a larger set of solutions than those of the General Theory of Relativity. Bertolami <cit.> and others generalized this theory by introducing a coupling between matter and the function f(R), which leads to an extra force that may explain the acceleration of the universe <cit.>. Using f(R)=R+α R^2, Starobinsky <cit.> first derived an early inflationary universe solution long before the efficacy of the inflaton was known. The late-time cosmic acceleration has been explained by Carroll et al. <cit.> in the context of f(R) gravity, and many researchers have studied numerous viable cosmological models in f(R) gravity <cit.>. Bertolami, Sotiriou, Harko and others explored the coupling of an arbitrary function of R with the matter Lagrangian density. Constraints on f(R) gravity from strong lensing were studied in the Palatini formalism by Yang and Chen <cit.>. Capozziello and De Laurentis <cit.> gave a different approach to the dark matter problem in the context of f(R) gravity. Bronnikov and Starobinsky <cit.> demonstrated that wormholes cannot be formed in dark matter models governed by scalar-tensor theory, even in the presence of electric and magnetic fields. Bronnikov et al. <cit.> showed that for df/dR=F(R)<0, the non-existence theorem for wormholes can be violated in f(R) theory of gravity. Both Brans-Dicke theory and f(R) gravity have been considered for obtaining wormholes: it has been shown <cit.> that no vacuum wormhole exists in Brans-Dicke theory, but one exists in f(R) gravity if the theory admits an extremum where the effective gravitational constant changes its sign. Bronnikov et al. <cit.> have also given a no-go theorem in General Relativity which eliminates the existence of wormholes with flat or AdS asymptotic regions on both sides of the throat when the source matter is isotropic.
In f(R) gravity, Lobo and Oliveira <cit.> obtained wormhole solutions in which the matter threading the wormhole is a fluid that satisfies the energy conditions, while the higher-order terms of f(R) theory give rise to the energy-condition violations. Beato et al. <cit.> have shown that exotic matter is not mandatory for constructing a traversable wormhole; similarly, Harko et al. <cit.> and Pavlovic and Sossich <cit.> have shown that f(R) theory can describe the wormhole geometry without any kind of exotic matter. Mazharimousavi and Halilsoy <cit.> also constructed a traversable wormhole model that satisfies the energy conditions. Exact solutions of traversable wormholes with non-constant Ricci scalar have been obtained by Golchin and Mehdizadeh <cit.>. Restuccia and Tello-Ortiz <cit.> have given a new class of f(R) gravity models and studied the cosmological parameters. Spherically symmetric Lorentzian wormholes with constant scalar curvature have also been investigated in quadratic f(R) gravity <cit.>. De Benedictis and Horvat <cit.> showed the existence of the wormhole throat in f(R) gravity and studied its properties, while Sharif and Zahra <cit.> investigated wormhole solutions for isotropic and anisotropic fluids and for a barotropic equation of state with the radial pressure. In numerous f(R) models, wormholes have been studied using the Karmarkar condition <cit.>, and many physicists have studied wormholes in various f(R) gravity models using different redshift and shape functions <cit.>. Vittorio et al. developed astrophysical techniques to detect wormholes and, at the same time, to reconstruct the solution once they have been observed <cit.>.
In this work, we investigate the so-called exponential wormhole metric in f(R) gravity. This metric has been investigated by many researchers for more than 60 years. It has the charming property of passing almost all of the standard lowest-order weak-field tests of General Relativity, while its strong-field and medium-field behaviour is very different. The paper is organized as follows. In Sec. II, we study the exponential wormhole metric in General Relativity, including properties such as the throat radius, the Karmarkar condition, the field equations, the flare-out condition, and the Ricci convergence condition. In Sec. III, we study the exponential wormhole metric in modified f(R) gravity: we first construct the field equations and, with their help, study the metric in four viable f(R) gravity models. Finally, we present results and discussion in Sec. IV.
§ EXPONENTIAL WORMHOLE METRIC IN GENERAL RELATIVITY
The exponential wormhole metric <cit.>,
ds^2 = -e^-2m/rdt^2 +e^2m/rdr^2 + e^2m/r[r^2 dθ^2 + r^2 sin^2θ dϕ^2],
has an attractive feature in weak fields <cit.>: when 2m/r ≪ 1, we have
ds^2=[-dt^2+dr^2+r^2(dθ^2 + sin^2θ dϕ^2)]+2m/r[dt^2+dr^2+r^2(dθ^2 + sin^2θ dϕ^2)].
i.e.,
g_ab=η_ab +2m/rδ_ab.
As this matches with the lowest order field expansion, so the exponential metric will pass all the lowest order weak field test of General Relativity. But strong field and medium field behaviour are very different <cit.>.
§.§ Wormhole throat
In order to find the radius of the throat of the wormhole, let us consider the area of the spherical surface,
S(r)= 4π r^2 e^2m/r.
For the extremum value of S(r),
d S(r)/dr=8π (r-m)e^2m/r=0,
which gives r=m, which is the radius of the throat. Because at r=m, we find that
d^2 S(r)/dr^2= 8π e^2>0,
i.e., the area has a minimum at r=m. Again, all the metric components are finite and the diagonal components are non-zero at the throat (i.e. at r=m).
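This minimization is easy to verify symbolically, for instance with sympy:

```python
# Symbolic check of the throat radius.
import sympy as sp

r, m = sp.symbols("r m", positive=True)
S = 4 * sp.pi * r**2 * sp.exp(2 * m / r)
dS = sp.factor(sp.diff(S, r))                    # 8*pi*(r - m)*exp(2*m/r)
print(sp.solve(sp.Eq(dS, 0), r))                 # [m]  -> throat at r = m
print(sp.simplify(sp.diff(S, r, 2).subs(r, m)))  # 8*pi*exp(2) > 0: a minimum
```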
§.§ Karmarkar Condition
For a static and spherically symmetric line element to be class one, Karmarkar <cit.> developed a mandatory condition. For the exponential metric, some of Riemann curvature tensors are,
R_1414= 2m(m-r)e^-2m/r/r^4 ; R_1212=-me^2m/r/r; R_1224=R_1334=0;
R_3434=-m(m-r)sin^2 θ e^-2m/r/r^2; R_2323=-m(m-2r)sin^2 θ e^-2m/r,
These Riemann components fulfilling the well known Karmarkar relation,
R_1414=R_1212R_3434+R_1224R_1334/R_2323
with R_2323≠ 0. The spacetime, that satisfies the Karmarkar condition is known as embedding class one. Now, by substituting the non-zero Riemann components in the above relation, we get
m^2(2m-3r)(m-r)sin^2θ/r^4=0.
Solving the above relation gives m=0, m=r, or m=3r/2. Since we have found the throat radius to be r=m, the exponential wormhole metric fulfills the class one requirement at or near the throat, while m=0 is forbidden by the flare-out condition.
§.§ Field equations
For the exponential metric, the explicit forms of the non-zero Einstein tensor components are,
G^r_r=-G^t_t=-G^θ_θ=-G^ϕ_ϕ=-m^2 e^-2m/r/r^4.
In the units of c=1, G=1, it leads to,
ρ(r)=p_r(r)=-p_t(r)=-m^2 e^-2m/r/r^4.
where ρ, p_r and p_t stand for the energy density, the radial pressure and the tangential pressure, respectively. From the above relation we can see that
ρ+p_r=-2 m^2 e^-2m/r/r^4<0,
ρ + p_t=0
and
ρ + p_r+ 2 p_t=0
At the throat, the above takes the value,
ρ+p_r = -2/(me)^2 < 0; ρ+p_t = 0; ρ+p_r+2p_t = 0
and
ρ =-1/(me)^2<0.
In terms of the principal pressures, the energy conditions are given as,
Null Energy Condition(NEC): ρ+p_r≥0, ρ+p_t≥ 0
Weak Energy Condition(WEC): ρ≥0, ρ+p_r≥0, ρ+p_t≥ 0
Strong Energy Condition(SEC): ρ≥0, ρ+p_t≥0, ρ+p_r+2 p_t≥0
It is seen that the exponential wormhole metric partially violates all the energy conditions, especially at or near the throat, as can be seen more clearly from FIG(<ref>).
§.§ Flare-out condition
The flare-out condition is more understandable through the embedding geometry. The embedded spacetime at t=constant and θ=π/2 for the exponential wormhole metric is given by,
ds^2_e= e^2m/r[dr^2+ r^2 dϕ^2]
In three dimensional Euclidean space the embedded surface has equation z=z(r), so that the metric of the surface can be written as,
ds^2_e=[1+(dz/dr)^2]dr^2+e^2m/rr^2 dϕ^2
Comparing the relations Eq.(<ref>) and Eq.(<ref>), we get
dz/dr=±(e^2m/r-1)^1/2.
Here, we observe that dz/dr → 0 as r→∞, which implies that the space is asymptotically flat <cit.>. Now, the flare-out condition is given by the minimality of the wormhole throat as,
d/dz(dr/dz)=m e^2m/r/r^2(e^2m/r-1)^2>0
i.e., m>0. Again, for the exponential wormhole metric, surface tension τ is given as <cit.>,
τ=m^2 e^-2m/r/r^4
Usually, the exoticity function ζ is used for the flare-out condition, which is given as,
ζ=(τ-ρ)/|ρ|>0.
For the exponential wormhole metric, the value of ζ comes out as,
ζ=(τ-ρ)/|ρ|=2>0
So, for the exponential wormhole metric, (τ-ρ)>0 everywhere, i.e. the metric obeys the flare-out condition everywhere. It is assumed that the wormhole should have a surface tension large compared to the energy density in order to sustain the geometry; this condition seems physically reasonable, although it violates the Weak Energy Condition (WEC) or the averaged WEC, so the use of exotic matter should be minimized.
§.§ Curvature tensor
The non-zero components of Riemann tensor, Rici tensor, Ricci curvature scalar, Kretschmann scalar and other related scalars for the exponential metric are,
R^tr_tr=-2R^tθ_tθ=-2R^tϕ_tϕ=2me^-2m/r(r-m)/r^4,
R^rθ_rθ=R^rϕ_rϕ=-m/r^3e^-2m/r,
R^θϕ_θϕ=m(2r-m)/r^4e^-2m/r,
R^a_b=-2m^2/r^4e^-2m/r diag(0,1,0,0)^a_b,
R=-2m^2/r^4e^-2m/r,
R_abcdR^abcd=4m^2(12r^2-16mr+7m^2)/r^8 e^{-4m/r},
C_abcdC^abcd=16m^2(3r-2m)^2/(3r^8) e^{-4m/r},
R_abR^ab=4m^4/r^8 e^{-4m/r}
The non-zero electric parts of the Weyl tensor are,
E_θθ=E_ϕϕ=-2E_rr=2m(r-m)/3r^4e^-2m/r,
E_tt=-m(m+2r)/3r^4e^-6m/r,
E_abE^ab=m^2 e^{-8m/r}[m^2(4+17r^4)-4mr(2+7r^4)+4(r^2+5r^6)+4(m-r)^2 cosec^4θ]/(9r^{12})
All these components are finite and do not diverge at r=0 or r=m; they decrease to zero both as r→∞ and as r→0. So we can say that the exponential wormhole metric does not contain any kind of Weyl or oscillating Ricci singularity <cit.>.
The four curvature invariants viz. Ricci scalar, the first two Ricci invariants and the real component of the Weyl invariant are <cit.>,
R= -2m^2/r^4e^-2m/r,
r_1=1/4S^b_a S^a_b =3m^4/4r^8e^-4m/r,
r_2=-1/8S^b_a S^a_c S^c_b=3m^6/8r^12e^-6m/r
and
ω_2=-1/8C̅_abcdC̅^abefC̅^cd_ef=-32m^3(2m-3r)^3/(9r^{12}) e^{-6m/r},
where S_ab=R_ab-1/4g_abR.
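The values of r_1 and r_2 quoted above follow from simple matrix algebra on S^a_b and can be checked symbolically, e.g.:

```python
# With R^a_b = A*diag(0,1,0,0) and R = A, where A = -2 m^2 e^{-2m/r}/r^4,
# the traceless Ricci part is S^a_b = A*diag(-1/4, 3/4, -1/4, -1/4).
import sympy as sp

r, m = sp.symbols("r m", positive=True)
A = -2 * m**2 * sp.exp(-2 * m / r) / r**4
S = A * sp.diag(sp.Rational(-1, 4), sp.Rational(3, 4),
                sp.Rational(-1, 4), sp.Rational(-1, 4))
r1 = sp.simplify((S * S).trace() / 4)        # 3 m^4 e^{-4m/r} / (4 r^8)
r2 = sp.simplify(-(S * S * S).trace() / 8)   # 3 m^6 e^{-6m/r} / (8 r^12)
print(r1, r2)
```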
When the curvature invariants are plotted, FIG(<ref>), it is seen that they are all nonzero and depend only on the radial coordinate r, which reflects the spherical symmetry. Again, they are finite at r=m (at the throat) and decay to zero as r→∞. R and ω_2 have a minimum near the throat, whereas r_1 and r_2 have a maximum. These plots are finite everywhere, indicating the absence of a horizon. So we can conclude that the exponential wormhole metric represents a traversable wormhole.
§.§ Ricci convergence
Any Lorentzian spacetime is said to fulfil the timelike, null and spacelike Ricci convergence condition if for all timelike, null or spacelike vectors t^a one has <cit.>,
R_abt^at^b ≥ 0
Now, for the exponential wormhole metric, we can see that,
R_ab=-2m^2/r^4diag(0,1,0,0)_ab.
So the Ricci convergence condition leads to,
R_abt^at^b=-2m^2/r^4(t^r)^2 ≤ 0.
So, the exponential wormhole metric violates the null Ricci convergence condition for all timelike, null and spacelike vectors. Again we can show that,
R_ab=-2m^2/r^4diag(0,1,0,0)_ab=-1/2∇_a(2m/r)∇_b(2m/r)=-1/2∇_aΦ∇_bΦ.
and
G_ab=-1/2[∇_aΦ∇_bΦ-1/2g_ab(g^cd∇_c Φ∇_d Φ)]
i.e., the Einstein equation for a negative kinetic energy massless scalar field, a ghost or phantom field. The contracted Bianchi identity G^{ab}_{;b}=0 gives the scalar field equation of motion (g^{ab}∇_a ∇_b)Φ=0, which indicates that the exponential wormhole metric represents a traversable wormhole <cit.>.
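This can be verified directly; a short symbolic check using the coordinate form of the d'Alembertian:

```python
# Check that Phi = 2m/r satisfies the massless scalar equation on the
# exponential metric: box(Phi) = (1/sqrt(-g)) d_a( sqrt(-g) g^{ab} d_b Phi ).
import sympy as sp

t, r, th, ph, m = sp.symbols("t r theta phi m", positive=True)
g = sp.diag(-sp.exp(-2*m/r), sp.exp(2*m/r),
            sp.exp(2*m/r) * r**2, sp.exp(2*m/r) * r**2 * sp.sin(th)**2)
ginv, sqrtg = g.inv(), sp.sqrt(-g.det())
Phi = 2 * m / r
box = sum(sp.diff(sqrtg * ginv[a, a] * sp.diff(Phi, x), x)
          for a, x in enumerate((t, r, th, ph))) / sqrtg
print(sp.simplify(box))   # -> 0
```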
§ EXPONENTIAL WORMHOLE METRIC IN F(R) GRAVITY
The gravitational action for f(R) gravity can be defined as,
S=1/2k∫ [f(R)+L_m]√(-g)d^4x,
where k=8π G, L_m and g stand for the matter Lagrangian density and the determinant of the metric g_μν respectively. Here, for simplicity, we will consider k as unity. Now varying the Eq.(<ref>) with respect to the metric g_μν gives the field equations as,
FR_μν-1/2fg_μν-∇_μ∇_νF+ F g_μν=T^m_μν,
where R_μν represents Ricci tensor and F=df/dR. Now we can consider the contraction of Eq.(<ref>) to obtain the relation,
FR-2f+3 F=T.
Where R=g^μνR_μν and T=g^μνT_μν represent Ricci scalar and trace of stress energy tensor respectively. Combining the Eqs.(<ref>) and Eq.(<ref>), the effective field equations are calculated as,
G_μν≡ R_μν-1/2Rg_μν=T^eff_μν, with T^eff_μν=T^c_μν+T^m_μν/F,
where
T^c_μν=1/F[∇_μ∇_ν F-1/4g_μν(FR+□F+T)].
The energy momentum tensor for the matter source of the wormholes is T_μν=∂ L_m/∂ g^μν, which is defined as,
T_μν=(ρ+p_t)u_μ u_ν-p_t g_μν+(p_r-p_t)X_μ X_ν,
such that
u^μu_ν=-1 and X^μX_ν=1,
where u_μ is the four velocity and X_μ is the unit space-like vector. Again ρ, p_r and p_t are energy density, radial pressure and tangential pressure respectively. Now Einstein's field equation for the metric Eq.(<ref>) in f(R) gravity can be solved as,
ρ=-e^{-2m/r}(e^{2m/r}Hr^4+m^2 F(r)+mr^2F^'(r))/r^4,
p_r=e^{-2m/r}(e^{2m/r}Hr^4-m^2F(r)+mr^2 F^'(r)+r^4 F^''(r))/r^4
and
p_t=e^{-2m/r}(e^{2m/r}Hr^4+m^2F(r)-mr^2F^'(r)+r^3F^'(r))/r^4
where H=1/4(FR+□F+T), F^'=dF(r)/dr and F^''=d^2 F(r)/dr^2.
Bronnikov and Starobinsky <cit.> considered the stability condition for wormhole geometries that are free from ghosts. They showed that no realistic wormhole can be constructed in scalar-tensor models for a positive scalar function. In f(R) gravity, the non-existence theorem for wormholes can be violated if df/dR=F(R) is negative <cit.>. In agreement with classical General Relativity, the violation of the Null Energy Condition (NEC), ρ+p_r ≥ 0, ρ+p_t ≥ 0, and of the Weak Energy Condition (WEC), ρ≥ 0, ρ+p_r ≥ 0, ρ+p_t ≥ 0, is mainly due to the presence of exotic matter; in particular, the wormhole throat does not respect the NEC <cit.>. In order to check the necessary NEC, we simplify our calculations for ρ+p_r and ρ+p_t. For the exponential wormhole metric, to obey the NEC,
ρ+p_r=e^-2m/r(-2m^2 F(r)+r^4 F^''(r))/r^4
and
ρ+p_t= e^-2m/r(-2m+r)F^'(r)/r^2
should be positive at the throat. In the next section, we will discuss exponential wormhole solutions under the influence of four different viable f(R) gravity models.
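The sign analysis of the following subsections amounts to substituting a model's F(R), evaluated on the wormhole background, into these expressions. As an illustrative sketch (with arbitrary parameter values), taking the exponential model of the next subsection:

```python
# NEC combinations for a chosen F(r) = F(R(r)); here F = 1 - mu*e^{-R/R_0}.
import sympy as sp

r, m = sp.symbols("r m", positive=True)
mu, R0 = sp.symbols("mu R_0")
R = -2 * m**2 * sp.exp(-2 * m / r) / r**4     # Ricci scalar of the metric
F = 1 - mu * sp.exp(-R / R0)                  # F = df/dR, exponential model

nec_r = sp.exp(-2*m/r) * (-2*m**2*F + r**4*sp.diff(F, r, 2)) / r**4
nec_t = sp.exp(-2*m/r) * (r - 2*m) * sp.diff(F, r) / r**2
vals = {m: 1, mu: 1, R0: -2, r: sp.Rational(3, 2)}   # arbitrary illustration
print(sp.N(nec_r.subs(vals)), sp.N(nec_t.subs(vals)))
```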
§.§ The Exponential Gravity Model
The exponential gravity model was introduced and investigated by Cognola <cit.>. This model can describe the inflation of early universe and accelerated expansion of the current universe. The exponential model is defined as,
f(R)= R-μ R_0[1-e^-R/R_0]
where μ and R_0 are arbitrary constants. Now the equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) reduces to,
ρ= e^{-4m/r}/(r^8 R_0)[-e^{2m/r}r^4(m^2+e^{2m/r}Hr^4)R_0+e^{-R/R_0}m^2(4m(m-2r)+e^{2m/r}r^4 R_0)μ],
p_r= e^{-6m/r}/(r^{12} R_0^2)[-16 e^{-R/R_0}m^4(m-2r)^2μ + r^4 R_0[e^{4m/r}r^4(-m^2+Hr^4 e^{2m/r})R_0+ e^{2m/r}e^{-R/R_0}m^2(-12m^2+48mr-40r^2+r^4 R_0 e^{2m/r})μ]]
and
p_t= e^{-4m/r}/(r^8 R_0)[e^{2m/r}r^4(m^2+e^{2m/r}Hr^4)R_0+e^{-R/R_0}m^2(4m^2-12mr+8r^2-e^{2m/r}r^4 R_0)μ],
where, on this background, e^{-R/R_0}=e^{2m^2 e^{-2m/r}/(r^4 R_0)}.
Here we use m=1 and evaluate the graphical behaviour of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR
From the graph we observe that,
* If μ=+ve, R_0=-ve, then ρ+p_r ≥ 0
* If μ=+ve, R_0=-ve or μ = -ve, R_0= +ve, then ρ+p_t ≥ 0
* ρ≤ 0 for all combinations of values of μ and R_0
* ρ+p_r+2 p_t ≥ 0 for all combinations of value of μ and R_0.
* df/dR>0 for μ=-ve, R_0=+ve or μ=-ve, R_0=-ve and df/dR<0 for μ=+ve, R_0=-ve or μ=+ve,R_0=+ve.
So we can conclude that if μ=+ve, R_0=-ve, the necessary NEC is respected throughout the wormhole geometry and F=df/dR<0, while the WEC and SEC are partially violated. So for this combination of μ and R_0, we get a wormhole solution that violates the non-existence theorem in the presence of a negligible amount of exotic matter, whereas in the case of General Relativity, the NEC is violated by the exponential wormhole metric.
§.§ Starobinsky f(R) gravity model
This model was proposed by Starobinsky <cit.> and is one of the most recognized f(R) gravity models. It is consistent with cosmological conditions and satisfies solar system and laboratory tests. The Starobinsky model is given as,
f(R)=R+a R_0[(1+R^2/R_0^2)^-l-1],
where a, R_0 and l are free parameters. The field equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) of the exponential wormhole metric in f(R) gravity model reduce as,
ρ= -H+m^2[-e^{-2m/r}/r^4+
4alm(1+4m^4 e^{-4m/r}/(R_0^2 r^8))^{-l}R_0(4m^4(m+4lm-4(r+2lr))+e^{4m/r}(-3m+4r)r^8R_0^2)/(4m^4+ R_0^2 r^8 e^{4m/r})^2],
p_r= e^{-2m/r}/(r^4(4m^4+r^8 R_0^2 e^{4m/r})^3)(1+4m^4 e^{-4m/r}/(r^8 R_0^2))^{-l}[64a e^{2m/r}l m^{12}r^4 R_0+768a e^{2m/r}l^2 m^{12}r^4 R_0+
1024a e^{2m/r}l^3 m^{12}r^4 R_0-512a e^{2m/r}l m^{11}r^5 R_0-3072a e^{2m/r}l^2 m^{11}r^5 R_0-4096a e^{2m/r}l^3 m^{11}r^5 R_0
+768a e^{2m/r}l m^{10}r^6 R_0+3584a e^{2m/r}l^2 m^{10}r^6 R_0+4096a e^{2m/r}l^3 m^{10}r^6 R_0-416a e^{6m/r}l m^8 r^{12}R_0^3
-576a e^{6m/r}l^2 m^8 r^{12}R_0^3+1536a e^{6m/r}l m^7 r^{13}R_0^3+2304a e^{6m/r}l^2 m^7 r^{13}R_0^3-1536a e^{6m/r}l m^6 r^{14}R_0^3
-2176a e^{6m/r}l^2 m^6 r^{14}R_0^3+20a e^{10m/r}l m^4 r^{20}R_0^5-96a e^{10m/r}l m^3 r^{21}R_0^5+80a e^{10m/r}l m^2 r^{22}R_0^5
-(m^2-Hr^4 e^{2m/r})(1+4m^4 e^{-4m/r}/(r^8 R_0^2))^l(4m^4+r^8 R_0^2 e^{4m/r})^3]
and
p_t= H+m^2[e^{-2m/r}/r^4+
4al(1+4m^4 e^{-4m/r}/(r^8 R_0^2))^{-l}R_0(4m^4((3+4l)m^2+4(1+2l)r^2-6m(r+
2lr))-e^{4m/r}r^8(m^2-6mr+4r^2)R_0^2)/(4m^4+r^8 R_0^2 e^{4m/r})^2]
After plotting the graphs FIG(<ref>) and FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the following analysis,
If l=+ve
* If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_r ≥ 0 (for r>1.2).
* If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0.
* ρ≤ 0 for all combinations of a and R_0.
* ρ+p_r+2p_t ≥ 0 for all combinations of a and R_0.
* df/dR>0 for all combinations of a and R_0.
If l=-ve
* If a=-ve, R_0=+ve or a=+ve, R_0=-ve, then ρ+p_r ≥ 0 (for r>1.13).
* If a=-ve, R_0=+ve or a=+ve, R_0=-ve, then ρ+p_t ≥ 0.
* ρ≤ 0 for all combinations of a and R_0.
* ρ+p_r+2p_t ≥ 0 for all combinations of a and R_0.
* df/dR>0 for all combinations of a and R_0.
So we conclude that if l=+ve, a=+ve, R_0=+ve or l=+ve, a=-ve, R_0=-ve (for r>1.2), and if l=-ve, a=-ve, R_0=+ve or l=-ve, a=+ve, R_0=-ve (for r>1.33), then the NEC is respected throughout the geometry. The NEC is, however, partially violated at the throat, indicating a feasible traversable wormhole structure with a small amount of exotic matter at the throat. Again, F=df/dR>0 represents the non-spherically symmetric wormhole solution.
§.§ Tsujikawa f(R) gravity model
This model was represented by Tsujikawa <cit.> and it is defined as,
f(R)=R-μ R_0 tanh[R/R_0],
where μ and R_0 are arbitrary constants. The field equations Eq.(<ref>), Eq.(<ref>) and Eq.(<ref>) now reduce to,
ρ= e^{-4m/r}/(r^8 R_0)[-e^{2m/r}r^4(m^2+Hr^4 e^{2m/r})R_0+m^2 μ sech^2χ (r^4 R_0 e^{2m/r}-
8m(m-2r)tanhχ)],
p_r= e^{-6m/r}/(r^{12}R_0^2)[e^{4m/r}r^8(-m^2+ e^{2m/r}Hr^4)R_0^2+m^2 μ sech^2χ (e^{4m/r}r^8 R_0^2+32m^2(m-2r)^2
(3 sech^2χ-2)+8e^{2m/r}r^4(3m^2-12mr+10r^2)R_0 tanhχ)]
and
p_t= e^{-4m/r}/(r^8 R_0)[e^{2m/r}r^4(m^2+Hr^4 e^{2m/r})R_0+m^2 μ sech^2χ (-e^{2m/r}r^4 R_0-
8(m-2r)(m-r)tanhχ)],
where we abbreviate χ ≡ 2m^2 e^{-2m/r}/(r^4 R_0)=-R/R_0.
Tsujikawa described that μ∈ (0.905,1) <cit.> is required to sustain the viability of the model, whereas for the violation of the non-existence theorem of static spherically symmetric wormholes, F=df/dR>0. We evaluated the geometric nature of the wormhole structure through the energy conditions for μ=1.0135. From the following graphs FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the following analysis:
* If μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_r ≥ 0.
* If μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_t ≥ 0.
* ρ≤ 0 for all combinations of μ and R_0.
* ρ+p_r+2p_t ≥ 0 for all combinations of μ and R_0.
* df/dR>0 for μ=-ve, R_0=+ve or μ=-ve,R_0=-ve and df/dR<0 for μ=+ve, R_0=+ve or μ=+ve, R_0=-ve.
We can conclude that, for μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, the NEC is respected by the exponential wormhole metric and F=df/dR<0, but at the same time the WEC and SEC are partially violated since ρ<0. So for these particular combinations of μ and R_0, we get a wormhole solution that violates the non-existence theorem in the presence of a minimal amount of exotic matter.
§.§ Gogoi-Goswami f(R) gravity model
It is a new viable f(R) gravity model, constructed by Gogoi and Goswami <cit.>. This model is defined as,
f(R)=R-a/πR_0 cot^{-1}(R_0^2/R^2)-μ R_0[1-e^{-R/R_0}],
where a and μ are two dimensionless constants and R_0 is a characteristic curvature constant having dimensions same as curvature scalar R. The allowed range for a is -1.68381<a<0.367545. Now from the plots FIG(<ref>) and FIG(<ref>) of ρ, ρ+p_r, ρ+p_t, ρ+p_r+2p_t and F=df/dR, we get the following analysis,
If μ =+ve
* If a=-ve, R_0=-ve, then ρ+p_r ≥ 0 (also for a=+ve, R_0=+ve if r>1.55).
* If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0.
* ρ≤ 0, for all combinations of a and R_0.
* ρ+ p_r+2p_t ≥ 0, for all combinations of a and R_0.
* df/dR>0 for a=+ve, R_0=+ve or a=-ve, R_0=-ve and df/dR<0 for a=+ve, R_0=-ve or a=-ve, R_0=+ve.
If μ =-ve
* ρ+p_r ≤ 0, for all combinations of a and R_0.
* If a=+ve, R_0=+ve or a=-ve, R_0=-ve, then ρ+p_t ≥ 0.
* ρ≤ 0, for all combinations of a and R_0.
* ρ+ p_r+2p_t ≥ 0, for all combinations of a and R_0.
* df/dR>0 for all combinations of a and R_0.
From these we can conclude that the NEC is respected in the exponential wormhole geometry only if μ=+ve, a=-ve, R_0=-ve. But the WEC and the SEC are partially violated throughout the space-time, which indicates that the wormhole structure has normal matter at the throat. Again, F=df/dR>0 indicates the non-spherical symmetry of the wormhole solution. Similar results can be achieved for μ=+ve, a=+ve, R_0=+ve, but only if r>1.55; that is, for this specific combination the NEC is violated at the throat, which shows that the wormhole contains a small amount of exotic matter near the throat.
§ RESULTS AND DISCUSSION
In this paper, we have carried out a comparative study of the so called "exponential" wormhole metric in General Relativity and modified f(R) theory of gravity. We have constructed the field equations for this exponential metric for both the cases. In recent years, many researchers studied the Morris-Thorne wormhole with different redshift and shape function in various viable modified theory of gravity, but no one has ever studied the exponential wormhole metric in f(R) gravity.
The radius of the throat comes out as r=m; all the metric components are finite and the diagonal components are non-zero at r=m. In order to be class one (according to the Karmarkar condition), m can take the values m=0, m=r, or m=(3/2)r, of which m=0 is forbidden by the flare-out condition. Since m=r represents the throat radius, the exponential metric acts as a class one static and spherically symmetric line element at the throat. We have also studied this exponential wormhole metric in four viable f(R) gravity models, namely the exponential model, the Starobinsky model, the Tsujikawa model and the Gogoi-Goswami f(R) gravity model. The results obtained from the study in General Relativity and in those viable f(R) gravity models are as follows.
In General Relativity:
* From the field equations in General Relativity, we have examined the energy conditions and it is found that ρ+p_r<0, ρ+p_t=0, ρ<0 and ρ+p_r+2p_t=0, that is the NEC, WEC and SEC are violated throughout the space-time indicating the presence of exotic matter.
* The exponential wormhole metric obeys the flare-out condition everywhere and does not possess any kind of singularity.
* All the curvature components and the scalar invariants are finite everywhere; they are finite at the throat and decay to zero as r→∞ and as r→ 0.
* The exponential wormhole metric violates the null Ricci convergence condition, which is important for a better understanding of the flare-out conditions.
* The exponential wormhole metric gives the scalar field equation of motion (g^ab∇_a ∇_b)Φ=0, and we can derive the Einstein equations for a negative-kinetic-energy massless scalar field, or phantom field, which is evidence that the exponential wormhole metric represents a traversable wormhole.
In modified f(R) theory of gravity, we have obtained the field equations and also studied the energy conditions in four viable f(R) gravity model. The function f(R) has some free parameters or constants in these viable models. Some specific combinations of these parameters/constants show some interesting results in energy conditions which are quite different from those in General Relativity.
* In the case of the exponential f(R) gravity model, if we consider μ=+ve, R_0=-ve, then ρ+p_r ≥ 0, ρ+p_t ≥ 0 and ρ+p_r+2p_t ≥ 0 (for all possible combinations), but ρ≤ 0 (for all possible combinations). We can conclude that in the exponential f(R) gravity model, the exponential wormhole metric obeys the necessary NEC with F=df/dR<0, but partially violates the WEC and SEC. So μ=+ve, R_0=+ve is the preferred combination for obtaining a wormhole solution that violates the non-existence theorem in the presence of an insignificant amount of exotic matter.
* In the case of the Starobinsky f(R) gravity model, if l=+ve, a=+ve, R_0=+ve or l=+ve, a=-ve, R_0=-ve (for r>1.2) or l=-ve, a=-ve, R_0=+ve or l=-ve, a=+ve, R_0=-ve (for r>1.33), then ρ+p_r ≥ 0, ρ+p_t ≥ 0, ρ+p_r+2 p_t ≥ 0 (for all possible combinations), but ρ≤ 0 (for all possible combinations). So for these combinations of l, a and R_0, the NEC is respected outside the throat and F=df/dR>0, which signifies that the wormhole has a non-spherical symmetry and the throat is filled with a small amount of exotic matter.
* In the case of the Tsujikawa f(R) gravity model, if μ=+ve, R_0=+ve or μ=+ve, R_0=-ve, then ρ+p_r ≥ 0 and F=df/dR<0, while ρ+p_t≥ 0 and ρ+p_r+2p_t ≥ 0 (for all possible combinations), but for all possible combinations of μ and R_0, ρ≤ 0, and the WEC and SEC are violated throughout the space-time. So μ=+ve, R_0=+ve and μ=+ve, R_0=-ve are the preferred combinations for obtaining a traversable wormhole in the Tsujikawa f(R) gravity model with a negligible amount of exotic matter.
* In the case of the Gogoi-Goswami f(R) gravity model, ρ+p_r ≥ 0 and ρ+p_t ≥ 0 for μ=+ve, a=-ve, R_0=-ve, while ρ+p_r+2p_t ≥ 0 and ρ≤ 0 for all possible combinations of μ, a and R_0. So the NEC is respected for μ=+ve, a=-ve, R_0=-ve, while the WEC and SEC are violated for all possible combinations. Again, for this specific combination of μ, a and R_0, F=df/dR>0, which implies a non-spherically symmetric wormhole with normal matter present at the throat. If instead μ=+ve, a=+ve, R_0=+ve, the wormhole again has non-spherical symmetry, but this time the throat contains exotic matter, while the NEC is respected just outside the wormhole throat.
Thus, in comparison with General Relativity, the exponential wormhole metric obeys the necessary NEC at the throat in modified f(R) gravity for some particular combinations of the free parameters/constants. So the exponential wormhole metric can form a traversable wormhole geometry with a negligible amount of exotic matter.
§ ACKNOWLEDGEMENT
This work is supported by the University Grants Commission, Ministry of Education, Govt. of India (NFOBC No.F. 82-44/2020(SA-III)) under the NFOBC programme.
|
http://arxiv.org/abs/2307.06129v1 | 20230712123349 | Channel Estimation for Beyond Diagonal Reconfigurable Intelligent Surfaces with Group-Connected Architectures | [
"Hongyu Li",
"Yumeng Zhang",
"Bruno Clerckx"
] | eess.SP | [
"eess.SP",
"cs.IT",
"math.IT"
] |
We study channel estimation for a beyond diagonal reconfigurable intelligent surface (BD-RIS) aided multiple input single output system.
We first describe the channel estimation strategy based on the least square (LS) method, derive the mean square error (MSE) of the LS estimator, and formulate the BD-RIS design problem that minimizes the estimation MSE with unique constraints induced by group-connected architectures of BD-RIS.
Then, we propose an efficient BD-RIS design which theoretically guarantees to achieve the MSE lower bound.
Finally, we provide simulation results to verify the effectiveness of the proposed channel estimation scheme.
Beyond diagonal reconfigurable intelligent surfaces, channel estimation, least square.
§ INTRODUCTION
Beyond diagonal reconfigurable intelligent surface (BD-RIS) is a recently emerged technique, which goes beyond conventional RIS with diagonal phase shift matrices <cit.> and generates scattering matrices not limited to being diagonal, by introducing connections among RIS elements at the expense of increasing circuit complexity <cit.>.
Thanks to the flexible inter-element connections, BD-RIS has benefits in providing smarter wave manipulation and enlarging coverage <cit.>.
Existing works have been carried out for the modeling <cit.>, beamforming design <cit.>, and mode/architecture design <cit.> of BD-RIS.
The modeling of BD-RIS, involving the concept of group- and fully-connected architectures which are named according to the circuit topology of inter-element connections, is first proposed in <cit.>, followed by the discrete-value design <cit.> and optimal beamforming design <cit.>.
Inspired by the group/fully-connected architectures <cit.> and the concept of intelligent omni surface (IOS) with enlarged coverage <cit.>, BD-RIS with hybrid <cit.> and multi-sector modes <cit.> are proposed to achieve full-space coverage with enhanced performance.
To find a better performance-complexity trade-off of BD-RIS, other architectures have also been investigated <cit.>. Specifically, BD-RIS with dynamically group-connected architecture, where the grouping strategy is adaptive to the channel state information (CSI), is proposed in <cit.> and has better performance than fixed group-connected BD-RIS <cit.>.
Meanwhile, <cit.> constructs BD-RIS with non-diagonal phase shift matrices relying on asymmetric circuit design to achieve higher channel gain than conventional RIS.
In addition, BD-RIS with tree- and forest-connected architectures are proposed in <cit.>, which are proved to achieve the performance upper bound with minimum circuit complexity.
Motivation: The motivation of this work is twofold.
1) The enhanced performance achieved by BD-RIS with different architectures highly depends on accurate CSI, while none of the above-mentioned works <cit.> study the channel estimation/acquisition of BD-RIS.
2) Although the channel estimation protocol for conventional RIS scenarios <cit.> still works in BD-RIS cases, the design of the RIS pattern for uplink training should be re-considered due to the different constraints on the scattering matrix.
Contributions: The contributions of this work are summarized as follows.
First, we propose a novel channel estimation scheme for a BD-RIS aided multiple input single output (MISO) system relying purely on the variation of BD-RIS matrix.
Second, we derive the lower bound of the mean square error (MSE) of the least square (LS) estimator and propose an efficient BD-RIS design to achieve the MSE lower bound.
Third, we present simulation results to verify the effectiveness and accuracy of the proposed channel estimation scheme.
Notations:
(·)^T, (·)^*, (·)^H, and (·)^-1 denote the transpose, conjugate, conjugate-transpose, and inversion operations, respectively.
⊗ and ⊙ denote the Kronecker product and Hadamard product, respectively.
𝖻𝗅𝗄𝖽𝗂𝖺𝗀(·) is a block-diagonal matrix.
𝗏𝖾𝖼(·), 𝗋𝖺𝗇𝗄(·), and 𝗍𝗋(·), respectively, are the vectorization, rank, and trace of a matrix.
𝗏𝖾𝖼^-1(·) reshapes the vectorized matrix back into the original matrix.
𝖼𝗂𝗋𝖼𝗌𝗁𝗂𝖿𝗍(𝐚,N) rearranges vector 𝐚 by moving the final N entries to the first N positions.
𝗆𝗈𝖽(M,N) denotes M modulo N.
§ SYSTEM MODEL
We consider a narrowband system which consists of an N-antenna base station (BS), an M-antenna BD-RIS, and a single-antenna user, as illustrated in Fig. <ref>.
The M antennas of the BD-RIS are connected to an M-port group-connected reconfigurable impedance network <cit.>, where the M ports are uniformly divided into G groups, each containing M̅ = M/G ports connected to each other; we define ℳ̅={1,…,M̅}.
Mathematically, the BD-RIS with group-connected architecture has a block-diagonal scattering matrix Φ = 𝖻𝗅𝗄𝖽𝗂𝖺𝗀(Φ_1,…,Φ_G)∈ℂ^M× M with each block Φ_g∈ℂ^M̅×M̅ satisfying Φ_g^HΦ_g = 𝐈_M̅, ∀ g∈𝒢 = {1,…,G} <cit.>[When M̅=1, the BD-RIS has a single-connected architecture and boils down to conventional RIS and the proposed channel estimation scheme is the same as the passive channel estimation for conventional RIS <cit.>.].
In this work, we assume the direct user-BS channel is blocked and focus purely on the estimation of the cascaded user-RIS-BS channel[When the direct BS-user channel exists, the direct channel can be effectively obtained by turning off the BD-RIS and using conventional channel estimation strategies.].
Let 𝐆∈ℂ^N× M and 𝐡∈ℂ^M denote the channel between the BD-RIS and BS, and between the user and BD-RIS, respectively. The user-RIS-BS channel 𝐡_𝗋∈ℂ^N is
𝐡_𝗋 = 𝐆Φ𝐡 = ∑_g∈𝒢𝐆_gΦ_g𝐡_g = ∑_g∈𝒢𝐐_g𝗏𝖾𝖼(Φ_g), 𝐐_g ≜ 𝐡_g^T⊗𝐆_g∈ℂ^N×M̅^2,
where 𝐆_g = [𝐆]_:,(g-1)M̅+1:gM̅∈ℂ^N×M̅ and 𝐡_g = [𝐡]_(g-1)M̅+1:gM̅∈ℂ^M̅, ∀ g∈𝒢.
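The last equality in (<ref>) is the standard identity 𝗏𝖾𝖼(𝐀𝐗𝐁)=(𝐁^T⊗𝐀)𝗏𝖾𝖼(𝐗) applied per group. A minimal NumPy check (our own sketch with arbitrary sizes, random complex data, and column-major 𝗏𝖾𝖼):

import numpy as np

rng = np.random.default_rng(0)
N, M_bar = 4, 3
G_g = rng.standard_normal((N, M_bar)) + 1j * rng.standard_normal((N, M_bar))
h_g = rng.standard_normal(M_bar) + 1j * rng.standard_normal(M_bar)
# a random unitary block Phi_g (QR of a complex Gaussian matrix)
Phi_g = np.linalg.qr(rng.standard_normal((M_bar, M_bar))
                     + 1j * rng.standard_normal((M_bar, M_bar)))[0]

lhs = G_g @ Phi_g @ h_g
rhs = np.kron(h_g, G_g) @ Phi_g.flatten(order='F')  # (h_g^T kron G_g) vec(Phi_g)
assert np.allclose(lhs, rhs)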
Expression (<ref>) indicates that the scattering matrix of BD-RIS with group-connected architecture is related to the cascaded channel 𝐐 = [𝐐_1,…,𝐐_G]∈ℂ^N×M̅^2G. This motivates us to directly estimate 𝐐 instead of separate channels and perform the beamforming design with the knowledge of 𝐐 for data transmission, which results in the following protocol <cit.> with each transmission frame divided into three phases as illustrated in Fig. <ref>.
Phase 1: The BS estimates 𝐐 by uplink training, where the pilots are consecutively transmitted from the user and reflected by the BD-RIS with varied scattering matrix Φ during the training period. In this phase, the difference compared to conventional RIS comes from the design of the varied RIS pattern due to the different constraint of the scattering matrix, whose details will be given in Section <ref>.
Phase 2: The BS optimizes the transmit precoder and BD-RIS matrix based on the estimated cascaded channel, and feeds back the results to the user and BD-RIS.
This phase is also different from conventional RIS cases due to the more general unitary constraint of the BD-RIS.
Specifically, with determined cascaded channel 𝐐, the received power maximization problem to jointly design the transmit precoder 𝐰∈ℂ^N and BD-RIS matrix Φ for the downlink MISO is
max_Φ,𝐰 |∑_g∈𝒢𝗏𝖾𝖼^H(Φ_g)𝐐_g^H𝐰|^2
s.t. Φ_g^HΦ_g = 𝐈_M̅, ∀ g∈𝒢, 𝐰^2 ≤ P,
where P denotes the transmit power.
The main difficulty in solving problem (<ref>) lies in the non-convex constraint (<ref>), though strategies in <cit.> can be used to tackle the problem[In this paper, we focus on Phase 1 and leave Phases 2 and 3 for the journal extension.].
Phase 3: The BS performs the downlink data transmission based on the optimized precoder and BD-RIS matrix.
Remark 1.
The feasibility of this protocol is supported by the following aspects. 1) The communication occurs with time division duplex (TDD) and the reciprocity between the uplink and downlink channel exists. 2) The user-RIS-BS channel remains approximately constant within one transmission frame. 3) The knowledge of the cascaded channel is sufficient for beamforming design as formulated in Phase 2.
§ CHANNEL ESTIMATION FOR BD-RIS
In this section, we focus on Phase 1 and describe the proposed channel estimation strategy based on the LS method, formulate the BD-RIS design problem to minimize the MSE of the LS estimator, and provide the solution.
§.§ LS Based Channel Estimation
The channel estimation strategy is described as follows. Assuming the user sends pilot symbol x_t∈ℂ, |x_t|=1 at time slot t, ∀ t∈𝒯 = {1,…,T}, the signal received at the BS is
𝐲_t = √(P_u)∑_g∈𝒢𝐐_g𝗏𝖾𝖼(Φ_t,g)x_t + 𝐧_t
= √(P_u)𝐐ϕ_t + 𝐧_t, ∀ t∈𝒯,
where P_u denotes the transmit power at the user, Φ_t,g∈ℂ^M̅×M̅ denotes the g-th block of the BD-RIS matrix at time slot t, and 𝐧_t ∈ℂ^N with 𝐧_t∼𝒞𝒩(0,σ^2𝐈_N), ∀ t∈𝒯 denotes the noise.
The vector ϕ_t is defined as ϕ_t = [𝗏𝖾𝖼^T(Φ_t,1),…,𝗏𝖾𝖼^T(Φ_t,G)]^Tx_t∈ℂ^GM̅^2, ∀ t∈𝒯.
To uniquely estimate the cascaded channel 𝐐 in (<ref>), T pilot symbols should be transmitted from the user. Without loss of optimality, we assume x_t = 1, ∀ t∈𝒯 and combine the data from such pilots together, which yields
𝐘 = [𝐲_1,…,𝐲_T] = √(P_u)𝐐Φ + 𝐍,
where Φ≜[ϕ_1,…,ϕ_T]∈ℂ^GM̅^2× T and 𝐍≜[𝐧_1,…,𝐧_T]∈ℂ^N× T.
The simplest way to estimate 𝐐 is to use the LS method, yielding the LS estimator of 𝐐 as
𝐐̂ = (√(P_u))^-1𝐘Φ^† = (√(P_u))^-1𝐘Φ^H(ΦΦ^H)^-1,
with T≥ GM̅^2 to guarantee the recovery of 𝐐.
Then the MSE of the LS estimator is
e_𝐐 = 𝔼{‖𝐐̂ - 𝐐‖_F^2} = Nσ^2/P_u𝗍𝗋((ΦΦ^H)^-1),
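For completeness, the second equality follows in one step: substituting (<ref>) into the estimator gives 𝐐̂ - 𝐐 = (√(P_u))^-1𝐍Φ^†, and since the noise is i.i.d. with 𝔼{𝐍^H𝐍}=Nσ^2𝐈_T,
e_𝐐 = P_u^-1𝔼{𝗍𝗋(Φ^†H𝐍^H𝐍Φ^†)} = Nσ^2/P_u𝗍𝗋(Φ^†HΦ^†) = Nσ^2/P_u𝗍𝗋((ΦΦ^H)^-1),
where the last step uses Φ^† = Φ^H(ΦΦ^H)^-1.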
The expression (<ref>) implies that the MSE of channel estimation depends purely on the value of the matrix Φ, so a proper design of Φ is required. We therefore formulate the following MSE minimization problem
min_Φ 𝗍𝗋((ΦΦ^H)^-1)
s.t. Φ_t,g^HΦ_t,g = 𝐈_M̅, ∀ t∈𝒯,∀ g∈𝒢,
𝗋𝖺𝗇𝗄(Φ) = GM̅^2,
where we set T to T^min=GM̅^2 to minimize the overhead without degrading the MSE performance, which makes Φ a full-rank square matrix.
Problem (<ref>) is difficult to solve due to the inverse operation in the objective and the non-convex constraints of group-connected BD-RIS. In the following subsection, we will first simplify the objective function and then propose an efficient approach for optimal Φ.
§.§ Solution to Problem (<ref>)
We start by deriving the lower bound of the objective (<ref>) with constraint (<ref>) based on the following lemma.
Lemma 1.
The objective (<ref>) has the lower bound 𝗍𝗋((ΦΦ^H)^-1) ≥M̅, where equality is achieved when ΦΦ^H = Φ^HΦ= M𝐈_GM̅^2.
Proof.
The objective (<ref>) has the following lower bound
𝗍𝗋((ΦΦ^H)^-1) = ∑_i=1^GM̅^2[(ΦΦ^H)^-1]_i,i(a)≥∑_i=1^GM̅^21/[ΦΦ^H]_i,i,
where the equality of (a) can be attained when ΦΦ^H is a diagonal matrix <cit.>. Additionally, we have
∑_i=1^GM̅^21/[ΦΦ^H]_i,i (b)≥G^2M̅^4/∑_i=1^GM̅^2[ΦΦ^H]_i,i
= G^2M̅^4/𝗍𝗋(ΦΦ^H) = G^2M̅^4/𝗍𝗋(Φ^HΦ)(c)=M̅,
where (b) holds due to the relationship between the harmonic mean and the arithmetic mean with equality achieved when [ΦΦ^H]_1,1=…=[ΦΦ^H]_GM̅^2,GM̅^2; (c) holds due to the constraint (<ref>) which yields [Φ^HΦ]_1,1 = … = [Φ^HΦ]_GM̅^2,GM̅^2 = M. Combining (<ref>) and (<ref>), we can achieve the lower bound 𝗍𝗋((ΦΦ^H)^-1) = M̅ with ΦΦ^H = Φ^HΦ=M𝐈_GM̅^2, which completes the proof.
□
With Lemma 1, we can transform problem (<ref>) into the following feasibility-check problem
find Φ∈ℂ^GM̅^2× GM̅^2
s.t. ΦΦ^H = M𝐈_GM̅^2,
(<ref>).
Given the large dimension of matrix Φ, it is difficult to directly find such a matrix to simultaneously satisfy the two constraints in problem (<ref>). To reduce the dimension of designed variables and simplify the design of matrix Φ, we give the following lemma to further transform problem (<ref>).
Lemma 2.
The feasible matrix Φ from problem (<ref>) can be constructed as Φ = 𝐗⊗Φ̅, where 𝐗∈ℂ^G× G is obtained by solving the following problem:
find 𝐗∈ℂ^G× G
s.t. 𝐗𝐗^H = G𝐈_G,
|[𝐗]_g,g'| = 1, ∀ g,g'∈𝒢.
Φ̅∈ℂ^M̅^2×M̅^2 is obtained by solving the following problem:
find Φ̅∈ℂ^M̅^2×M̅^2
s.t. Φ̅Φ̅^H = M̅𝐈_M̅^2,
(𝗏𝖾𝖼^-1([Φ̅]_:,m))^H 𝗏𝖾𝖼^-1([Φ̅]_:,m) = 𝐈_M̅, ∀ m∈ℳ̅̅̅,
where ℳ̅̅̅ = {1,…,M̅^2}.
Proof.
With 𝐗 satisfying (<ref>), (<ref>) and Φ̅ satisfying (<ref>), (<ref>), we have Φ = 𝐗⊗Φ̅ such that
ΦΦ^H = (𝐗⊗Φ̅)(𝐗^H⊗Φ̅^H) = (𝐗𝐗^H)⊗(Φ̅Φ̅^H) = (G𝐈_G)⊗(M̅𝐈_M̅^2) = M𝐈_GM̅^2,
which aligns with (<ref>).
In addition, each block of Φ, i.e., Φ_g,g' = [Φ]_(g-1)M̅^2+1:gM̅^2,(g'-1)M̅^2+1:g'M̅^2 = [𝐗]_g,g'Φ̅, ∀ g,g'∈𝒢, is constructed by columns [Φ_g,g']_:,m = 𝗏𝖾𝖼(Φ_(g'-1)M̅^2+m,g) = 𝗏𝖾𝖼(Φ_t,g) satisfying (<ref>), that is,
Φ_t,g^HΦ_t,g =(𝗏𝖾𝖼^-1([Φ_g,g']_:,m))^H 𝗏𝖾𝖼^-1([Φ_g,g']_:,m)
= |[𝐗]_g,g'|^2×(𝗏𝖾𝖼^-1([Φ̅]_:,m))^H 𝗏𝖾𝖼^-1([Φ̅]_:,m)=𝐈_M̅.
The proof is completed.
□
With Lemma 2, we decouple problem (<ref>) into two reduced-dimensional problems (<ref>) and (<ref>).
The feasible solution to problem (<ref>) can be easily obtained by using a G× G discrete Fourier transform (DFT) matrix 𝐅_G or a Hadamard matrix 𝐃_G, i.e., 𝐗 = 𝐅_G or 𝐗 = 𝐃_G. However, the solution to problem (<ref>) is not that straightforward since we need to find a matrix Φ̅ such that each column constructs a unitary matrix, i.e., constraint (<ref>) and that different columns are orthogonal to each other, i.e., constraint (<ref>).
Therefore, the matrix Φ̅ should include two-dimensional orthogonality, which motivates us to construct Φ̅ with two orthogonal bases.
This brings the following theorem.
Theorem 1.
The matrix Φ̅ satisfying (<ref>), (<ref>) can be constructed such that each column, i.e., [Φ̅]_:,(m-1)M̅+n = ϕ̅_m,n, ∀ m,n∈ℳ̅, has the following structure:
ϕ̅_m,n = 𝖼𝗂𝗋𝖼𝗌𝗁𝗂𝖿𝗍(𝗏𝖾𝖼(𝐙_1),(n-1)M̅)⊙([𝐙_2]_:,m⊗1_M̅),
where 𝐙_1∈ℂ^M̅×M̅ is a scaled unitary matrix, i.e., 𝐙_1^H𝐙_1=α_1𝐈_M̅, 𝐙_2∈ℂ^M̅×M̅ is a scaled unitary matrix whose entries have identical modulus, i.e., 𝐙_2^H𝐙_2=α_2𝐈_M̅, |[𝐙_2]_m,n|=√(α_2/M̅), ∀ m,n∈ℳ̅, and α_1α_2 = M̅.
Proof.
We start by proving that ϕ̅_m,n satisfies (<ref>). To this end, we partition ϕ̅_m,n = [ϕ̅_m,n,1^H,ϕ̅_m,n,2^H,…,ϕ̅_m,n,M̅^H]^H with ϕ̅_m,n,i = [ϕ̅_m,n]_(i-1)M̅+1:iM̅ = [𝐙_2]_i,m[𝐙_1]_:,𝗆𝗈𝖽(i-n,M̅)+1 and calculate
ϕ̅_m,n,i^Hϕ̅_m,n,i' = z_2,i,i',m· z_1,i,i',n,
where z_2,i,i',m≜[𝐙_2]_i,m^*[𝐙_2]_i',m and z_1,i,i',n≜[𝐙_1]_:,𝗆𝗈𝖽(i-n,M̅)+1^H[𝐙_1]_:,𝗆𝗈𝖽(i'-n,M̅)+1,
yielding the following two conditions:
1) when i=i', we have z_1,i,i,n = α_1 and z_2,i,i,m = α_2/M̅ such that ϕ̅_m,n,i^Hϕ̅_m,n,i=1;
2) when i i', we have z_1,i,i',n = 0 such that ϕ̅_m,n,i^Hϕ̅_m,n,i' = 0. Therefore, (<ref>) is guaranteed.
We next prove Φ̅ constructed by ϕ̅_m,n satisfies (<ref>). To this end, we calculate
ϕ̅_m,n^Hϕ̅_m',n'
= ∑_i=1^M̅ϕ̅_m,n,i^Hϕ̅_m',n',i
= ∑_i=1^M̅ z_2',m,m',i· z_1',n,n',i,
where z_2',m,m',i≜[𝐙_2]_i,m^*[𝐙_2]_i,m' and z_1',n,n',i≜[𝐙_1]_:,𝗆𝗈𝖽(i-n,M̅)+1^H[𝐙_1]_:,𝗆𝗈𝖽(i-n',M̅)+1,
yielding the following three conditions:
1) when m=m' and n=n', we have z_1',n,n,i = α_1 and z_2',m,m,i = α_2/M̅ such that ϕ̅_m,n^Hϕ̅_m,n = M̅;
2) when n n', we have z_1',n,n',i =0 such that ϕ̅_m,n^Hϕ̅_m',n' = 0;
3) when m m' and n=n', we have z_1',n,n,i = α_1 such that ϕ̅_m,n^Hϕ̅_m',n' = α_1∑_i=1^M̅z_2',m,m',i = α_1[𝐙_2]_:,m^H[𝐙_2]_:,m' = 0. Therefore, (<ref>) is guaranteed. The proof is completed.
□
According to Theorem 1, we can simply choose 𝐙_1 = 𝐅_M̅ or 𝐙_1 = 𝐃_M̅ and 𝐙_2 = 1/√(M̅)𝐅_M̅ or 𝐙_2 = 1/√(M̅)𝐃_M̅ to construct Φ̅, and further construct Φ = 𝐗⊗Φ̅ with 𝐗 = 𝐅_G or 𝐗=𝐃_G. In this way, we achieve the MSE lower bound of the LS estimation, i.e., e_𝐐^min = Nσ^2M̅/P_u, and avoid the time-consuming inversion operation since Φ^† = 1/MΦ^H.
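To make this concrete, the following NumPy sketch (our own code; zero-based indexing, DFT bases, and the illustrative sizes G=2, M̅=3) instantiates Theorem 1 and numerically verifies both the scaled orthogonality ΦΦ^H=M𝐈 of Lemma 1 and the per-slot unitarity of every block Φ_t,g:

import numpy as np

def dft(n):
    # n x n unnormalized DFT matrix: unit-modulus entries, F @ F^H = n * I
    k = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(k, k) / n)

def phi_bar(M_bar):
    # Theorem 1 with Z1 = F_{M_bar} (alpha1 = M_bar) and
    # Z2 = F_{M_bar}/sqrt(M_bar) (alpha2 = 1), so alpha1*alpha2 = M_bar.
    Z1 = dft(M_bar)
    Z2 = dft(M_bar) / np.sqrt(M_bar)
    cols = []
    for m in range(M_bar):                 # column index of Z2
        for n in range(M_bar):             # circular-shift index
            a = np.roll(Z1.flatten(order='F'), n * M_bar)  # circshift(vec(Z1), n*M_bar)
            b = np.kron(Z2[:, m], np.ones(M_bar))          # [Z2]_{:,m} kron 1_{M_bar}
            cols.append(a * b)                             # Hadamard product
    return np.column_stack(cols)           # M_bar^2 x M_bar^2

G, M_bar = 2, 3
Phi = np.kron(dft(G), phi_bar(M_bar))      # Phi = X kron Phi_bar with X = F_G
M, T = G * M_bar, G * M_bar**2

# (i) Phi Phi^H = M * I: the MSE lower bound of Lemma 1 is attained
assert np.allclose(Phi @ Phi.conj().T, M * np.eye(T))

# (ii) every training slot t gives unitary BD-RIS blocks Phi_{t,g}
for t in range(T):
    for g in range(G):
        blk = Phi[g * M_bar**2:(g + 1) * M_bar**2, t].reshape(M_bar, M_bar, order='F')
        assert np.allclose(blk.conj().T @ blk, np.eye(M_bar))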
§.§ Discussion
The minimum overhead to estimate 𝐐 is given by T^min = GM̅^2, which depends both on the number of groups G and on the group size M̅ of the group-connected BD-RIS. More specifically, T^min grows faster with M̅ than with G. Meanwhile, the estimation error e^min_𝐐 increases with M̅. These two observations indicate that the proposed estimation method is more appealing with a small group size M̅.
§ PERFORMANCE EVALUATION
In this section, we perform simulation results to verify the effectiveness of the proposed channel estimation scheme.
The simulation parameters are set as follows.
The BS is equipped with N=4 antennas. The BD-RIS is equipped with M = 32 antennas.
Both the BS-RIS and RIS-user channels are assumed to follow Rayleigh fading for the small-scale fading and a distance-dependent model for the large-scale fading, i.e., ζ_o=ζ_0(d_o/d_0)^-ε, ∀ o∈{BI,IU}. ζ_0 = -30 dB is the signal attenuation at the reference distance d_0 = 1 m. The BS-RIS and RIS-user distances are set as d_BI = 50 m and d_IU = 10 m, respectively. The path loss exponent is set as ε = 2.2. The noise power is set as σ^2 = -90 dBm.
We plot the MSE versus transmit power for conventional (single-connected) RIS and BD-RIS with group- and fully-connected (M̅=M) architectures in Fig. <ref>, from which we have the following observations.
First, the matrix Φ with DFT/Hadamard bases achieves exactly the same MSE performance as the theoretical lower bound, which verifies Lemma 2 and Theorem 1.
Second, the MSE increases with the group size M̅ of group-connected BD-RIS, which also verifies Lemma 1. This observation indicates that there is a trade-off between the achievable performance and channel estimation accuracy for BD-RIS.
Third, the MSE decreases with transmit power, which indicates that larger transmit power leads to smaller channel estimation error.
Fourth, the matrix Φ with DFT/Hadamard bases achieves better MSE performance than that constructed by random unitary matrices, which demonstrates the effectiveness of the proposed BD-RIS design for channel estimation.
§ CONCLUSION
In this paper, we propose a novel channel estimation scheme for a BD-RIS aided MISO system. Specifically, the BD-RIS has a group-connected architecture, which, mathematically, generates unique constraints and complicates the BD-RIS design and thus the channel estimation. To tackle this difficulty, we first derive the MSE lower bound of the LS estimator. Then, we propose an efficient BD-RIS design to achieve the MSE lower bound.
Finally, simulation results demonstrate the superiority of the proposed design.
In the near future, it is worth investigating more efficient channel estimation schemes for BD-RIS, with smaller estimation error and lower overhead.
|
http://arxiv.org/abs/2307.04386v1 | 20230710074506 | Counterfactual Explanation for Fairness in Recommendation | [
"Xiangmeng Wang",
"Qian Li",
"Dianer Yu",
"Qing Li",
"Guandong Xu"
] | cs.IR | [
"cs.IR"
] |
Equal contribution.
[email protected]
0000-0003-3643-3353
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[1]
Corresponding author: [email protected]
[email protected]
0000-0002-8308-9551
School of Electrical Engineering Computing and Mathematical Sciences, Curtin University
Perth
Australia
[email protected]
0000-0001-6376-9667
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
[email protected]
0000-0003-3370-471X
Hong Kong Polytechnic University
Hong Kong
Corresponding author: [email protected]
[email protected]
0000-0003-4493-6663
Data Science and Machine Intelligence Lab, University of Technology Sydney
Sydney
Australia
Fairness-aware recommendation eliminates discrimination issues to build trustworthy recommendation systems.
Explaining the causes of unfair recommendations is critical, as it promotes fairness diagnostics, and thus secures users' trust in recommendation models.
Existing fairness explanation methods suffer high computation burdens due to the large-scale search space and the greedy nature of the explanation search process.
Besides, they perform score-based optimizations with continuous values, which are not applicable to discrete attributes such as gender and race.
In this work, we adopt the novel paradigm of counterfactual explanation from causal inference to explore how minimal alterations in explanations change model fairness, to abandon the greedy search for explanations.
We use real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes.
We propose a novel Counterfactual Explanation for Fairness (CFairER) that generates attribute-level counterfactual explanations from HINs for recommendation fairness.
Our CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with an attentive action pruning reducing the search space of candidate counterfactuals.
The counterfactual explanations help to provide rational and proximate explanations for model fairness, while the attentive action pruning narrows the search space of attributes.
Extensive experiments demonstrate our proposed model can generate faithful explanations while maintaining favorable recommendation performance.
We release our code at <https://anonymous.4open.science/r/CFairER-anony/>.
[500]Computing methodologies Causal reasoning and diagnostics
[500]Computing methodologies Reinforcement learning
[500]Information systems Personalization
Counterfactual Explanation for Fairness in Recommendation
Guandong Xu
August 12, 2023
=========================================================
§ INTRODUCTION
Recommendation systems (RS), as information filtering tools, have become a core component of online services, e.g., e-commerce <cit.>.
It helps users discover their preferred items and benefit content providers to profit from item exposures.
Despite the huge benefits, fairness issues, which refer to unfair allocations (i.e., exposures) of recommended items <cit.> caused by, e.g., gender discrimination, have attracted increasing attention in RS.
Fairness-aware recommendation <cit.> has emerged as a promising solution to prevent unintended discrimination and unfairness in RS.
It aims to find feasible algorithmic approaches that reduce the fairness disparity of recommendation results.
Explaining why fairness disparity appears, i.e., what causes unfair recommendation results, would enhance the design of fairness-aware recommendation approaches by promoting model transparency and tracking unfair factors.
There are a few fairness explanation studies in the literature, which are mainly categorized as feature-based and aspect-based methods.
Feature-based methods estimate the contribution scores of numerical features that impact model fairness.
For instance, Begley et al. <cit.> explore fairness explanations based on Shapley value estimation for the classification task.
They calculate Shapley values of every input features to reflect their significance and then generate explanations based on calculated values.
However, this method is not applicable to deep recommendation models (e.g., neural networks <cit.>), as the high complexity of Shapley value estimation becomes the major burden when input features are high-dimensional and sparse.
Another branch of aspect-based methods mainly perturbs user/item aspect scores and optimizes an explanation model to find perturbed aspects that affect the model fairness as explanations.
For example, Ge et al. <cit.> perturb aspect scores within pre-defined user-aspect and item-aspect matrices and feed the perturbed matrices into a recommendation model.
Those perturbed aspects that alter the fairness disparity of the recommendation model are considered aspect-based explanations.
However, the perturbation space grows exponentially as the number of aspects increases, resulting in a large-scale search space to seek explanations.
The above fairness explanation methods suffer below issues:
1) These feature/aspect-based methods usually incur high computational costs due to the high dimensionality of search space and ultimately result in sub-optimal explanations.
Besides, these methods are presented with the greedy nature of the explanation search process.
They optimize explanation models using greedy feature/aspect scores as significance criteria and select top features/aspects as explanations, which might have the risk of introducing pseudo-explanations.
2) These score-based optimizations can only deal with continuous attributes and thus are not well-suited for handling discrete attributes.
For example, assigning a continuous value, such as gender=0.19, to the discrete gender attribute is impractical in constructing explanations and provides no valuable clue to improve the explanation.
Worse still, discrete attributes are frequently used in real-world recommendation models, as user and item profiles for training models are often generated through data tagging <cit.> on discrete attributes.
For instance, movie recommendations <cit.> usually rely on movies tagged with discrete attributes such as genre, language, and release location.
Consequently, score-based optimizations have limited capability in handling discrete attributes that are frequently encountered in recommendation scenarios.
Unlike previous works, we resort to counterfactual explanations <cit.> derived from causal inference to tackle the above issues.
Counterfactual explanations address the fundamental question: what the model fairness would be if a minimal set of factors (e.g., user/item features) had been different <cit.>.
In other words, they provide “what-if” explanations to determine the most vital and essential (i.e., minimal) factors that change model fairness.
Unlike existing feature/aspect-based methods with greedy explanations, counterfactual explanations have the advantage of always being minimal w.r.t. the generated explanations and are faithful to model fairness changes.
Moreover, we leverage real-world attributes from Heterogeneous Information Networks (HINs) <cit.>, for counterfactual reasoning when dealing with discrete attributes.
In contrast to value-based features and aspects, real-world attributes residing in HINs are presented as discrete nodes, with edges representing their connections.
By utilizing attributes from HINs, we can overcome the limitation of score-based optimizations to directly measure whether the removal of specific attributes changes the model's fairness.
Following the above intuition, we propose to generate attribute-level counterfactual explanations for fairness from a given HIN.
We posit a novel definition of counterfactual explanation for fairness - a minimal set of attributes from the HIN that changes model fairness disparity.
We use a toy example in Figure <ref> to illustrate our idea.
Given a recommendation i_1 for the user u_1 and an external HIN carrying their attributes, we want to know why i_1 causes discrimination in recommendation results.
The counterfactual explanation performs “what-if” reasoning by altering the attributes of u_1 and i_1 and checking the fairness of the recommendation results.
Both E_1 and E_2 are valid candidate explanations since they alter fairness disparities of recommendations (i.e., i_2, i_3) from 0.90 to 0.19.
To determine which attributes are the primary reason for unfairness, the counterfactual explanation will uncover the minimal attribute changes, i.e., E_2, instead of utilizing attribute combinations in E_1.
Thus, we could infer E_2 is the most vital reason for model unfairness.
Besides, since a counterfactual explanation E_2 is minimal, it only reveals the essential attributes (i.e., “Female”) that effectively explain unfairness, while discarding the irrelevant (i.e., pseudo) explanations, i.e., “U.S” and “Discount” in E_1.
We therefore propose a novel Counterfactual Explanation for Fairness (CFairER) within an off-policy reinforcement learning environment to find optimal attribute-level counterfactual explanations.
Particularly, we focus on generating attribute-level counterfactual explanations for item exposure unfairness to promote the fair allocation of user-preferred but less exposed items.
Note that the proposed approach is general and can be utilized in different recommendation scenarios that involve different fairness definitions.
Specifically, we use a reinforcement learning agent in CFairER to optimize a fairness explanation policy by uniformly exploring candidate counterfactuals from a given HIN.
We also devise attentive action pruning over the HIN to reduce the search space of reinforcement learning.
Finally, our CFairER optimizes the explanation policy using an unbiased counterfactual risk minimization objective, resulting in accurate attribute-level counterfactual explanations for fairness.
The contributions of this work are:
* We make the first attempt to leverage rich attributes in a Heterogeneous Information Network to offer attribute-level counterfactual explanations for recommendation fairness.
* We propose an off-policy learning framework to identify optimal counterfactual explanations,
which is guided by an attentive action pruning to reduce the search space.
* We devise a counterfactual risk minimization for off-policy correction, so as to achieve unbiased policy optimization.
* Comprehensive experiments show the superiority of our method in generating trustworthy explanations for fairness while preserving satisfactory recommendation performance.
§ RELATED WORK
§.§ Fairness Explanation for Recommendation
Recommender systems have long dealt with major concerns of recommendation unfairness, which profoundly harms user satisfaction <cit.> and stakeholder benefits <cit.>.
Recent works on fairness-aware recommendation mainly discuss two primary topics, i.e., user-side fairness <cit.> and item-side fairness <cit.>.
User-side fairness concerns whether the recommendation is fair to different users/user groups, e.g., retaining equivalent accuracy or recommendation explainability.
Relevant approaches attribute the causes of user-side unfairness to discrimination factors, such as sensitive features (e.g., gender <cit.>, race <cit.>) and user inactiveness <cit.>, etc.
They mainly propose fairness metrics to constraint recommendation models (e.g., collaborative filtering <cit.>) to produce fair recommendations.
For example, Yao et al. <cit.> study the unfairness of collaborative filtering (CF)-based recommenders on gender-imbalanced data.
They propose four metrics to assess different types of fairness, then add these metrics as constraints to the CF model learning objective to produce fair recommendations.
Li et al. <cit.> investigate the unfair recommendation between active and inactive user groups, and provide a re-ranking approach to mitigate the activity unfairness by adding constraints over evaluation metrics of ranking.
As modern content providers are more concerned about user privacy, it is generally not easy to access sensitive user features for the recommendation <cit.>.
Meanwhile, users often prefer not to disclose personal information that raises discrimination <cit.>.
Thus, another topic of item-side fairness-aware recommendation <cit.> is interested in examining whether the recommendation treats items fairly, e.g., similar ranking prediction errors for different items, fair allocations of exposure to each item.
For instance,
Abdollahpouri et al. <cit.> address item exposure unfairness in learning-to-rank (LTR) recommenders.
They include a fairness regularization term in the LTR objective function, which controls the recommendations favored toward popular items.
Ge et al. <cit.> consider the dynamic fairness of item exposure due to changing group labels of items.
They calculate the item exposure unfairness with a fairness-related cost function.
The cost function is merged into a Markov Decision Process to capture the dynamic item exposure for recommendations.
Liu et al. <cit.> focus on item exposure unfairness in interactive recommender systems (IRS).
They propose a reinforcement learning method to maintain a long-term balance between accuracy and exposure fairness in IRS.
Despite the great efforts, fairness-aware recommendations mitigate user and item unfairness in a black-box manner but do not explain why the unfairness appears.
Understanding the “why” is desirable for both model transparency <cit.> and facilitates data curation to remove unfair factors <cit.>.
Limited pioneering studies are conducted to explain fairness.
Begley et al. <cit.> estimate Shapley values of input features to search which features contribute more to the model unfairness.
Ge et al. <cit.> develop an explainable fairness model for recommendation to explain which item aspects influence item exposure fairness.
They perform perturbations on item aspect scores, then apply perturbed aspect scores on two pre-defined matrices to observe fairness changes.
These prior efforts suffer from major limitations:
1) The high computational burden caused by the large-scale search space and the greedy nature of the explanation search process.
2) They generate explanations by feature <cit.> or aspect <cit.> scores, which do not apply to discrete attributes such as gender and race.
Our work conducts counterfactual reasoning to seek minimal sets of attributes as explanations.
We also reduce the large search space by attentive action pruning in the off-policy learning environment.
Meanwhile, we consider explaining recommendation unfairness based on attributes from a Heterogeneous Information Network, which is expected to be wildly applicable.
§.§ Heterogeneous Information Network in Recommendation
Heterogeneous Information Network (HIN) is a powerful structure that allows for the heterogeneity of its recorded data, i.e., various types of attributes, thus providing rich information to empower recommendations <cit.>.
HINs have been wildly adopted in recommendation models to boost performance;
representative works cover context-based filtering (e.g., SemRec <cit.>, HERec <cit.>) and knowledge-based systems (e.g., MCrec <cit.>, HAN <cit.>).
For instance, HERec <cit.> embeds meta-paths within a HIN as dense vectors, then fuses these HIN embeddings with user and item embeddings to augment the semantic information for recommendations.
MCrec <cit.> leverages a deep neural network to model meta-path-based contextual embeddings and propagates the context to user and item representations with a co-attention mechanism.
Those recommendation models observe promising improvements by augmenting contextual and semantic information given by HINs.
Despite the great efforts, prior works do not consider using the HIN to explain unfair factors in recommendations.
Novel to this work, we first attempt to leverage rich attributes in a HIN to provide counterfactual explanations for item exposure fairness.
§.§ Counterfactual Explanation
Counterfactual explanations have been considered as satisfactory explanations <cit.> and elicit causal reasoning in humans <cit.>.
Works on counterfactual explanations have been proposed very recently to improve the explainability of recommendations.
Xiong et al. <cit.> propose a constrained feature perturbation on item features and consider the perturbed item features as explanations for ranking results.
Ghazimatin et al. <cit.> perform random walks over a Heterogeneous Information Network to look for minimal sets of user action edges (e.g., click) that change the PageRank scores.
Tran et al. <cit.> identify minimal sets of user actions that update the parameters of neural models.
Our work differs from prior works on counterfactual explanations by two key points:
1) In terms of problem definition, they generate counterfactual explanations to explain user behaviors (e.g., click <cit.> ) or recommendation (e.g., ranking <cit.>) results.
Our method generates counterfactual explanations to explain which attributes affect recommendation fairness.
2) In terms of technique, our method formulates counterfactual reasoning as reinforcement learning, which can deal with ever-changing item exposure unfairness.
§ PRELIMINARY
We first introduce the Heterogeneous Information Network that offers real-world attributes for fairness explanation learning.
We then give the key terminologies, including fairness disparity evaluation and counterfactual explanation for fairness.
§.§ Heterogeneous Information Network
Creating fairness explanations requires auxiliary attributes containing possible factors (e.g., user gender) that affect recommendation fairness (cf. Figure <ref>).
Heterogeneous Information Network (HIN) has shown its power in modeling various types of attributes, e.g., user social relations, item brand.
In particular, suppose we have the logged data that records users’ historical behaviors (e.g., clicks) in the recommendation scenario.
Let 𝒰∈ℝ^M, ℐ∈ℝ^N denote the sets of users and items, respectively.
We can define a user-item interaction matrix Y={y_uv| u ∈𝒰, v ∈ℐ} according to the logged data.
We also have additional attributes from external resources that profile users and items, e.g., users' genders, items' genres.
The connections between all attributes and users/items are absorbed in the relation set ℰ.
Those attributes, with their connections with user-item interactions, are uniformly formulated as a HIN.
Formally, a HIN is defined as 𝒢=(𝒱^',ℰ^'), where 𝒱^'=𝒰∪ℐ∪𝒱_U ∪𝒱_I, and ℰ^'= {𝕀(y_uv)}∪ℰ.
𝕀(·) is an edge indicator that denotes the observed edge between user u and item v when y_uv∈Y=1.
𝒱_U and 𝒱_I are attribute sets for users and items, respectively.
Each node n ∈𝒱^' and each edge e ∈ℰ^' are mapped into specific types through node type mapping function: ϕ: 𝒱^'→𝒦 and edge type mapping function: ψ: ℰ^'→𝒥.
𝒢 maintains heterogeneity, i.e., |𝒦|+|𝒥| > 2.
§.§ Fairness Disparity
We consider explaining the item exposure (un)fairness in recommendations.
We first split items in historical user-item interactions into head-tailed (i.e., popular) group G_0 the long-tailed group G_1 [Following <cit.>, we consider the top 20% items with the most frequent interactions with users as G_0, while the remaining 80% belongs to G_1.].
Following previous works <cit.>, we use demographic parity (DP) and exact-K (EK) defined on item subgroups to measure whether a recommendation result is fair.
In particular, DP requires that each item has the same likelihood of being classified into G_0 and G_1.
EK regulates the item exposure across each subgroup to remain statistically indistinguishable from a given maximum α.
By evaluating the deviation of recommendation results from the two fairness criteria, we can calculate the fairness disparity, i.e., to what extent the recommendation model is unfair.
Formally, giving a recommendation result H_u, K, the fairness disparity Δ(H_u, K) of H_u, K is:
Δ(H_u, K)=|Ψ_DP|+λ|Ψ_EK|,
Ψ_DP=|G_1| · Exposure(G_0| H_u, K)-|G_0| · Exposure(G_1| H_u, K),
Ψ_EK= α· Exposure(G_0| H_u, K)- Exposure(G_1| H_u, K)
where Δ(·) is the fairness disparity metric that quantifies model fairness status.
λ is the trade-off parameter between DP and EK.
Exposure(G_j| H_u, K) is the number of exposed items of H_u, K that fall in G_j, j ∈{0,1}.
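For concreteness, a minimal sketch of this metric (our own helper; the values of λ and α below are placeholders, not the ones used in our experiments):

def fairness_disparity(rec_items, G0, G1, lam=0.5, alpha=0.25):
    # Delta(H_{u,K}) = |Psi_DP| + lam * |Psi_EK| as defined above.
    # rec_items: the Top-K list H_{u,K}; G0 / G1: head / long-tail item-id sets.
    exp0 = sum(v in G0 for v in rec_items)   # Exposure(G_0 | H_{u,K})
    exp1 = sum(v in G1 for v in rec_items)   # Exposure(G_1 | H_{u,K})
    psi_dp = len(G1) * exp0 - len(G0) * exp1
    psi_ek = alpha * exp0 - exp1
    return abs(psi_dp) + lam * abs(psi_ek)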
§.§ Counterfactual Explanation for Fairness
This work aims to generate attribute-level counterfactual explanations for item exposure fairness.
In particular, we aim to find the “minimal” changes in attributes that reduce the fairness disparity (cf. Eq. (<ref>)) of item exposure.
Formally, given historical user-item interaction Y={y_uv| u ∈𝒰, v ∈ℐ}, and user attribute set 𝒱_U and item attribute set 𝒱_I extracted from an external Heterogeneous Information Network (HIN) 𝒢=(𝒱^',ℰ^').
Suppose there exists a recommendation model that produces the recommendation result H_u, K for user u.
Given all user-item pairs (u,v) in H_u, K,
our goal is to find a minimal attribute set 𝒱^*⊆{{e_u, e_v}| (u, e_u), (v, e_v) ∈ℰ^', e_u ∈𝒱_U, e_v ∈𝒱_I}.
Each attribute in 𝒱^* is an attribute entity from HIN 𝒢, e.g., user's gender, item's genre.
With a minimal set of 𝒱^*, the counterfactual reasoning pursues to answer: what the fairness disparity would be, if 𝒱^* is applied to the recommendation model.
𝒱^* is recognized as a valid counterfactual explanation for fairness if, after applying 𝒱^*, the fairness disparity Δ(H_u, K^cf) of the intervened recommendation result is reduced compared with the original Δ(H_u, K).
In addition, 𝒱^* is minimal such that there is no smaller set 𝒱^*^'∈𝒢 satisfying |𝒱^*^'| < |𝒱^*| when 𝒱^*^' is also valid.
§ THE CFAIRER FRAMEWORK
We now introduce the framework of our Counterfactual Explanation for Fairness (CFairER).
As shown in Figure <ref>, CFairER devises three major components:
1) graph representation module embeds users, items, and attributes among HIN as embedding vectors;
2) recommendation model learns user and item latent factors to produce recommendation results and
3) our proposed counterfactual fairness explanation (CFE) model assisted by the graph representation module and the recommendation model to conduct counterfactual reasoning.
This section discusses how the CFE model collaborates with the other two components, then introduces the graph representation module and the recommendation model.
We will elaborate on our proposed CFE model in the next section.
§.§ Counterfactual Fairness Explanation Model
As shown in Figure <ref>, our CFE model is crafted within an off-policy learning environment, in which an explanation policy π_E is optimized to produce attribute-level counterfactual explanations for fairness.
At each state s_t, π_E produces actions a_t absorbing user and item attributes as potential counterfactual explanations.
These actions are committed to the recommendation model and graph representation module to produce the reward r(s_t, a_t) for optimizing π_E.
Specifically, the graph representation module provides dense vectors 𝐡_u, 𝐡_v, 𝐞_u and 𝐞_v as user, item, user attribute and item attribute embeddings, respectively.
Those embeddings are used in the state representation learning (i.e., learn s_t) and attentive action pruning (i.e., select a_t) in our CFE model.
Moreover, the attribute embeddings are fused with user or item latent factors learned by the recommendation model to explore the model fairness change.
In particular, the fused embeddings of users and items are used to predict the intervened recommendation result H_u, K^cf.
By comparing the fairness disparity (cf. Eq. (<ref>)) difference between H_u, K^cf and the original recommendation H_u, K, we determine the reward r(s_t, a_t) to optimize π_E, accordingly.
The reward r(s_t, a_t) measures whether the current attribute (i.e., action) is a feasible fairness explanation responsible for the fairness change.
Finally, π_E is optimized with a counterfactual risk minimization (CRM) objective ∇_ΘR(π_E) to balance the distribution discrepancy from the logging policy π_0.
§.§ Graph Representation Module
Our graph representation module conducts heterogeneous graph representation learning to produce dense vectors of users, items, and attributes among the HIN.
Compared with homogeneous graph learning such as GraphSage <cit.>, our graph representation injects both node and edge heterogeneity to preserve the complex structure of the HIN.
In particular, we include two weight matrices to specify varying weights of different node and edge types.
In the following, we present the graph learning for user embedding 𝐡_u.
The embeddings 𝐡_v, 𝐞_u and 𝐞_v can be obtained analogously by replacing the nodes and node types in the computation.
Specifically, we first use Multi-OneHot <cit.> to initialize node embeddings at the 0-th layer, in which u's embedding is denoted by 𝐡_u^0.
Then, at each layer l, user embedding 𝐡_u^l is given by aggregating node u's neighbor information w.r.t. different node and edge types:
𝐡_u^l=σ(concat [𝐖_ϕ(u)^lD_p[𝐡_u^l-1], 𝐖_ψ(e)^l/|𝒩_ψ(e)(u)|∑_u^'∈𝒩_ψ(e)(u)D_p[𝐡_u^'^l-1] ]+b^l)
where σ(·) is LeakyReLU <cit.> activation function and concat(·) is the concatenation operator.
D_p[·] is a random dropout with probability p applied to its argument vector.
𝐡_u^l-1 is u's embedding at layer l-1.
𝒩_ψ(e)(u)={u^'|(u, e, u^') ∈𝒢} is a set of nodes connected with user node u through edge type ψ(e).
The additionally dotted two weight matrices, i.e., node-type matrix 𝐖_ϕ(u)^l and edge-type matrix 𝐖_ψ(e)^l, are defined based on the importance of each type ϕ(u) and ψ(e).
b^l is an optional bias.
With Eq (<ref>), we obtain u's embedding 𝐡_u^l at each layer l ∈{1,⋯, L}.
We then adopt layer-aggregation <cit.> to combine u's embeddings at all layers into a single vector, i.e., 𝐡_u=𝐡_u^(1) + ⋯ + 𝐡_u^(L).
Finally, we have user node u's embedding 𝐡_u through aggregation.
The item embedding 𝐡_v, user attribute embedding 𝐞_u and item attribute embedding 𝐞_v can be calculated analogously.
§.§ Recommendation Model
The recommendation model f_R is initialized using user-item interaction matrix Y to produce the Top-K recommendation result H_u, K for all users.
Here, we employ a linear and simple matrix factorization (MF) <cit.> as the recommendation model f_R.
Particularly, MF initializes IDs of users and items as latent factors, and uses the inner product of user and item latent factors as the predictive function:
f_R(u,v)=U_u^⊤V_v
where U_u and V_v denote d-dimensional latent factors for user u and item v, respectively.
We use the cross-entropy <cit.> loss to define the objective function of the recommendation model:
ℒ_R = -∑_u, v, y_uv∈Y y_uvlog f_R(u,v)+(1-y_uv) log(1-f_R(u,v))
After optimizing the loss function ℒ_R, we can use the trained user and item latent factors (i.e., U, V) to produce the original Top-K recommendation lists H_u, K for all users u ∈𝒰.
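A minimal sketch of this step (our own helper; masking already-interacted items is a common practice we assume here, not something stated above):

import numpy as np

def top_k(U, V, u, K, seen):
    # Score every item for user u with the MF predictor f_R(u, v) = U_u^T V_v,
    # exclude items u already interacted with, and return the Top-K list H_{u,K}.
    scores = V @ U[u]                 # inner product with all item latent factors
    scores[list(seen)] = -np.inf      # mask training interactions
    return np.argsort(-scores)[:K]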
§ REINFORCEMENT LEARNING FOR COUNTERFACTUAL FAIRNESS EXPLANATION
We put forward our counterfactual fairness explanation (CFE) model (cf. Figure <ref>), assisted by graph representation module and recommendation model, to generate explanation policy π_E for item exposure fairness.
The explanation policy π_E is optimized within off-policy learning to adaptively learn attributes responsible for fairness changes.
In the following, we first introduce off-policy learning for our CFE model.
Then we detail each key element in the off-policy learning and give unbiased policy optimization.
§.§ Explaining as Off-policy Learning
We cast our CFE model in an off-policy learning environment, which is formulated as Markov Decision Process (MDP).
The MDP is provided with a static logged dataset generated by a logging policy π_0 [We adopt the uniform-based logging policy as π_0. It samples attributes as actions from the attribute space with the probability of π_0(a_t | s_t)=1/|𝒱_U+𝒱_I|.].
The logging policy π_0 collects trajectories by uniformly sampling actions from the user and item attribute space.
We use the off-policy learning to optimize an explanation (i.e., target) policy π_E by approximating the counterfactual rewards of state-action pairs from all timestamps, wherein the logging policy π_0 is employed for exploration while the target policy π_E is utilized for decision-making.
In the off-policy setting,
the explanation policy π_E does not require following the original pace of the logging policy π_0.
As a result, π_E is able to explore the counterfactual region, i.e., those actions that haven't been taken by the previous agent using π_0.
Formally, at each timestamp t ∈{1,⋯,T} of MDP, the explanation policy π_E(a_t|s_t) selects an action (i.e., a candidate attribute) a_t ∈𝒜_t conditioning on the user state s_t ∈𝒮, and receives counterfactual reward r(s_t, a_t) ∈ℛ for this particular state-action pair.
Then the current state transits to the next state s_t+1 with transition probability of ℙ(s_t+1| s_t, a_t)∈𝒫.
The whole MDP has the key elements:
* 𝒮 is a finite set of states {s_t | t∈ [1,⋯, T]}. Each state s_t is transformed into dense vectors (i.e., embeddings) by our state representation learning (cf. Section <ref>).
* 𝒜_t is a finite set of actions (i.e., attributes) available at s_t. 𝒜_t is select from attributes 𝒱_t ∈𝒢 by our attentive action pruning (cf. Section <ref>) to reduce the search space.
* 𝒫: 𝒮×𝒜→𝒮 is the state transition, which absorbs transition probabilities of the current states to the next states.
Given action a_t at state s_t, the transition to the next state s_t+1 is deterministic, i.e.,
ℙ(s_t+1| s_t, a_t)=1.
* ℛ: 𝒮→ℛ is the counterfactual reward measures whether a deployed action (i.e., an attribute) is a valid counterfactual explanation for fairness. ℛ is used to guide the explanation policy learning and is defined in Section <ref>.
We now introduce the implementation of each key component.
§.§.§ State Representation Learning.
The state 𝒮 describes target users and their recommendation lists from the recommendation model.
Formally, at step t, the state s_t for a user u is defined as s_t=(u, H(u,K)), where u ∈𝒰 is a target user and H(u,K) is the recommendation produced by f_R.
The initial state s_0 is (u, v) and v is the interacted item of u, i.e., y_uv∈Y=1.
Our state representation learning maps user state s_t=(u, H(u,K)) into dense vectors for latter explanation policy learning.
Specifically, given s_t that absorbs current user u and its recommendation H(u,K)={v_1,v_2,...,v_K}.
We first acquire the embedding 𝐡_v_k of each item v_k ∈ H(u,K) from our graph representation module.
The state s_t then receives the concatenated item embeddings (i.e., concat[𝐡_v_k|∀ v_k ∈ H(u,K)]) to update its representation.
Considering states within 𝒮 have sequential patterns <cit.>,
we resort to Recurrent Neural Networks (RNN) with a gated recurrent unit (GRU) <cit.> to capture the sequential state trajectory.
We firstly initialize the state representation s_0 with an initial distribution s_0∼ρ_0
[In our experiment, we used a fixed initial state distribution, where s_0 = 0 ∈ℝ^d].
Then we learn state representation s_t through the recurrent cell:
𝐮_t =σ_g(𝐖_1concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_1 s_t-1+b_1)
𝐫_t =σ_g(𝐖_2 concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_2 s_t-1+b_2)
ŝ_t =σ_h(𝐖_3concat[𝐡_v_k|∀ v_k ∈ H(u,K)]+𝐔_3(𝐫_t· s_t-1)+b_3)
s_t =(1-𝐮_t)⊙ s_t-1+𝐮_t⊙ŝ_t
where 𝐮_t and 𝐫_t denote the update gate and reset gate vector generated by GRU and ⊙ is the element-wise product operator.
𝐖_i, 𝐔_i are weight matrices and b_i is the bias vector.
Finally, s_t serves as the state representation at time step t.
§.§.§ Attentive Action Pruning.
Our attentive action pruning is designed to reduce the action search space by specifying the varying importance of actions for each state.
As a result, the sample efficiency can be largely increased by filtering out irrelevant actions to promote an efficient action search.
In our method, actions are defined as candidate attributes selected from a given HIN that potentially impact the model fairness.
In particular, given state s_t=(u, H(u,K)), we can distill a set of attributes 𝒱_t of the current user u and items v ∈ H(u,K) from the HIN.
Intuitively, we can directly use 𝒱_t as candidate actions for state s_t.
However, the user and item attribute amount of the HIN would be huge, resulting in a large search space that terribly degrades the learning efficiency <cit.>.
Thus, we propose an attentive action pruning based on attention mechanism <cit.> to select important candidate actions for each state.
Formally, given the embedding 𝐞_i for an attribute i ∈𝒱_t from Eq. (<ref>), and the state representation s_t from Eq. (<ref>), the attention score α_i of attribute i is:
α_i=ReLU(𝐖_s s_t+𝐖_h𝐞_i+b)
where 𝐖_s and 𝐖_h are two weight matrices and b is the bias vector.
We then normalize attentive scores of all attributes in 𝒱_t and select attributes with n-top attention scores into 𝒜_t:
𝒜_t={i | i ∈Top-n[exp(α_i)/∑_i^'∈𝒱_texp(α_i^')] and i ∈𝒱_t}
where n is the candidate size.
As a result, our candidate set 𝒜_t offers high sample efficiency, since it filters out irrelevant attributes while dynamically adapting to the user state shift.
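A minimal sketch of this pruning step is given below. Note that the equation above writes α_i as a vector; to rank attributes, this sketch collapses it to a scalar with an extra learned projection v, which is our assumption rather than a detail stated in the text.

import torch
import torch.nn as nn

class AttentiveActionPruning(nn.Module):
    """Scores candidate attributes against the state and keeps the top-n."""
    def __init__(self, state_dim: int, emb_dim: int, hidden: int = 64):
        super().__init__()
        self.W_s = nn.Linear(state_dim, hidden, bias=False)
        self.W_h = nn.Linear(emb_dim, hidden)      # its bias plays the role of b
        self.v = nn.Linear(hidden, 1, bias=False)  # collapses alpha_i to a scalar

    def forward(self, s_t: torch.Tensor, attr_embs: torch.Tensor, n: int) -> torch.Tensor:
        # s_t: (state_dim,); attr_embs: (|V_t|, emb_dim)
        alpha = torch.relu(self.W_s(s_t) + self.W_h(attr_embs))  # broadcast over attributes
        weights = torch.softmax(self.v(alpha).squeeze(-1), dim=0)
        return torch.topk(weights, k=min(n, weights.numel())).indices  # A_t as indices into V_t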
§.§.§ Counterfactual Reward Definition
The counterfactual reward r(s_t, a_t) ∈ℛ measures whether a deployed action a_t ∈𝒜_t is a valid counterfactual explanation for fairness at the current state s_t.
In particular, the reward is defined based on two criteria:
1) Rationality <cit.>: deploying an action (i.e., attribute) a_t should cause a reduction of the fairness disparity regarding item exposure fairness.
The fairness disparity change is measured by the difference between the disparity of the recommendation result before (i.e., Δ(H_u, K)) and after (i.e., Δ(H_u, K^cf)) fusing the action a_t into the recommendation model f_R, i.e., Δ(H_u, K)- Δ(H_u, K^cf).
2) Proximity <cit.>: a counterfactual explanation is a minimal set of attributes that changes the fairness disparity.
For the Rationality, we fuse the embedding of a_t with user or item latent factors from the recommendation model to learn updated user and item latent vectors, so as to get the Δ(H_u, K^cf).
Specifically, for a state s_t=(u, H(u,K)), the embedding 𝐞_t of action a_t is fused with the user latent factor U_u for user u and the item latent factors V_v_i for all items v_i ∈ H(u,K) by an element-wise product fusion.
As a result, we can get the updated latent factors U_u^cf and V_v^cf:
U_u^cf ←U_u⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_U
V_v_i^cf ←V_v_i⊙{𝐞_t|∀ t ∈ [1, ⋯, T]}, if a_t ∈𝒱_I
where ⊙ represents the element-wise product (a.k.a. Hadamard product).
T is the total number of training iterations.
At the initial state of t=0, the user and item latent factors U_u and V_v are learned from Eq. (<ref>).
Through Eq. (<ref>), the updated user and item latent vectors are then used to generate the intervened recommendation result H_u, K^cf.
For the Proximity, we check whether a_t yields a minimal set of attributes that changes the recommendation model fairness.
This is equivalent to regulating the user and item latent factors before (i.e., U_u, V_v) and after (i.e., U_u^cf, V_v^cf) fusing a_t to be as similar as possible.
Based on the two criteria, the counterfactual reward can be defined as the following form:
r(s_t, a_t) =
  1 + dist(U_u, U_u^cf) + dist(V_v, V_v^cf),   if Δ(H_u, K) - Δ(H_u, K^cf) ≥ ϵ
  dist(U_u, U_u^cf) + dist(V_v, V_v^cf),       otherwise
where dist(·) is the distance metric defined as cosine similarity <cit.>, i.e., dist(a,b) = ⟨ a, b⟩/(‖a‖ ‖b‖).
Δ(·) is the fairness disparity evaluation metric defined in Eq.(<ref>).
ϵ is the disparity change threshold that controls the model flexibility.
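The reward above can be computed directly from the latent factors and the two disparity values; a PyTorch sketch follows, where the threshold eps=0.05 and the scalar disparity inputs are illustrative placeholders rather than values from the paper.

import torch.nn.functional as F

def counterfactual_reward(U_u, U_u_cf, V_v, V_v_cf,
                          disparity_before, disparity_after, eps=0.05):
    """Proximity term always; +1 rationality bonus if the disparity drop exceeds eps."""
    proximity = (F.cosine_similarity(U_u, U_u_cf, dim=-1).mean()
                 + F.cosine_similarity(V_v, V_v_cf, dim=-1).mean())
    rational = (disparity_before - disparity_after) >= eps
    return proximity + (1.0 if rational else 0.0)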
§.§ Unbiased Policy Optimization
Using state s_t ∈𝒮 from Eq. (<ref>), candidate action a_t ∈𝒜_t from Eq. (<ref>), and counterfactual reward r(s_t, a_t) in Eq. (<ref>) for each timestamp t,
the policy optimization seeks the explanation policy π_E that maximizes the expected cumulative reward R(π_E) over total iteration T.
Intuitively, we can directly use the policy gradient calculated on R(π_E) to guide the optimization of π_E.
However, our policy optimization is conducted in the off-policy learning setting, in which π_E follows a different distribution from the logging policy π_0.
Directly optimizing R(π_E) would result in a biased policy optimization <cit.> due to the policy distribution discrepancy.
To this end, we additionally apply Counterfactual Risk Minimization (CRM) <cit.> to correct the discrepancy between π_E and π_0.
In particular, CRM employs an Inverse Propensity Scoring (IPS) <cit.> to explicitly estimate the distribution shift between π_E and π_0.
After applying the CRM, we can alleviate the policy distribution bias by calculating the CRM-based expected cumulative reward R(π_E):
R(π_E) = 𝔼_π_E[∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t)]
where π_E(a_t |s_t)/π_0(a_t |s_t) is called the propensity score for balancing the empirical risk estimated from the π_0.
Finally, the policy gradient of the explanation policy learning w.r.t. model parameter Θ is achieved by the REINFORCE <cit.>:
∇_ΘR(π_E)=1/T∑_t=0^Tγ^tπ_E(a_t| s_t)/π_0(a_t| s_t) r(s_t, a_t) ∇_Θlogπ_E(a_t | s_t)
where T is the total number of training iterations.
By optimizing the Eq. (<ref>), the learned explanation policy π_E generates minimal sets of attributes responsible for item exposure fairness changes, so as to find the true reasons leading to unfair recommendations.
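One way to realize this estimator in code is sketched below. The propensity weight is treated as a constant (stop-gradient) so that the gradient of the loss matches the REINFORCE expression above; the tensor shapes are illustrative assumptions.

import torch

def crm_policy_gradient_loss(logp_pi_E, logp_pi_0, rewards, gamma=0.99):
    """IPS-weighted REINFORCE objective, negated for gradient descent.

    logp_pi_E: (T,) log pi_E(a_t|s_t) from the learned policy (requires grad)
    logp_pi_0: (T,) log pi_0(a_t|s_t) from the logging policy (no grad)
    rewards:   (T,) counterfactual rewards r(s_t, a_t)
    """
    T = rewards.shape[0]
    discounts = gamma ** torch.arange(T, dtype=rewards.dtype)
    # Propensity scores correct the distribution shift between pi_E and pi_0.
    w = torch.exp(logp_pi_E.detach() - logp_pi_0)
    return -(discounts * w * rewards * logp_pi_E).mean()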
§ EXPERIMENTS
We conduct extensive experiments to evaluate the proposed CFairER for explaining item exposure fairness in recommendations.
We aim to particularly answer the following research questions:
* RQ1. Whether CFairER produces attribute-level explanations that are faithful to explaining recommendation model fairness compared with existing approaches?
* RQ2. Whether explanations provided by CFairER
achieve better fairness-accuracy trade-off than other methods?
* RQ3. Do different components (i.e., attentive action pruning, counterfactual risk minimization-based optimization) help CFairER to achieve better sample efficiency and bias alleviation? How do hyper-parameters impact CFairER?
§.§ Experimental Setup
§.§.§ Datasets
We use logged user behavior data from three datasets, Yelp [https://www.yelp.com/dataset/], Douban Movie [https://movie.douban.com/] and LastFM [https://github.com/librahu/HIN-Datasets-for-Recommendation-and-Network-Embedding], for evaluation.
Each dataset is considered as an independent benchmark for different tasks, i.e., business, movie and music recommendation tasks.
The Yelp dataset records user ratings on local businesses together with business compliment, category and city profiles.
Douban Movie is a movie recommendation dataset that contains user group information and movie actor, director and type details.
LastFM contains music listening records of users and artist tags.
The details of the three datasets are given in Table <ref>, which depicts statistics of user-item interactions, user-attribute and item-attribute relations.
All datasets constitute complex user-item interactions and diverse attributes, thus providing rich contextual information for fairness explanation learning.
Following previous works <cit.>, we adopt a 10-core setting, i.e., retaining users and items with at least ten interactions, for all three datasets to ensure data quality.
Meanwhile, we binarize the explicit rating data by interpreting ratings of 4 or higher as positive feedback, otherwise negative.
Then, we sort the interacted items for each user based on the timestamp and split the chronological interaction list into train/test/valid sets with a proportion of 60%/20%/20%.
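As an illustration, this preprocessing could be sketched as follows with pandas; the column names (user, item, rating, timestamp) are assumptions about the raw log format.

import pandas as pd

def preprocess(df: pd.DataFrame, min_inter: int = 10, pos_threshold: int = 4) -> pd.DataFrame:
    """10-core filtering, rating binarization, per-user chronological 60/20/20 split."""
    # Iteratively drop users/items with fewer than min_inter interactions
    # until the 10-core condition stabilizes.
    while True:
        n_before = len(df)
        df = df.groupby("user").filter(lambda g: len(g) >= min_inter)
        df = df.groupby("item").filter(lambda g: len(g) >= min_inter)
        if len(df) == n_before:
            break
    # Ratings of 4 or higher count as positive feedback.
    df = df.assign(label=(df["rating"] >= pos_threshold).astype(int))
    # Sort each user's interactions by time and split chronologically.
    parts = []
    for _, g in df.sort_values("timestamp").groupby("user"):
        n = len(g)
        fold = ["train"] * int(0.6 * n) + ["valid"] * int(0.2 * n)
        fold += ["test"] * (n - len(fold))
        parts.append(g.assign(fold=fold))
    return pd.concat(parts)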
We also study the long-tail distribution of user-item interactions in the three datasets.
We present the visualization results of the distribution of historical user-item interactions in the three datasets in Figure <ref>.
Analyzing Figure <ref>, we find that the user-item interactions of all three datasets follow a skewed distribution: a head-tailed distribution in the blue plot area and a long-tailed distribution in the yellow plot area.
Besides, a small fraction of popular items accounts for most of the user interactions in each dataset.
Such a skewed distribution results in serious item exposure unfairness issues in recommendations, such as the well-known filter-bubble problem <cit.> and the Matthew effect <cit.>.
§.§.§ Baselines
We adopt three heuristic approaches and two existing fairness-aware explainable recommendation methods as baselines.
In particular,
* RDExp: We randomly select attributes from the attribute space for each user-item interaction and generate explanations based on the selected attributes. Note that the selected attributes can be both user and item attributes.
* PopUser and PopItem: We separately calculate the exposure number of attributes for each user-item interaction, then sort each attribute chronologically based on the exposure number.
We devise a baseline PopUser, in which the top user attributes are selected as explanations. Analogously, we build PopItem that produces the top item attributes for the explanation.
* FairKGAT: uses FairKG4Rec <cit.> to mitigate the unfairness of explanations for a knowledge graph-enhanced recommender KGAT <cit.>.
FairKG4Rec <cit.> is a generalized fairness-aware algorithm that controls the unfairness of explanation diversity in the recommendation model.
KGAT <cit.> is a state-of-the-art knowledge graph-enhanced recommendation model that gives the best fairness performance in the original FairKG4Rec paper.
* CEF <cit.>: is the first work that explains fairness in recommendation.
It generates feature-based explanations for item exposure unfairness by perturbing user and item features and searches for features that change the fairness disparity.
Note that to the best of our knowledge, FairKGAT <cit.> and CEF <cit.> are the only two existing methods designed for explainable fairness recommendation tasks.
§.§.§ Explanation Faithfulness Evaluation
We adopt the widely used erasure-based evaluation criterion <cit.> in Explainable AI to evaluate the explanation faithfulness.
The erasure-based evaluation identifies the contributions of explanations by measuring model performance changes after these explanations are removed.
As a result, one can tell whether the model actually relied on these particular explanations to make a prediction, i.e., faithful to the model.
In our experiments, we use the erasure-based evaluation to test (I) the recommendation performance change and (II) the recommendation fairness change after a set of attributes from the generated explanation is removed.
By doing so, we can identify whether our explanations are faithful to recommendation performance and fairness disparity.
Following <cit.>, we remove certain attributes from the generated explanations and evaluate the resulting recommendation performance.
Therefore, in the starting evaluation point, we consider all attributes and add them to the user and item embeddings.
We then remove certain attributes from the generated explanations to observe recommendation and fairness changes at later evaluation points.
In particular,
we first use historical user-item interactions to train a recommendation model through Eq. (<ref>) to generate user and item embeddings.
Then, we fuse all attribute embeddings from Eq. (<ref>) with the trained user and item embeddings.
The user and item embeddings after fusion are used to generate recommendation results at the starting evaluation point.
Thereafter, we conduct counterfactual reasoning using our CFairER to generate attribute-level counterfactual explanations for model fairness.
Those generated explanations are defined as the erasure set of attributes for each user/item.
Finally, we exclude the erasure set from attribute space, and fuse the embeddings of attributes after erasure with the trained user and item embeddings to generate new recommendation results.
Given the recommendation results at each evaluation point, we use Normalized Discounted Cumulative Gain (NDCG)@K and Hit Ratio (HR)@K to measure the recommendation performance.
As this work focuses on item exposure fairness in recommendations, we use two widely-adopted item-side evaluation metrics, i.e., Head-tailed Rate (HT)@K and Gini@K, for fairness evaluation.
HT@K refers to the ratio of the head-tailed item number to the list length K.
A larger HT@K indicates that the model suffers from a more severe item exposure disparity by favoring items from the head-tailed (i.e., popular) group.
Gini@K measures inequality within subgroups among the Top-K recommendation list.
Larger Gini@K indicates the recommendation results are of higher inequality between the head-tailed and the long-tailed group.
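For concreteness, the two item-side metrics can be sketched as below; how items are assigned to the head group and how exposures are aggregated across users' top-K lists are choices not fixed by the text.

import numpy as np

def head_tail_rate(rec_list, head_items, K):
    """HT@K: fraction of the top-K list occupied by head (popular) items."""
    return sum(1 for v in rec_list[:K] if v in head_items) / K

def gini_at_k(exposures):
    """Gini@K over per-item exposure counts aggregated from top-K lists."""
    x = np.sort(np.asarray(exposures, dtype=float))
    n = x.size
    if n == 0 or x.sum() == 0:
        return 0.0
    # Standard Gini coefficient on ascending-sorted values.
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n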
§.§.§ Implementation Details
To demonstrate our CFairER, we employ a simple matrix factorization (MF) model as the recommendation model.
We train the MF model on train/test/validation sets split from the user-item interactions of each dataset with a 60%/20%/20% proportion.
We optimize the MF using stochastic gradient descent (SGD) <cit.>.
The same data splitting and gradient descent methods are applied in all baselines when required.
Our graph representation module employs two graph convolutional layers with {64, 128} output dimensions.
The FairKGAT baseline also keeps two layers.
The graph representation module outputs embeddings for all user and item attributes with the embedding size d=128.
The embedding size for FairKGAT and CEF is also fixed as d=128.
The number of latent factors (as in Eq. (<ref>)) of MF is set equal to the embedding size of our graph representation module.
To generate the starting evaluation point of erasure-based evaluation, we fuse attribute embeddings with the trained user and item latent factors based on element-wise product fusion.
The fused user and item embeddings are then used to produce Top-K recommendation lists.
We train our counterfactual fairness explanation model with SGD based on the REINFORCE <cit.> policy gradient.
For baseline model compatibility, as CEF <cit.> requires pre-defined user-feature attention matrix and item-feature quality matrix, we follow previous work <cit.> to regulate user/item attributes as user/item aspects and resort to analysis toolkit “Sentires” [https://github.com/evison/Sentires] to build the two matrices.
The hyper-parameters of our CFairER and all baselines are chosen by the grid search, including learning rate, L_2 norm regularization, discount factor γ, etc.
The disparity change threshold ϵ in Eq. (<ref>) of our CFairER is determined by performing a grid search on the validation set.
This enables us to choose the optimal value for a variety of recommendation tasks, including but not limited to business (Yelp), movie (Douban Movie), and music (LastFM) recommendations.
After all models have been trained, we freeze the model parameters and generate explanations accordingly.
We report the erasure-based evaluation results by recursively erasing top E attributes from the generated explanations.
The erasure length E is chosen from E=[5, 10, 15, 20].
The recommendation and fairness performance of our CFairER and baselines under different E is reported in Table <ref>.
§.§ Explanation Faithfulness (RQ1, RQ2)
We plot fairness and recommendation performance changes of our CFairER and baselines while erasing attributes from explanations in Figure <ref>.
Each data point in Figure <ref> is generated by cumulatively erasing a batch of attributes.
Those erased attributes are selected from the top 10 (i.e., E=10) attribute sets of the explanation lists provided by each method.[For example, given n explanation lists, the number of erased attributes is n × 10. We cumulatively erase m attributes in one batch, over (n × 10) / m iterations in total.]
As PopUser and PopItem baselines enjoy very similar data trends, we choose not to present them simultaneously in Figure <ref>.
Table <ref> presents recommendation and fairness performance after erasing E = [5, 10, 20] attributes in explanations.
Larger NDCG@K and Hit Ratio @K values indicate better recommendation performance while smaller Head-tailed Rate@K and Gini@K values represent better fairness.
Analyzing Figure <ref> and Table <ref>, we have the following findings.
Amongst all methods, our CFairER achieves the best recommendation and fairness performance after erasing attributes from our explanations on all datasets.
For instance, CFairER beats the strongest baseline CEF by 25.9%, 24.4%, 8.3% and 36.0% for NDCG@40, Hit Ratio@40, Head-tailed Rate@40 and Gini@40 with erasure length E=20 on .
This indicates that explanations generated by CFairER are faithful to explaining unfair factors while not harming recommendation accuracy.
Unlike CEF and FairKGAT, which generate explanations based on perturbing input features and adding fair-related constraints, CFairER generates counterfactual explanations by inferring minimal attributes contributing to fairness changes.
As a counterfactual explanation is minimal, it only discovers attributes that well-explain the model fairness while filtering out tedious ones that affect the recommendation accuracy.
Another interesting finding is that
PopUser and PopItem perform even worse than RDExp (i.e., randomly selecting attributes) on .
This is because recommending items with popular attributes would deprive the exposure of less-noticeable items, causing serious model unfairness and degraded recommendation performance.
In general, the fairness of all models consistently improves while erasing attributes from explanations, shown by the decreasing trend of Head-tailed Rate@K values in Figure <ref>.
This is because erasing attributes will alleviate the discrimination against users and items from disadvantaged groups (e.g., gender group, brand group), making more under-represented items to be recommended.
Unfortunately,
we can also observe the downgraded recommendation performance of all models in both Figure <ref> and Table <ref>.
For example, in Figure <ref>, the NDCG@5 of CEF drops from approximately 1.17 to 0.60 between erasure iterations 0 and 50.
This is due to the well-known fairness-accuracy trade-off issue, in which the fairness constraint could be achieved with a sacrifice of recommendation performance.
Facing this issue, the baselines suffer from large declines in recommendation performance, as shown in Table <ref>.
On the contrary, our CFairER still enjoys favorable recommendation performance and outperforms all baselines.
Besides, the decline rates of our CFairER are much slower than those of the baselines on all datasets in Figure <ref>.
We hence conclude that the attribute-level explanations provided by our CFairER can achieve a much better fairness-accuracy trade-off than other methods.
This is because our CFairER uses counterfactual reasoning to generate minimal but vital attributes as explanations for model fairness.
Those attributes produced by CFairER are true reasons for unfairness but not the ones that affect the recommendation accuracy.
§.§ Ablation and Parameter Analysis (RQ3)
We first conduct an in-depth ablation study on the ability of our CFairER to achieve sample efficiency and bias alleviation.
Our CFairER includes two contributing components,
namely, attentive action pruning (cf. Section <ref>) and counterfactual risk minimization-based optimization (cf. Section <ref>).
We evaluate our CFairER with different variant combinations and show our main findings below.
§.§.§ Sample Efficiency of Attentive Action Pruning
Our attentive action pruning reduces the action search space by specifying varying importance of attributes for each state.
As a result, the sample efficiency can be increased by filtering out irrelevant attributes to promote an efficient action search.
To demonstrate our attentive action pruning, we test CFairER without it (i.e., CFairER w/o Attentive Action Pruning), in which the candidate action set absorbs all attributes connected with the current user and items.
From Table <ref>, we observe that removing the attentive action pruning downgrades CFairER performance, which validates the superiority of our attentive action pruning in improving fair recommendations.
This is because attentive action pruning filters out irrelevant items based on their contributions to the current state, resulting in enhanced sample efficiency.
Moreover, the performance of CFairER after removing the attentive action pruning downgrades most severely on the dataset with the largest number of attributes (cf. Table <ref>), which challenges our CFairER to find suitable attributes as fairness explanations.
These findings suggest the superiority of applying attentive action pruning in fairness explanation learning, especially when the attribute size is large.
§.§.§ Bias Alleviation with Counterfactual Risk Minimization
Our CFairER is optimized with a counterfactual risk minimization (CRM) loss to achieve unbiased policy optimization.
The CRM loss (cf. Eq. (<ref>)) corrects the discrepancy between the explanation policy and logging policy, thus alleviating the policy distribution bias in the off-policy learning setting.
To demonstrate the CRM loss,
we apply our CFairER with a cross-entropy (CE) loss <cit.> (i.e., CRM loss → cross-entropy loss) to show how it performs compared with CFairER trained on the CRM loss.
We observe our CFairER with CRM loss consistently outperforms the counterpart with CE loss on both fairness and recommendation performance.
The sub-optimal performance of our CFairER with CE loss indicates that the bias issue in the off-policy learning can lead to downgraded performance for the learning agent.
On the contrary, our CFairER takes advantage of CRM to learn a high-quality explanation policy.
We hence conclude that performing unbiased optimization with CRM is critical to achieving favorable fairness explanation learning.
§.§.§ Parameter Analysis
We also conduct a parameter analysis on how erasure length E (cf. Section <ref>) and candidate size n (as in Eq. (<ref>)) impact CFairER.
Figure <ref> (a) and Figure <ref> (b) report CFairER performance w.r.t. E=[5, 10, 15, 20].
Apparently, the performance of CFairER demonstrates decreasing trends from E=5, then becomes stable after E=10.
The decreased performance is due to the increasing erasure of attributes found by our generated explanations.
This indicates that our CFairER can find valid attribute-level explanations that impact fair recommendations.
The performance of CFairER degrades slightly after reaching its lowest point, then becomes stable.
This is reasonable, since the number of attributes provided in the datasets is limited, while increasing the erasure length allows more attributes overlapping with previous erasures to be found.
By varying the candidate size n over n=[10, 20, 30, 40, 50, 60] in Figure <ref> (c) (d),
we observe that CFairER performance first improves drastically as the candidate size increases.
The performance of our CFairER reaches peaks at n=40 and n=30 on the two datasets, respectively.
After the peaks, we can witness a downgraded model performance by increasing the candidate size further.
We attribute the poorer performance of CFairER before reaching the peaks to the limited candidate pool, i.e., insufficient attributes limit the exploration ability of CFairER to find appropriate candidates as fairness explanations.
Meanwhile, a too-large candidate pool (e.g., n=60) would offer more chances for the agent to select inadequate attributes as explanations.
Based on these two findings, we believe it is necessary for our CFairER to carry out the attentive action search, i.e., to select high-quality attributes as candidates based on their contributions to the current state.
§.§.§ Time Complexity and Computation Costs
For time complexity, our recommendation model (cf. Section <ref>) performs matrix factorization with a complexity of O(|𝒪|).
For the graph representation module (cf. Section <ref>), establishing node representations has complexity O(∑_l=1^L (|𝒢|+|𝒪^+|) d_l d_l-1).
For the off-policy learning process (cf. Section <ref>), the complexity is mainly determined by the attention score calculation, which has a time complexity of O(2T|𝒪^+| |𝒩̃_e| d^2).
The total time complexity is O(|𝒪|+ ∑_l=1^L(|𝒢|+|𝒪^+|) d_l d_l-1+2T|𝒪^+| |𝒩̃_e| d^2).
We evaluated the running time of FairKGAT and CEF baselines on the large-scale dataset.
The corresponding results are 232s and 379s per epoch, respectively.
CFairER has a comparable cost of 284s per epoch to these baselines. Considering that our CFairER achieves superior explainability improvements compared to the baselines, we believe that the increased cost of, at most, 52s per epoch is a reasonable trade-off.
§ CONCLUSION
We propose CFairER, a reinforcement learning-based fairness explanation learning framework over a HIN.
Our CFairER generates counterfactual explanations as minimal sets of real-world attributes to explain item exposure fairness.
We design a counterfactual fairness explanation model to discover high-quality counterfactual explanations, driven by an attentive action pruning to reduce the search space and a counterfactual reward to enable counterfactual reasoning.
Extensive evaluations on three benchmark datasets demonstrate CFairER’s ability to find faithful explanations for fairness and balance the fairness-accuracy trade-off.
This work is supported by the Australian Research Council (ARC) under Grant No. DP220103717, LE220100078, LP170100891 and DP200101374.
|
http://arxiv.org/abs/2307.05544v1 | 20230708211940 | Coupling high-overtone bulk acoustic wave resonators via superconducting qubits | [
"Wayne Crump",
"Alpo Välimaa",
"Mika A. Sillanpää"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
In this work, we present a device consisting of two coupled transmon qubits, each of which are coupled to an independent high-overtone bulk acoustic wave resonator (HBAR). Both HBAR resonators support a plethora of acoustic modes, which can couple to the qubit near resonantly. We first show qubit-qubit interaction in the multimode system, and finally quantum state transfer where an excitation is swapped from an HBAR mode of one qubit, to an HBAR mode of the other qubit.
Coupling high-overtone bulk acoustic wave resonators via superconducting qubits
Mika A. Sillanpää
===============================================================================
Hybrid quantum systems seek to combine strengths and offset weaknesses of different quantum technologies in order to improve capability beyond that of any one technology. Superconducting circuits are one of the more mature quantum technologies at this stage and have been integrated with many other systems due to the relative ease in design and fabrication as well as good coherence times <cit.>.
Many different acoustic systems have been integrated with superconducting circuits such as nanomechanical oscillators <cit.>, phononic crystals <cit.>, bulk acoustic wave systems <cit.> and surface acoustic wave systems <cit.>. Acoustic resonators can offer great coherence properties <cit.> as well as smaller mode volumes due to the relation between wave velocity and wavelength, with the difficulty coming in coupling these resonators strongly with electromagnetic systems.
The strong coupling of acoustic modes with superconducting qubits has resulted in many experiments exploring the quantum nature of mechanical oscillations, with experiments demonstrating number splitting <cit.>, the creation of non-classical states in the acoustic mode <cit.>, Landau-Zener-Stückelberg interference <cit.>, and entanglement <cit.>. The ability to prepare acoustic resonators in arbitrary quantum states opens up the possibility of using them in applications such as quantum memories due to their coherence properties and insensitivity to electromagnetic noise.
High-overtone bulk acoustic wave resonators (HBAR) offer access to mechanical modes in the GHz regime, making them attractive for integration with superconducting qubits. The piezoelectric interaction enables coupling in the strong regime and their state to be controlled and read-out using the qubit. The system has been implemented using a 3D <cit.> and 2D <cit.> transmon architecture with part or all of the qubit capacitor directly patterned on the piezo layer of the HBAR. This was later improved in both cases by using a flip-chip design <cit.> which has lead to the current state of the art <cit.>. Experiments on these system have demonstrated the creation of non-classical multiphonon states <cit.>, demonstration of dispersive readout for a parity measurement of the mechanical mode <cit.>, and sideband control of the mechanical modes <cit.>.
Work thus far has focused on coupling of a qubit and a single HBAR device supporting a set of acoustic modes. In this work we couple two complete qubit-HBAR systems together via qubit-qubit interaction, and transfer excitations within the system, including between the HBAR modes. This demonstrates the possibility of integrating multiple HBAR devices into quantum circuits enabling the exploration of much larger and complex systems.
In the system there are two qubits which are coupled together as well as being individually coupled to a set of HBAR modes. The qubit-mode couplings can be described by the Jaynes-Cummings model, and the qubit-qubit coupling will be capacitive and therefore expected to take the iSWAP form
<cit.>. The system as a whole can then be described by the Hamiltonian:
H/ħ = ω_1/2σ_(z,1) + ω_2/2σ_(z,2) + J (σ_(+,1)σ_(-,2) + σ_(-,1)σ_(+,2))
+ ∑_m [ ω_(m,1)( a_(m,1)^† a_(m,1) + 1/2) + g_(m,1)(a_(m,1)^†σ_(-,1) + a_(m,1)σ_(+,1))]
+ ∑_n [ ω_(n,2)( a_(n,2)^† a_(n,2) + 1/2) + g_(n,2)(a_(n,2)^†σ_(-,2) + a_(n,2)σ_(+,2))] ,
where ω_1 and ω_2 are the qubit frequencies, J is the qubit-qubit coupling, ω_(m,1) and ω_(n,2) are the HBAR mode frequencies corresponding to their respective qubits, and g_(m,1), g_(n,2) are the couplings to the HBAR modes. The σ_(i,j) are the Pauli operators, and a_m, a_m^† are the annihilation and creation operators.
In order to theoretically analyze the experiments described below, we determine the time evolution of the system using the Lindblad master equation. We include the qubits' decay and dephasing, as well as mechanical mode decay.
Figure <ref> shows an optical image of the device used for the experiments.
The device consists of a superconducting circuit with two qubits, each with their own readout, flux bias control and excitation lines. The qubits have a capacitive coupling to each other, as well as to the HBAR flip chip that covers both. The qubits have a round pad on the bottom arm, around 80 μm in diameter, which defines the capacitive coupling to the HBAR chip. The circuit was patterned using electron-beam lithography and metalised with evaporated aluminium. Double-angle evaporation was used to create the Josephson junctions for the qubits.
The HBAR flip chip consists of a 900 nm AlN piezo layer, a 250 μm sapphire layer and a 60 nm Mo layer in-between to act as a ground plane to enhance the coupling to the mechanical modes <cit.>. The HBAR was placed by hand onto the circuit chip and glued with standard epoxy.
The qubit frequencies can be tuned in the range 3.7-4.5 GHz and have readout resonator frequencies of 6.230 GHz and 6.013 GHz. The operating points of the qubits were chosen to maximise their coherence properties and hence they are operating at or close to their minimum frequencies, as shown in figure <ref>.
The bottom two plots of figure <ref> show two-tone measurements sweeping the qubit frequencies in the neighbourhood of their operating frequencies chosen for later experiments. The operating frequency of qubit 1 was set near its minimum at ω_1,OP/2π = 3.7778 GHz and qubit 2 at its minimum at ω_2,OP/2π = 3.6673 GHz. The many small anticrossings occur when a qubit is sweeping past an HBAR mode, while the larger anticrossing at 3.778 GHz seen in the data for qubit 2 corresponds to the qubit-qubit coupling. The spacing between HBAR modes (free spectral range, FSR) is around 22 MHz which corresponds well with the thickness of the HBAR sapphire layer. The dashed lines show the eigenvalues according to equation <ref>.
At the qubits' respective operating points, they had T_1 values of 2.2 μs and 2.41 μs, and T_2 values of 4.41 μs and 1.02 μs. Their respective 2g couplings to their HBAR modes were 2.55 MHz and 2.85 MHz, with mechanical T_1 values of 380 ns and 320 ns. The system had a qubit-qubit 2g coupling of 16.7 MHz.
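The Lindblad simulation described above can be sketched with QuTiP using these measured parameters. The sketch truncates each HBAR to a single mode with a small Fock cutoff, works in a frame rotating at the qubit-1 frequency (valid because every term in the Hamiltonian conserves excitation number), picks an illustrative frequency for the mode coupled to qubit 2, and treats T_2 as a pure-dephasing time; these are all simplifying assumptions, not the analysis actually used in the paper.

import numpy as np
from qutip import tensor, qeye, destroy, sigmam, sigmaz, basis, mesolve

N = 3  # Fock cutoff per HBAR mode
sm1 = tensor(sigmam(), qeye(2), qeye(N), qeye(N))
sm2 = tensor(qeye(2), sigmam(), qeye(N), qeye(N))
a1 = tensor(qeye(2), qeye(2), destroy(N), qeye(N))
a2 = tensor(qeye(2), qeye(2), qeye(N), destroy(N))
sz1 = tensor(sigmaz(), qeye(2), qeye(N), qeye(N))
sz2 = tensor(qeye(2), sigmaz(), qeye(N), qeye(N))

GHz = 2 * np.pi  # angular frequency (rad/ns) for a 1 GHz linear frequency
# Detunings from qubit 1 at 3.7778 GHz; the 3.6700 GHz mode frequency is illustrative.
H = (GHz * (3.6673 - 3.7778) * sm2.dag() * sm2
     + GHz * (3.7885 - 3.7778) * a1.dag() * a1
     + GHz * (3.6700 - 3.7778) * a2.dag() * a2
     + GHz * 8.35e-3 * (sm1.dag() * sm2 + sm1 * sm2.dag())   # J = 16.7/2 MHz
     + GHz * 1.275e-3 * (a1.dag() * sm1 + a1 * sm1.dag())    # g1 = 2.55/2 MHz
     + GHz * 1.425e-3 * (a2.dag() * sm2 + a2 * sm2.dag()))   # g2 = 2.85/2 MHz

# Collapse operators: qubit decay/dephasing and mechanical decay (times in ns).
T1q, T2q, T1m = (2.2e3, 2.41e3), (4.41e3, 1.02e3), (380.0, 320.0)
c_ops = [np.sqrt(1 / T1q[0]) * sm1, np.sqrt(1 / T1q[1]) * sm2,
         np.sqrt(1 / (2 * T2q[0])) * sz1, np.sqrt(1 / (2 * T2q[1])) * sz2,
         np.sqrt(1 / T1m[0]) * a1, np.sqrt(1 / T1m[1]) * a2]

ground = tensor(basis(2, 1), basis(2, 1), basis(N, 0), basis(N, 0))
psi0 = (sm1.dag() * ground).unit()   # start with qubit 1 excited
tlist = np.linspace(0.0, 2000.0, 400)
out = mesolve(H, psi0, tlist, c_ops,
              e_ops=[sm1.dag() * sm1, sm2.dag() * sm2, a1.dag() * a1, a2.dag() * a2])
# out.expect[i] holds the excitation of each subsystem versus time.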
Figure <ref> shows a vacuum Rabi oscillation experiment where an excitation is swapping between an initially excited qubit and its coupled mechanical modes. In panels (a,b) qubit 2 is being controlled and measured and we see vacuum Rabi oscillations with the mechanical modes (red arrows) and also with the other qubit (blue arrows), corresponding with the anticrossings seen in figure <ref> bottom right. In figure <ref> (c,d) qubit 1 is controlled and experiences vacuum Rabi oscillations with its coupled mechanical modes following the anticrossings seen in figure <ref> bottom left. Since the flux is tuned in the positive direction, it first sweeps on resonance with the lower mode and then with the upper mode seen in figure <ref> bottom right.
If one looks closely, the vacuum Rabi oscillation fringes can be seen to be asymmetric, especially in figure <ref> (a). The source of this is unknown, and it results in deviations from the theory at later simulation times. Some slight asymmetry could be generated for the nearest mode by including the effect of the π pulse specifically in the simulations, but this was not enough to reproduce the long tail of the fringes from the mode nearest the qubit operating point seen in figure <ref> (a), which extend very far, up to where qubit 1 is. It can also be seen in figure <ref> (a) that the vacuum Rabi oscillations with qubit 1 also show these extended fringes on the right side. This behaviour may be related to the same phenomenon that is seen in the frequency domain, where at the avoided crossing the upper branch has less weight than the lower branch. It is possible that at least some of the asymmetry is caused by pulse distortion <cit.>.
The line cuts in figure <ref> (b) show a double oscillation feature that occurs when qubit 2 is near the qubit 1 frequency. This is because the excitation is experiencing Rabi oscillations with both the other qubit and the nearby acoustic modes at the same time but on different time scales, hence the multiple oscillating feature.
It is important to determine whether or not the qubits couple to the same set of acoustic modes. The issue is nontrivial since on one hand, the qubits are in close proximity to each other and share the same HBAR chip, which would point to delocalized acoustic modes. On the other hand, one could argue that the electric field of either qubit should confine the HBAR mode only below its own electrode. We attempted to carry out finite-element simulations, however, full 3-dimensional solution was beyond reach. In 2 dimensions and with a limited overtone number, we saw indications of a delocalized acoustic mode, with the study showing that moving the qubit coupling pad changed the strength of coupling to modes of different lateral profile. Experimentally, the issue cannot immediately be resolved in spectroscopy, since the HBAR spectral lines in figure <ref> are equal within measurement uncertainties, which however is expected based on the geometry. A time-domain experiment was done to confirm that the qubits couple to their individual sets of acoustic modes. This was done by swapping an excitation from qubit 1 to its acoustic mode at 3.788 GHz, and then tuning it away whilst tuning the qubit 2 on resonance with this mode. The experiment found no response and so concluded that the qubits indeed couple to separate modes with any stray coupling being too weak to observe.
Finally, we demonstrate the swapping of an excitation through the degrees of freedom of the system. Figure <ref> shows the pulse sequence and measured data. The excitation swaps from the 3.7885 GHz HBAR mode coupled to qubit 1 all the way to various HBAR modes coupled to qubit 2. The resulting measurement data is similar to figure <ref> (a) as the last part of the pulse sequence is similar to that experiment, however this excitation has travelled from an acoustic mode coupled to the opposite qubit hence why the initial excited state population is reduced due to decoherence.
Now that we have shown the ability to transfer excitations around the system, we would in principle be able to create an entangled state between arbitrary acoustic modes. However, due to the limited coherence of the system, we were not able to measure this in practice. One needs to measure the entangled modes simultaneously under a series of tomography pulses in order to produce the density matrix of the system (for example see <cit.>). This was not straightforward to do in our system, as the acoustic modes are coupled to different qubits, meaning we would need to read out the acoustic modes in single shot to be able to correlate the results. We are limited both by our single-shot readout fidelity of <60%, and by not being in the strong dispersive regime, which would require acoustic T_1 times of 8 μs at our coupling magnitudes.
A possible simplification is to only measure an entangled state which does not occupy number states higher than |1⟩, so that the state can be swapped back to the qubits and measured there. Due to the low readout fidelity, we have to use an ensemble measurement. There is a tomography pulse scheme to measure the two-qubit density matrix using an ensemble measurement <cit.>. This requires an appropriate two-qubit gate as a part of the tomography pulse scheme, and in our case this would be an iSWAP pulse. The calibration of this iSWAP pulse was problematic, having a fidelity of 55%, which was not sufficient to perform the two-qubit tomography. We estimate that a gate fidelity higher than about 70% would be required to be able to perform the measurement.
In order to improve the fidelity of single and two-qubit gates in the system, one would like the FSR to be larger than the coupling by a factor of at least 20. This is so that if the qubit is in between two modes, it will only interact dispersively. Also the FSR should be larger than inverse pulse widths, so that these are not exciting nearby mechanical modes as well. Longer coherence times for both the qubits and the acoustics are important towards this end. The ideal solution would be the development of a tunable coupler, to be able to selectively couple to modes of interest, which is important for using HBARs in quantum information processing.
In conclusion we have fabricated and measured a sample consisting of two qubits each coupled to an individual set of high overtone bulk acoustic (HBAR) modes as well as to each other. An excitation was swapped from an HBAR mode coupled with one qubit, to an HBAR mode coupled to the other qubit. This demonstrates the possibility to integrate multiple HBAR devices into a superconducting circuit, where complex quantum states could be stored across these devices.
We would like to thank Mikael Kervinen for useful discussion. We acknowledge the facilities and technical support of Otaniemi research infrastructure for Micro and Nanotechnologies (OtaNano) that is part of the European Microkelvin Platform. This work was supported by the Academy of Finland (contracts 307757), by the European Research Council (101019712), and by the Wihuri Foundation. We acknowledge funding from the European Union's Horizon 2020 research and innovation program under the QuantERA II Programme (13352189). The work was performed as part of the Academy of Finland Centre of Excellence program (project 336810).
[1] A. A. Clerk, K. W. Lehnert, P. Bertet, J. R. Petta, and Y. Nakamura, "Hybrid quantum systems with circuit quantum electrodynamics," Nature Physics 16, 257–267 (2020).
[2] C. A. Regal, J. D. Teufel, and K. W. Lehnert, "Measuring nanomechanical motion with a microwave cavity interferometer," Nature Physics 4, 555–560 (2008).
[3] J. D. Teufel, D. Li, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, and R. W. Simmonds, "Circuit cavity electromechanics in the strong-coupling regime," Nature 471, 204–208 (2011).
[4] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, "Quantum ground state and single-phonon control of a mechanical resonator," Nature 464, 697–703 (2010).
[5] P. Arrangoiz-Arriola, E. A. Wollack, Z. Wang, M. Pechal, W. Jiang, T. P. McKenna, J. D. Witmer, R. Van Laer, and A. H. Safavi-Naeini, "Resolving the energy levels of a nanomechanical oscillator," Nature 571, 537–540 (2019).
[6] Y. Chu, P. Kharel, W. H. Renninger, L. D. Burkhart, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Quantum acoustics with superconducting qubits," Science 358, 199–202 (2017).
[7] M. Kervinen, I. Rissanen, and M. Sillanpää, "Interfacing planar superconducting qubits with high overtone bulk acoustic phonons," Physical Review B 97, 205443 (2018).
[8] M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekström, G. Johansson, and P. Delsing, "Propagating phonons coupled to an artificial atom," Science 346, 207–211 (2014).
[9] A. Noguchi, R. Yamazaki, Y. Tabuchi, and Y. Nakamura, "Qubit-assisted transduction for a detection of surface acoustic waves near the quantum limit," Physical Review Letters 119, 180505 (2017).
[10] B. A. Moores, L. R. Sletten, J. J. Viennot, and K. W. Lehnert, "Cavity quantum acoustic device in the multimode strong coupling regime," Physical Review Letters 120, 227701 (2018).
[11] A. Bienfait, K. J. Satzinger, Y. P. Zhong, H.-S. Chang, M.-H. Chou, C. R. Conner, É. Dumur, J. Grebel, G. A. Peairs, R. G. Povey, and A. N. Cleland, "Phonon-mediated quantum state transfer and remote qubit entanglement," Science 364, 368–371 (2019).
[12] V. J. Gokhale, B. P. Downey, D. S. Katzer, N. Nepal, A. C. Lang, R. M. Stroud, and D. J. Meyer, "Epitaxial bulk acoustic wave resonators as highly coherent multi-phonon sources for quantum acoustodynamics," Nature Communications 11, 2314 (2020).
[13] Y. Chu, P. Kharel, T. Yoon, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, "Creation and control of multi-phonon Fock states in a bulk acoustic-wave resonator," Nature 563, 666–670 (2018).
[14] M. Kervinen, J. E. Ramírez-Muñoz, A. Välimaa, and M. A. Sillanpää, "Landau-Zener-Stückelberg interference in a multimode electromechanical system in the quantum regime," Physical Review Letters 123, 240401 (2019).
[15] E. A. Wollack, A. Y. Cleland, R. G. Gruenke, Z. Wang, P. Arrangoiz-Arriola, and A. H. Safavi-Naeini, "Quantum state preparation and tomography of entangled mechanical resonators," Nature 604, 463–467 (2022).
[16] M. Kervinen, A. Välimaa, J. E. Ramírez-Muñoz, and M. A. Sillanpää, "Sideband control of a multimode quantum bulk acoustic system," Physical Review Applied 14, 054023 (2020).
[17] U. von Lüpke, Y. Yang, M. Bild, L. Michaud, M. Fadel, and Y. Chu, "Parity measurement in the strong dispersive regime of circuit quantum acoustodynamics," Nature Physics 18, 794–799 (2022).
[18] S. Kwon, A. Tomonaga, G. Lakshmi Bhai, S. J. Devitt, and J.-S. Tsai, "Gate-based superconducting quantum computing," Journal of Applied Physics 129, 041102 (2021).
[19] M. A. Rol, L. Ciorciaro, F. K. Malinowski, B. M. Tarasinski, R. E. Sagastizabal, C. C. Bultink, Y. Salathe, N. Haandbaek, J. Sedivy, and L. DiCarlo, "Time-domain characterization and correction of on-chip distortion of control pulses in a quantum processor," Applied Physics Letters 116, 054001 (2020).
[20] M. Li, G. Xue, X. Tan, Q. Liu, K. Dai, K. Zhang, H. Yu, and Y. Yu, "Two-qubit state tomography with ensemble average in coupled superconducting qubits," Applied Physics Letters 110, 132602 (2017).
|
http://arxiv.org/abs/2307.03869v1 | 20230708004501 | Sketch-A-Shape: Zero-Shot Sketch-to-3D Shape Generation | [
"Aditya Sanghi",
"Pradeep Kumar Jayaraman",
"Arianna Rampini",
"Joseph Lambourne",
"Hooman Shayani",
"Evan Atherton",
"Saeid Asgari Taghanaki"
] | cs.CV | [
"cs.CV"
] |
: Zero-Shot Sketch-to-3D Shape Generation
Aditya Sanghi Pradeep Kumar Jayaraman Arianna Rampini Joseph Lambourne
Hooman Shayani Evan Atherton Saeid Asgari Taghanaki
Autodesk Research
=====================================================================================================================================================================================
Figure: Sketch-A-Shape is a zero-shot sketch-to-3D generative model. Here we show how our method can generalize across voxel, implicit, and CAD representations and synthesize consistent 3D shapes from a variety of inputs ranging from casual doodles to professional sketches with different levels of ambiguity.
Significant progress has recently been made in creative applications of large pre-trained models for downstream tasks in 3D vision, such as text-to-shape generation.
This motivates our investigation of how these pre-trained models can be used effectively to generate 3D shapes from sketches, which has largely remained an open challenge due to the limited sketch-shape paired datasets and the varying level of abstraction in the sketches.
We discover that conditioning a 3D generative model on the features (obtained from a frozen large pre-trained vision model) of synthetic renderings during training enables us to effectively generate 3D shapes from sketches at inference time.
This suggests that the features of large pre-trained vision models carry semantic signals that are resilient to domain shifts, allowing us to train with only RGB renderings while generalizing to sketches at inference time.
We conduct a comprehensive set of experiments investigating different design factors and demonstrate the effectiveness of our straightforward approach for generating multiple 3D shapes per input sketch, regardless of their level of abstraction, without requiring any paired datasets during training.
§ INTRODUCTION
Throughout history, humans have used drawings and other visual representations to communicate complex ideas, concepts, and information.
As hand-drawn sketches have a high level of abstraction, they allow unskilled artists or even young children to convey semantic information about 3D objects <cit.>, while providing trained professionals with a way to quickly express important geometric and stylistic details of a 3D design.
The ability to create 3D models which can capture the essence of simple doodles while accurately reproducing 3D shapes described by concept design sketches will make 3D modelling more accessible to the general public, while allowing designers to rapidly explore many different design ideas and create virtual models that more accurately reflect the shape, size, and characteristics of real-world objects and environments.
Previous studies have endeavored to employ deep learning techniques in generating 3D shapes from sketches <cit.>, yet there are several limitations that hinder their widespread application.
Firstly, there is a lack of (sketch, 3D shape) paired data at a large scale, which forces most methods to be trained on synthetic datasets or on data collected for only a few categories.
Even when a small number of categories of paired sketch-shape data has been collected <cit.>, current methods fail to generalize to different levels of abstraction in the sketches, ranging from casual doodles to detailed professional drawings.
Finally, most of the present methods incorporate strong inductive biases, such as view information <cit.>, differentiable rendering <cit.> and depth estimation <cit.>, thereby constraining their generalizability across 3D representations.
To overcome the challenge of limited availability of paired data, a potential solution is to use the prior knowledge encoded in large pre-trained image-text models. Recently, these large pre-trained models have been successfully applied to the 3D domain in creative ways, such as guiding the optimization of differentiable 3D representations <cit.>, generating 3D shapes from text prompts via the interchangeability of text-image embeddings <cit.>, or serving as backbones for representation learning <cit.>.
In this paper, we introduce a straightforward yet effective approach called Sketch-A-Shape for generating 3D shapes from sketches in a zero-shot setting using pre-trained vision models. Our method is based on the idea that 3D shape rendering features derived from large-scale pre-trained models (such as CLIP <cit.> and DINOv2 <cit.>) possess robust local semantic signals that can withstand domain shifts from renderings to sketches.
In Sketch-A-Shape, we first train a VQ-VAE to acquire shape embeddings. Following this, a masked transformer is trained to model the distribution of shape embeddings conditioned on local semantic features from an image encoder that is pre-trained and frozen. During inference, the masked transformer is conditioned on local semantic features of the sketch instead, in order to produce the 3D shape. Our findings suggest that, with certain architectural design choices, this straightforward method enables us to generate several 3D shapes that generalize across sketches of varying complexity.
To sum up, we make the following contributions:
* We propose Sketch-A-Shape, the first zero-shot approach for sketch-to-3D generation, leveraging large-scale pre-trained models to obviate the need for a paired sketch-3D dataset.
* We experimentally show the generalization capability of our method among various datasets (<ref>) with different levels of sketch abstraction, going from simple doodles to professional sketches.
* We conduct thorough experiments to examine the different components of Sketch-A-Shape that contribute to the success of zero-shot shape generation via sketch.
§ RELATED WORK
3D Generative Models.
Significant progress has been made in the field of generative models for the creation of 3D shapes in various formats such as voxels <cit.>, CAD <cit.>, implicit representations <cit.>, meshes <cit.>, and point clouds <cit.>. Recent research on 3D generative models has focused primarily on the development of generative models based on VQ-VAE <cit.>, GAN<cit.>, or diffusion models <cit.>.
The present study concentrates on connecting the sketch modality with 3D shapes across three different 3D representations: voxels, CAD, and implicit representation. Although our approach is based on VQ-VAE, it can be easily extended to GAN or diffusion-based generative models.
3D Zero-Shot Learning. Large pre-trained language and 2D vision models have been creatively used in several downstream 3D vision tasks. Initial works focused on using vision-text models such as CLIP <cit.> for 3D shape generation using text <cit.>, optimizing NeRFs <cit.>, deforming meshes <cit.>, stylizing meshes <cit.> and animating avatars <cit.>. More recently, text-to-image models such as Stable Diffusion <cit.> and Imagen <cit.> have been used for text-to-shape generation <cit.>, single-view reconstruction <cit.>, and adding texture to 3D shapes <cit.>. To the best of our knowledge, our work is the first to explore zero-shot 3D shape generation from sketches by leveraging a pre-trained model.
3D Shape Generation from Sketch.
Several supervised learning methods have been used to generate 3D shapes from sketches. Works such as <cit.> use a neural net to estimate depth and normals from a set of viewpoints for a given sketch, which are then integrated into a 3D point cloud. <cit.> proposes to use a CNN to predict the initial shape and then refine it from novel viewpoints using another neural network. Another work <cit.> represents the 3D shape and its occluding contours in a joint VAE latent space during training, which enables retrieving from a sketch during inference and generating a 3D shape. Sketch2Mesh <cit.> uses an encoder-decoder architecture to represent and refine a 3D shape to match the target external contour using a differentiable renderer. Methods such as <cit.> employ a domain adaptation network between unpaired sketch and rendered image data to boost performance on abstract hand-drawn sketches. To address the ambiguity problem of sketches, <cit.> introduces an additional encoder-decoder to extract separate view and shape sketch features, while <cit.> proposes a sketch translator module to fully exploit the spatial information in a sketch and generate suitable features for 3D shape prediction.
Recently, <cit.> trains a diffusion model for generation of 3D point clouds conditioned on sketches using a multi-stage training, and fine-tuning technique. However, we take the novel approach of not training on paired shape-sketch data at all and instead rely on the robustness of the local semantic features from a frozen large pre-trained image encoder such as CLIP.
§ METHOD
Our approach strives to generate 3D shapes from sketches of different complexities, solely employing synthetic renderings, and without the requirement of a paired dataset of sketches and shapes. The training data consists of 3D shapes, each denoted by 𝐒, which can be represented as a voxel grid, implicit (e.g. occupancy), or CAD, and their 𝐫 multi-view renderings (𝐈^1:r).
Our approach involves two training stages. In the first stage, the shapes are transformed into discrete sequences of indices (shape embeddings), denoted by 𝐙, using a discrete auto-encoder framework <cit.>. In the second stage, the distribution of these indices is modeled using a transformer-based generative model conditioned on features of shape renderings obtained from a frozen pre-trained model. These features form a grid of local features from the frozen pre-trained model, which is converted into a sequence of local features and provided to the transformer through a cross-attention mechanism.
During inference, we use an iterative decoding scheme <cit.> to generate the shape indices iteratively based on features of the sketch. Once the shape indices are generated, we can use the decoder of the auto-encoder to generate the 3D shape. The overall process is illustrated in Figure <ref> .
§.§ Stage 1: Training Discrete Auto-encoder
In the first stage, we use an auto-encoder framework to capture the shape distribution as a compressed sequence of discrete indices (shape embeddings) across various modalities. To achieve this, we opt for the Vector Quantized Variational Auto-encoder (VQ-VAE) architecture <cit.>, which efficiently models the 3D shape in a compressed latent space, circumventing posterior collapse and enabling the generation of high-quality 3D shapes. The dataset of 3D shapes 𝐒 is transformed using an encoder, E(.), into a sequence of discrete indices 𝐙, pointing to a shape dictionary, whose distribution is then modeled in stage 2 using a transformer-based generative model. This is shown below:
𝐙 = VQ(E(𝐒)), 𝐒^' = D(𝐙)
The shape 𝐒^' is then generated from 𝐙 using a decoder, D(.), with a reconstruction loss L_rec(S, S^'). We also use the commitment loss <cit.> to encourage the encoder output E(.) to commit to an embedding in the shape dictionary, and an exponential moving average strategy <cit.> to encourage dictionary entries to be gradually pulled toward the encoded features.
When dealing with voxel representation, we leverage a 3D convolution based on the ResNet architecture <cit.> for both the encoder and decoder network. Whereas with implicit representation, we rely on a ResNet-based encoder and an up-sampling process for the decoder that generates a higher resolution volume, which is then queried locally to obtain the final occupancy <cit.>. In the case of CAD representation, we use the SkexGen VQ-VAE architecture <cit.>. More details of the architectures are provided in the supplementary material.
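To make the quantization step concrete, the following PyTorch sketch (ours, for illustration only; names such as `vector_quantize` are not from any released code) shows the nearest-neighbor codebook lookup, the commitment term, and the straight-through gradient:

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """Quantize encoder outputs z_e (N, D) to their nearest entries in a (K, D) codebook.

    Returns the quantized latents (with a straight-through gradient),
    the discrete indices Z, and the commitment loss term.
    """
    # Squared L2 distance between each latent vector and every dictionary entry.
    dists = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ codebook.t()
             + codebook.pow(2).sum(1))
    indices = dists.argmin(dim=1)            # discrete shape embedding Z
    z_q = codebook[indices]                  # quantized latents

    # Commitment loss pulls the encoder output toward its chosen entry; with
    # EMA dictionary updates, the codebook itself is not trained by this loss.
    commit_loss = beta * F.mse_loss(z_e, z_q.detach())

    # Straight-through estimator: copy gradients from z_q back to z_e.
    z_q = z_e + (z_q - z_e).detach()
    return z_q, indices, commit_loss
```

The straight-through trick is what lets the encoder receive gradients through the non-differentiable argmin.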
§.§ Stage 2: Masked Transformer
The goal of stage 2 is to train a prior model which can effectively generate shape indices conditioned on a sketch at inference time.
We achieve this by modeling the sequence of discrete indices (shape embedding 𝐙) produced in stage 1 using a conditional generative model. We use a bi-directional Transformer-based network <cit.> which is conditioned on the features of the synthetic 3D renderings through a cross-attention mechanism.
During training, we randomly mask a fraction of shape indices with a special mask token <cit.> to produce 𝐙^msk. The model is then trained to fully unmask the masked indices with the help of the provided conditional information, minimizing:
L_mask= - 𝔼_Z,C ∈ D [log p(𝐙|𝐙^msk, 𝐂)]
Here, 𝐂 represents the conditional information from a given shape 𝐒 which are obtained from the multi-view image renderings of the 3D shape. At each iteration, we randomly sample a view to render an image of the 3D shape, which is then converted to local features using a locked pre-trained model.
The choice of pre-trained model is an important design criterion, which we investigate thoroughly in Section <ref>; we find that using large models trained on diverse data produces the most robust semantic local features, which in turn allow the domain shift from synthetic renderings to sketches.
The local features sequence can be obtained from different parts of the pre-trained network, which we investigate in Section <ref>. Our findings indicate that utilizing the feature grid output of the deeper layers in the pre-trained models yields better results: deeper layers generate more semantic features, and the grid structure preserves their local properties. We convert this grid into a sequence using a mapping network comprising several MLP layers. Finally, we add a learnable positional encoding to the resulting features before applying cross-attention with the shape indices' features at each transformer layer. The conditioning mechanism is likewise an important design choice, which we discuss in Section <ref>. Additionally, we replace the local features sequence with a null embedding sequence 5% of the time to allow for classifier-free guidance during inference.
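A minimal PyTorch sketch of this training step (ours; `transformer` and `mask_id` are illustrative placeholders, a uniform masking ratio stands in for the schedule, and a zero tensor stands in for the learned null sequence) could look as follows:

```python
import torch
import torch.nn.functional as F

def masked_training_step(transformer, z, cond, mask_id, p_uncond=0.05):
    """One training step of the stage-2 masked transformer.

    z:    (B, L) ground-truth shape indices from the stage-1 VQ-VAE.
    cond: (B, S, D) local-feature sequence from the frozen image encoder.
    """
    B, L = z.shape
    # Sample a masking ratio per example and mask that fraction of tokens.
    ratio = torch.rand(B, 1, device=z.device)
    mask = torch.rand(B, L, device=z.device) < ratio
    z_msk = z.masked_fill(mask, mask_id)

    # Drop the conditioning 5% of the time to enable classifier-free guidance.
    if torch.rand(()) < p_uncond:
        cond = torch.zeros_like(cond)   # stands in for the learned null sequence

    logits = transformer(z_msk, cond)   # (B, L, vocab)
    # Cross-entropy computed only over the masked positions.
    loss = F.cross_entropy(logits[mask], z[mask])
    return loss
```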
§.§ Inference
During the generation phase, we first convert the sketch into a sequence of local features using the same frozen pre-trained model utilized during training. These local features are semantically robust and serve as the conditioning query for the transformer. We employ an iterative decoding scheme with a cosine schedule, similar to the one proposed in Mask-GIT <cit.>. The process begins with a completely masked set of indices, which are gradually unmasked in parallel using the conditional information provided by the local features sequence from the sketch. At each time step, the transformer predicts the complete unmasked shape sequence, of which a specific fraction of the highest confidence masked tokens are accepted. These selected tokens are designated as unmasked for the remaining steps, while the rest of the tokens are reset to masked, except for the already unmasked tokens from the previous steps. For each time step, we also apply classifier-free guidance <cit.> with a guidance scale of 3. This process continues until all the tokens are unmasked. Finally, the completely unmasked tokens are converted into the 3D object using the shape decoder trained in stage 1. It is worth noting that we can restart the same process multiple times to generate different 3D shapes for the same sketch query.
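A simplified sketch of this decoding loop, under the settings stated above (15 steps, guidance scale 3), might look as follows; it illustrates the MaskGIT-style cosine schedule and guidance rather than reproducing our exact implementation:

```python
import math
import torch

@torch.no_grad()
def iterative_decode(transformer, cond, null_cond, L, mask_id, steps=15, scale=3.0):
    """MaskGIT-style parallel decoding with a cosine schedule and CFG."""
    z = torch.full((1, L), mask_id, dtype=torch.long, device=cond.device)
    for t in range(steps):
        # Classifier-free guidance: push logits away from the unconditional ones.
        logits = (1 + scale) * transformer(z, cond) - scale * transformer(z, null_cond)
        conf, pred = logits.softmax(-1).max(-1)
        # Tokens unmasked in earlier steps are kept and never reconsidered.
        pred = torch.where(z != mask_id, z, pred)
        conf = conf.masked_fill(z != mask_id, float("inf"))

        # Cosine schedule: number of tokens that remain masked after this step.
        keep_masked = int(L * math.cos(math.pi / 2 * (t + 1) / steps))
        keep_idx = conf.topk(L - keep_masked, dim=-1).indices
        new_z = torch.full_like(z, mask_id)
        new_z.scatter_(1, keep_idx, pred.gather(1, keep_idx))
        z = new_z
    return z  # fully unmasked indices, decoded to a 3D shape by the stage-1 decoder
```

Restarting the loop with a different random seed yields a different plausible shape for the same sketch.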
§ EXPERIMENTS
In this section, we present the results of our experiments evaluating the accuracy and fidelity of the generated output produced by our model. We conducted each experiment three times for each metric and reported the average result for each. The experimental setup details are provided in the supplementary material with additional results that may be of interest.
Training Dataset.
Our experimentation utilizes two subsets of the ShapeNet(v2) dataset <cit.>. The first subset, ShapeNet13, consists of 13 categories from ShapeNet, which were also employed in previous studies <cit.>. In line with Sketch2Model <cit.>, we adopt the same train/val/test partition. The second subset, ShapeNet55, includes all 55 categories of ShapeNet and we follow the same split as <cit.>. We use the DeepCAD <cit.> dataset to train our CAD model.
Evaluation Sketch Dataset. One advantage of our method is that it is not trained on paired (shape, sketch) datasets. Therefore, to comprehensively evaluate its performance, we test it on sketch datasets that range from professional to non-expert sketches. Specifically, we utilize the ShapeNet-Sketch dataset <cit.>, which comprises 1300 free-hand sketches across ShapeNet13. In addition, we employ the ImageNet-Sketch dataset <cit.>, which contains 50 sketch images for each of 1000 ImageNet classes obtained from Google, encompassing a range of professional to non-expert sketches. Moreover, we utilize the TU-Berlin Sketch dataset <cit.>, which includes 20,000 non-expert sketches of 250 object categories. Lastly, the QuickDraw dataset <cit.> is a collection of 50 million drawings across 345 categories, contributed by players of the game Quick, Draw! <cit.>.
The ImageNet-Sketch, TU-Berlin Sketch, and QuickDraw datasets also lack ground-truth 3D models, so for these datasets we only utilize the categories shared with ShapeNet. To evaluate our CAD model we use synthetic edge-map sketches, but we do not train the model using edge maps as augmentation.
Evaluation Metrics. To evaluate our method on different sketch datasets we use two metrics: classification accuracy and human evaluation which are outlined below.
* Classifier Accuracy. As we are dealing with sketch data that lacks ground-truth 3D models, we use the Accuracy (Acc) metric to ensure that the generated shape for a given sketch corresponds to its category. To achieve this, we employ a pre-trained shape classifier, as implemented in <cit.>. We use this metric for all datasets: ImageNet-Sketch <cit.>, TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, and QuickDraw <cit.>, and refer to it as IS-Acc, TU-Acc, SS-Acc, and QD-Acc, respectively. As our method can generate multiple shapes per sketch query, we report the mean across 5 sampled shapes for a given sketch query; a minimal sketch of this metric follows the list.
* Human Perceptual Evaluation. We also use Amazon SageMaker Ground Truth and crowd workers from the Mechanical Turk workforce <cit.> to evaluate how well our generated 3D models preserve important geometric and stylistic details from the sketches.
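As referenced above, the classifier-accuracy metric reduces to a simple loop; in this sketch, `generate` and `classify` are illustrative stand-ins for our sampling pipeline and the pre-trained shape classifier:

```python
def classifier_accuracy(sketches, generate, classify, num_samples=5):
    """Fraction of generated shapes whose predicted class matches the sketch label.

    sketches: iterable of (sketch, label) pairs; five shapes are sampled per query.
    """
    correct = total = 0
    for sketch, label in sketches:
        for _ in range(num_samples):
            correct += int(classify(generate(sketch)) == label)
            total += 1
    return correct / total
```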
§.§ Qualitative Results
In <ref>, we visualize sample generated 3D shapes in different representations such as voxel, implicit, and CAD from sketches of different domains. As shown, our method performs reasonably well on different types of sketches (from simple to professional drawings), even in the presence of the ambiguity (such as the viewing angle) inherent to 2D sketches.
§.§ Human Perceptual Evaluation
In addition to generating shapes in the same broad object category as abstract hand drawn sketches, our method is also able to incorporate geometric and stylistic details from a sketch or concept design into the final 3D model. To demonstrate this quantitatively, we run a human perceptual evaluation using Amazon SageMaker Ground Truth and crowd workers from the Mechanical Turk workforce <cit.>. We evaluate 691 generated models, conditioned on sketches from TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, ImageNet-Sketch <cit.> and QuickDraw <cit.>.
The human evaluation is posed as a two-alternative forced choice study <cit.>. The crowd workers are shown images with a sketch on the left hand side and two images of generated 3D models on the right hand side. An example is shown in <ref>. One of the generated models was conditioned on the sketch shown, while the other was conditioned on a randomly selected sketch from the same object category. The crowd workers are asked the question “Which of the 3D models on the right hand side best matches the sketch on the left hand side?". The study is designed to measure the extent to which humans perceive our generated 3D models as preserving the shape and stylistic details presented in the sketch, as opposed to simply creating a model from the same object category.
We show each image to 7 independent crowd workers and count the number of images for which 4 or more of them correctly identify the 3D model which was conditioned on the sketch. The results are shown in <ref>. On average, 71.1% of our generated 3D models are correctly identified by a majority of the crowd workers. We note that the sketches in TU-Berlin and ShapeNet-Sketch give rise to generations which were easier for the crowd workers to identify, with 74.9% and 73.1% being selected correctly. While these sketches often have a high level of abstraction, they communicate enough detail about the shape for our method to create distinctive 3D models which humans can identify. While ImageNet-Sketch contains superior artwork, often with shading, shadows and other cues to the 3D nature of the shapes, many of the pictures contain full scenes with backgrounds and additional superfluous details. This makes the generation of single objects more challenging, which is reflected by the fact that only 68.1% are correctly identified by the crowd workers. We note qualitatively that in cases where shaded sketches do not contain backgrounds or additional clutter the generated results look better, indicating the utility of our method for quickly generating 3D models from concept designs. The sketches in the QuickDraw dataset are sourced from the online game Quick, Draw! <cit.>, in which contributors are asked to draw a shape in less than 20 seconds. QuickDraw is the most abstract and noisy sketch dataset, with many of the sketches being drawn with a computer mouse. While our method typically generates 3D shapes of the correct category, only 67.9% of the generations are correctly identified by the crowd workers.
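The aggregation of crowd-worker answers reduces to a majority count per image; a sketch of the computation (data layout assumed, not taken from our evaluation scripts):

```python
def majority_identified(ratings, threshold=4):
    """ratings: per-image lists of 7 booleans (True = the rater picked the model
    conditioned on the shown sketch). Returns the fraction of images for which
    a majority of the crowd workers chose correctly."""
    hits = sum(1 for per_image in ratings if sum(per_image) >= threshold)
    return hits / len(ratings)
```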
§.§ Comparison with Supervised Models
As there is currently a lack of zero-shot methodologies for generating shapes from sketches, we compared our results to those of a supervised approach called Sketch2Model <cit.>, which was trained on a dataset of paired sketch-3D shapes. We evaluated both methods using our classifier accuracy metric, and the results are presented in <ref>. Our model was not exposed to any sketch-3D pairings during training, but it displays superior generation capabilities compared to Sketch2Model across different datasets.
We attribute this difference in performance to several factors. Firstly, we believe that Sketch2Model may be more effective for single-category training rather than for the 13 categories in the ShapeNet dataset. Additionally, because Sketch2Model is a supervised method, it was not exposed to out-of-distribution sketches during training, which may have caused its performance to deteriorate. We provide further details and qualitative comparison with Sketch2Model and other supervised methods in the supplementary material.
§.§ Investigating Pre-Trained Models
This section presents an extensive study of several pre-trained models that are open-sourced and trained on different datasets. The results are presented in Table <ref>.
We investigate three main questions in this experiment.
First, we investigate the importance of utilizing local grid features of pre-trained models. Typically, pre-trained models possess a global projection vector that is employed for downstream tasks like classification. We compare the efficacy of conditioning our generative model with the global projection vector (row 1) versus the local grid features (row 2). Our findings demonstrate that leveraging local grid features yields better performance compared to the global projection vector for most of the datasets. Furthermore, even from a visual standpoint, we observe that local grid features preserve more local details. It is worth noting that these accuracies are further improved by utilizing classifier-free guidance (CFG), as illustrated in row 3.
Next, we investigate the role of size of pre-trained models and find that increasing the size of the same class of pre-trained model, despite being trained on the same data, results in better zero-shot performance. This phenomenon is evident in the case of the ViT-based <cit.> CLIP model, where upgrading from the B-32 model to the L-14 model yields a significant improvement in performance. This trend is also observed in the ResNet-based <cit.> models. Interestingly, it is worth mentioning that the ResNet-based <cit.> models perform worse than their corresponding ViT-based <cit.> CLIP models. This could be attributed to the ResNet models' emphasis on high-frequency, textural features <cit.>.
Finally, we explore how different datasets impact the training of these models. Our findings indicate that the model's performance remains comparable when trained on extensive datasets such as LAION-2B <cit.>, DINOv2 Dataset <cit.> or OpenAI internal dataset <cit.>. However, when we reduce the dataset size significantly, such as in the case of the masked autoencoder <cit.> trained on 400 times less data from ImageNet <cit.>, its performance significantly declines. Despite being trained on the reconstruction objective, we believe that the masked autoencoder's performance drop is primarily due to the significantly reduced dataset size, as it still performs reasonably well on this task. Additionally, it is important to highlight that language supervision is unnecessary to acquire resilient features from extensive pre-trained models, as demonstrated by the outcomes of DINOv2.
§.§ Accuracy across Different Layers
In this experiment, we explore the optimal layer of the vision transformer (L-14 model) from which to extract the local conditioning features. Table <ref> summarizes our findings. We note that the features extracted from deeper layers of the vision transformer contain more significant semantic information, leading to higher accuracy. Moreover, since we visually observe local semantic generation, this indicates that the model maintains the positional correlation between patches instead of treating them as global information repositories.
§.§ Design Choices for Conditioning
Table <ref> presents our investigation into the impact of the mapping network's size and the attention mechanism used for conditioning the image features to the transformer. Our results show that incorporating a mapping layer does enhance the model's performance, with the optimal number of MLP layers being two. Furthermore, our findings suggest that cross-attention with a learnable positional embedding is the most effective conditioning mechanism, as evidenced by the deteriorated performance when removing the positional embedding or using self-attention, shown in the last two rows of the table.
§.§ Effect of Augmentation
In our final investigation, we explore whether the addition of data augmentation improves the accuracy of shape generation across datasets. The results are summarized in Table <ref>. We make two noteworthy observations. Firstly, even without data augmentation, our method performs relatively well, indicating the robustness of pre-trained models. Secondly, different types of augmentations have a more significant impact on certain datasets than others. For instance, affine transformation significantly enhances the performance of QuickDraw and ImageNet Sketch, while canny edge augmentation is more effective for the ShapeNet Sketch dataset. Consequently, we decide to train a network with all augmentations and find that, on balance across datasets, it performs the best.
§ CONCLUSION
In this paper, we demonstrate how a 3D generative model conditioned on local features from a pre-trained large-scale image model such as CLIP can be used to generate 3D shapes from sketches. We show how this method can generate multiple shapes for sketches of different levels of abstraction and can be applied to multiple 3D representations.
Future work will involve training on much larger and more diverse 3D shape datasets and, consequently, testing on different styles of sketches and levels of detail.
Supplementary Material
§ HUMAN PERCEPTUAL EVALUATION
In Subsection 4.2 (main paper) we show the results of the human perceptual study broken down according to the dataset which the target sketch came from. In addition, the results can be broken down based on the object category of the target sketch, as shown in <ref>. We see a wide range of performance across the different object categories, with “lamps" being correctly identified 89.1% of the time, while phones are identified just 52.5% of the time, little better than random. Categories which perform well, such as chair, table and sofa, tend to have distinctive shapes which are easy to communicate with sketches. Categories like airplane and gun produce good models, but these are not distinctive enough for the human evaluators to distinguish the correct 3D model from a random model in the same category. Lower performance on these categories may also relate to the difficulty of drawing objects of these types. We believe adding texture could further improve the human perceptual results.
As each generated model is rated by 7 individual crowd workers, we can count the number of raters who correctly identified the generated model, giving us a “shape recognizability score" from 0 to 7. In <ref> we show examples from selected categories with the highest and lowest “shape recognizability scores". For the “airplane" category the least recognizable model appears to be in the wrong category, due to the unusual orientation of the sketch. The most and least recognizable sketches in the “bench" category both come from the ImageNet-Sketch dataset. The sketch for the most recognizable model contains a single bench, while the sketch for the least recognizable model also contains background elements like trees and a lamppost. For the “gun" category the most recognizable model is actually from a sketch which looks nothing like a gun, while the least recognizable model is a generic gun which does not closely follow the shape of the sketch. The figure shows how the human evaluation measures both the ability of our method to generate distinctive shapes reflecting the geometry of the sketches and the general generation quality.
§ COMPARISON WITH SUPERVISED METHODS
§.§ Quantitative comparison
We evaluate the quality of generated shapes on the ShapeNet-Sketch dataset <cit.> using Intersection over Union (IOU) with 32^3 voxel shapes, as shown in <cit.>. This is the only dataset among the four we assessed that includes ground truth 3D voxels. We compare our results to those of other supervised methods presented in Table <ref>, exactly as in <cit.>. Our generative model generates 5 shapes based on a given sketch query in the ShapeNet-Sketch dataset and averages the IOU results. Although our method is not trained on any sketch data, it outperforms the supervised baseline. This indicates that the pre-trained model's learned features are effective in enabling our method to generate 3D shapes using sketches in a zero-shot manner.
§.§ Qualitative comparison
We additionally provide a qualitative comparison with Sketch2Model <cit.> and SketchSampler <cit.>.
For this comparison, we considered diverse sketches with different levels of abstraction in the same classes of ShapeNet from four datasets: TU-Berlin <cit.>, ShapeNet-Sketch <cit.>, ImageNet-Sketch <cit.>, and QuickDraw <cit.>. Implementation details can be found in <ref>. Results are in <ref>.
We can see that the meshes reconstructed by Sketch2Model often capture the overall sketch shape, but they appear too smooth and lack geometric details. This method was originally intended for a single-category scenario, as presented in the paper; however, this is often impractical.
Similarly, SketchSampler fails to generalize to abstract or out-of-distribution sketches. The resulting point clouds present artifacts and outliers, especially along the viewing direction of the sketch (shape proportions are only preserved when the point clouds are seen from this point of view). Unlike our approach, SketchSampler is designed for professional sketches only, with reliable shapes and fine-grained details.
Thus, it cannot deal with sketches that exhibit significant deformation or only express conceptual ideas, like the ones in QuickDraw <cit.>.
§ ARCHITECTURE AND EXPERIMENT DETAILS
Training Details. We use the Adam Optimizer <cit.> with a fixed learning rate of 1e-4 for training. The network is trained for 300 epochs during Stage 1 and for 250 epochs during Stage 2. We do not employ any learning rate scheduler during training. We train the 32^3 voxel model solely on the ShapeNet13 dataset, while the Implicit model is trained on the ShapeNet55 subset. The CAD model is trained on the DeepCAD dataset <cit.>. This is done to demonstrate the versatility and adaptability of our method to different datasets.
Stage 1 Details. For both the Implicit VQ-VAE and 32^3 VQ-VAE we use a codebook size of 512, a grid size of 8^3 and embedding dimensions of size 64. We employ the ResNet architecture for the 32^3 VQ-VAE, for both the encoder and decoder. In the case of Implict VQ-VAE, we use the ResNet architecture for the encoder whereas we use a decoder that produces a higher resolution volume, which is then queried locally to obtain the final occupancy <cit.>.
The pretrained VQ-VAE from SkexGen <cit.> is used for the CAD representation which is composed of three Transformer encoders and decoders for the topology, geometry and extrusions of a CAD model. The models output 4+2+4=10 codes, with a total codebook size of 1000.
Stage 2 Details.
For Stage 2, we utilize a bidirectional Transformer with 8 attention blocks, 8 attention heads, and a token size of 256. We use 24 renderings <cit.> for both the ShapeNet13 and ShapeNet55 experiments. During inference, we run the Transformer for 15 steps with classifier-free guidance, and the scale parameter is set to 3. The CLIP ViT-L/14 model is employed in all experiments, except in Table 3 of the main paper, where we conduct an ablation study over different pre-trained models. For all experiments, except Table 4, we incorporate cross-attention with learnable positional embedding and a mapping network consisting of 2 layers of MLP. We do not apply any augmentation for the quantitative experiments, except for the results presented in Table 6 and Table 2 of the main paper.
For the CAD results, we used a CLIP ViT-B/32 model.
Sketch2Model.
The authors of Sketch2Model released ShapeNet-Synthetic as the training dataset <cit.>, which consists of synthetic sketches of objects from 13 categories from ShapeNet. These objects have been rendered from 20 different views. For training Sketch2Model, we used the official implementation provided in <cit.>, along with the recommended hyperparameters. This implementation uses a step-type learning rate policy, beginning from 1e-4 and decreasing by 0.3 every 800 epochs, and trains for 2000 epochs with the Adam optimizer.
We trained the model on all 13 categories of ShapeNet-Synthetic using the same training/test split of the original paper.
SketchSampler.
This method employs as training dataset Synthetic-LineDrawing <cit.>, a paired sketch-3D dataset based on 3D models from ShapeNet. In our experiments, we used the official implementation, cited in the original paper <cit.>. In particular, we used the pre-trained model released by the authors, and pre-processed the input sketches to be in the same format as the Synthetic-LineDrawing ones.
§ COMPARISON WITH POINT·E
Furthermore, we conducted a comparison between our work and Point·E <cit.>, as illustrated in the table provided below (Row 1). The results clearly demonstrate the superior performance of our method, indicating the merit of our design choices.
§ NATURAL IMAGES RESULTS
We explored the applicability of our method to natural images, as it is robust to domain shifts between renderings and sketches. The outcomes are depicted in Figure <ref>, indicating the efficacy of our method on natural images, including those with backgrounds. We believe that this finding will be of interest to the single-view reconstruction community.
§ FAILURE CASES
This section demonstrates the limitations of our method, as illustrated in Figure <ref>. The outcomes reveal that our method encounters difficulties in generalizing to shapes that are beyond those present in ShapeNet13, as depicted in the first row. Furthermore, our method also faces challenges when dealing with sketches that depict multiple shapes, as shown in the second row. Lastly, our method experiences difficulties in accurately reproducing the local details of shapes, which we consider to be an intriguing direction for future work.
§ SOCIETAL IMPACT
The societal impact of Sketch-to-3D technology can be significant in various fields such as architecture, product design, gaming, and entertainment. With the help of Sketch-to-3D technology, designers and architects can create realistic 3D models quickly and efficiently, reducing the overall time and cost of the design process.
However, it is important to note that the widespread adoption of Sketch-to-3D technology could also lead to job displacement in certain industries. As with any technological advancement, it is crucial to consider the potential social and economic impacts and work towards ensuring a smooth transition for workers and communities affected by such changes.
§ FUTURE WORK
For future work, we aim to extend this method to larger 3D datasets. Additionally, we think that enhancing the Stage 1 VQ-VAE can help preserve the local details of the 3D shape. Lastly, an intriguing avenue to explore would be to combine sketch with text conditioning, resulting in a more adaptable generative model.
§ MORE QUALITATIVE RESULTS
Additional results are provided in Figure <ref> and Figure <ref>.
|
http://arxiv.org/abs/2308.01916v1 | 20230709040958 | Semi Supervised Meta Learning for Spatiotemporal Learning | [
"Faraz Waseem",
"Pratyush Muthukumar"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
CMDFusion: Bidirectional Fusion Network with Cross-modality Knowledge Distillation for LIDAR Semantic Segmentation
^1Authors are with Cheng Kar-Shun Robotics Institute, The Hong Kong University of Science and Technology, Hong Kong SAR, China. {jcenaa}@connect.ust.hk, {cqf}@ust.hk.
^2Authors are with Alibaba Group, China. {zhangjin.zsw, zh334251, luomaochun.lmc, yingya.zyy}@alibaba-inc.com, {lk158400}@cainiao.com.
^3Authors are with the SMILES LAB at the School of Information and Communication Engineering, Xi'an Jiaotong University, Xi'an, China. {peiyixuan}@stu.xjtu.edu.
^*Work done as an intern at Alibaba DAMO Academy.
Jun Cen^1,2*, Shiwei Zhang^2, Yixuan Pei^3, Kun Li^2, Hang Zheng^2, Maochun Luo^2, Yingya Zhang^2, Qifeng Chen^1
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Labeled data is hard to come by in the real world. Moreover, a majority of available data comes in the form of video and visual media.
Recent advancements in representation learning have shown great successes in learning rich representations from a variety of inputs including text, images, and videos.
However, these state-of-the-art architectures are data-intensive, whereas meta learning architectures possess unique capabilities of learning new tasks from diverse training tasks and corresponding labels in the few-shot regime.
We apply semi-supervised meta learning to video data for learning spatiotemporal patterns.
We extend work on Masked Autoencoders (MAEs) utilizing the Vision Transformer (ViT) architecture for scalable self-supervised learning in the spatiotemporal domain.
We approached the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning in three steps.
Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures.
Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework.
Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks.
Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks.
Finally, we experiment with applying a pre-trained MAE and fine-tune with MANN backbone for action classification tasks.
To execute our experiments, we generate a custom small-scale video dataset of 518 human-action classes consisting of 24,927 video clips and human-generated annotations, sourced from the MiniKinetics-200 and TinyVIRAT datasets. We also modify the ViT backbone in existing MAE architectures for small-scale datasets by applying Shifted Patch Tokenization (SPT) to combat the lack of locality inductive bias available in small-scale datasets.
Our experimental results show that fine-tuning on our custom small-scale video dataset outperforms existing pre-trained MAE architectures on video reconstruction tasks. Further, we find that training an MAE encoder with a small-scale ViT backbone on our small-scale video dataset for action classification tasks converges steadily. Finally, we find that applying a pre-trained MAE and fine-tuning with an MANN backbone for action classification tasks is effective on our small-scale video dataset test tasks.
§ INTRODUCTION
Recent advancements in deep learning including the Transformer architecture have shown great success in both vision and language domains learning rich representations from a variety of inputs including text, images, and videos (https://arxiv.org/abs/1706.03762ref: attention is all you need). Models such as BERT have shown success in the semi-supervised regime in denoising messy data and extracting high level embeddings from partially labeled datasets (https://arxiv.org/abs/1810.04805ref: bert). However, real-world labeled data in the format of videos is scarce and unstructured. State-of-the-art representation learning architectures have shown great success in the vision domain in extracting high-level features from images for reconstruction or classification tasks, however, these models require massive amounts of annotated vision data.
The field of meta learning has shown promise in learning high-level features from data in the few shot regime. Moreover, applying meta-learning to existing supervised learning architectures has been shown to allow for more data-efficient models while preserving generalizability to unseen tasks and datasets (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks
). We propose applying semi-supervised meta-learning to video data for learning spatiotemporal patterns. We believe that wrapping existing state-of-the-art self-supervised representation learning architectures within a meta-learning framework will allow our architecture to both improve sample efficiency and generalize well to unseen data, particularly in the application of spatiotemporal learning on video datasets. Specifically, we perform experiments in the style of an ablation study to compare the performances of existing representation learning architectures for video data alone, existing self-supervised meta learning frameworks for video data alone, and our formulation of applying meta learning to representation learning architectures for video data classification tasks.
In addition to considering the effectiveness of applying meta learning towards existing representation learning architectures, we perform modifications to perform experiments with the scope of this project. That is, we scale down the vision transformer (ViT) backbone within the existing representation learning architecture for training on our custom small-scale video dataset. We generate this dataset consisting of video clips describing human-object interactions as well as corresponding human-generated annotations.
In this project, we make the following contributions:
* We collect a custom small-scale human-object video dataset built as a composite dataset from existing human-object video sources upon which we preprocess.
* We apply the meta-learning framework to existing self-supervised representation learning architectures and apply our model to downstream tasks including video reconstruction and action classification
* We perform an ablation study to understand the impact of applying meta-learning to existing self-supervised representation learning architectures on action classification accuracy and video reconstruction loss
§ RELATED WORKS
Prior work in the field of representation learning has shown successes in learning rich representations from vision and language domains. Particularly, autoencoder architectures have been proven to be effective in extracting representations from text and images. (https://arxiv.org/abs/2111.06377ref: masked autoencoders are scalable vision learners) proposed applying masked autoencoders (MAEs) for self-supervised learning for vision. By masking random patches of the input images and pre-training an autoencoder to reconstruct the missing pixels, they found that the architecture was able to perform well on the ImageNet dataset compared to similar self-supervised models. Moreover, their architecture was more efficient and scalable for larger models such that transfer performance in downstream tasks outperformed supervised pre-training models. They noted that a masking ratio larger than 75% masked pixels in an image poses as a non-trivial task to current state-of-the-art vision models.
(https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners) builds off of this work by applying masked autoencoders for video data to learn spatiotemporal patterns. The masking process follows similarly from above, however random spacetime patches of videos are masked out rather than pixels during the pre-training step. Their results showed that a masked autoencoder with a masked ratio of 90% outperforms supervised pre-training approaches by a wide margin on both benchmark datasets and real-world video data.
Meta-learning has shown effectiveness in generalizing well to unseen data with sample-efficient architectures in the few-shot regime. One such implementation of meta-learning is the Memory Augmented Neural Network (MANN) architecture proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). The authors propose a black-box meta-learning framework with a two-part architecture: a controller implemented as a sequence model – they utilize an LSTM architecture in their implementation – and an external memory module with reading and writing heads implemented with a Neural Turing Machine (NTM) (https://arxiv.org/abs/1410.5401ref: neural turing machines). The LSTM sequence model helps the model learn quickly from data with a small number of examples.
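For intuition, a content-based memory read of the kind used in NTM/MANN can be sketched in a few lines of PyTorch; this is a simplification of the full addressing scheme, which also includes sharpening and write heads:

```python
import torch
import torch.nn.functional as F

def content_read(memory, key):
    """Content-based read from an external memory.

    memory: (N, M) matrix of N memory slots; key: (M,) query from the controller.
    The read weights are a softmax over cosine similarities, and the read
    vector is the weighted sum over memory rows.
    """
    sim = F.cosine_similarity(memory, key.unsqueeze(0), dim=-1)  # (N,)
    w = sim.softmax(dim=0)                                       # read weights
    return w @ memory                                            # (M,) read vector
```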
In our review of this space, we have not found existing work applying meta-learning alone towards self-supervised spatiotemporal learning. However, prior research has been done on applying self-supervised meta learning for natural language classification tasks (https://aclanthology.org/2020.emnlp-main.38/href: self-supervised meta-learning for few-shot natural language classification tasks).
Current vision models have become increasingly powerful since the widespread application of the Transformer architecture. The Vision Transformer (ViT) family of architectures, extended to video by (https://arxiv.org/abs/2103.15691ref: ViViT: a video vision transformer), builds upon the self-attention mechanism proposed by (https://arxiv.org/abs/1706.03762ref: attention is all you need) for learning complex high-dimensional representations from image datasets. This family of architectures relies on large amounts of image data, typically on the scale of hundreds of gigabytes of labelled images, to train large architectures with hundreds of millions of parameters.
Some work has been done on scaling down these large-scale ViT architectures while preserving the learned high-level representations. (https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets) proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA) as methods to combat the lack of locality inductive bias available in small-scale datasets.
Existing work applying representation learning architectures such as MAEs with ViT backbones shows impressive performance on video classification and video reconstruction tasks, but is limited in real-world applications due to the data requirements of these sample-inefficient architectures. Current research on small-scale ViT architectures performs well on image classification tasks, but has yet to be extended to video data or applied in the regime of self-supervised learning.
§ METHODS
We approach the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning using MANNs (memory-augmented neural networks), in a fashion similar to that proposed by (https://dl.acm.org/doi/10.5555/3045390.3045585href: meta-learning with memory-augmented neural networks). In our case, we utilize the masked autoencoder (MAE) approach for initial pre-training, and then fine-tune using the MANN approach, with the MAE encoder as the backbone of the sequence model. In our implementation, we utilize a ViT sequence model scaled down and trained on our small-scale video dataset. We scale down the ViT backbone within the MAE encoder and decoder following the method proposed by (https://arxiv.org/abs/2112.13492ref: vision transformer for small-size datasets), though their implementation focuses on image data.
We consider the MAE method proposed by (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
) as a baseline for testing the performance of a state-of-the-art classification algorithm that does not use meta learning. We then train the MANN architecture with the ViT backbone end-to-end to evaluate the performance of a solely meta-learning based approach. Finally, we test our proposed combination of MAE with MANN fine-tuning to test if the MAE architecture in combination with meta-learning approaches is more effective in learning spatiotemporal patterns.
One benefit of applying meta-learning in this domain is that, if we assume videos of humans interacting with objects share some high-level structure, we can combine video clips from various human-object interaction datasets, allowing us to pre-train on more data. These combinations of benchmarks allow us to pinpoint whether applying meta-learning with MAE is effective for spatiotemporal learning, as well as the individual contributions of each.
To summarize, we devised a three-stage approach to reaching our proposed goals:
* Apply pre-trained MAE and fine-tune for video reconstruction downstream task
* Train MANN with MAE encoder on small-scale dataset and apply classification head for action classification downstream task
* Apply pre-trained MAE and fine-tune with MANN backbone for action classification downstream task
Figure <ref> visualizes the model architectures for each of the three approaches we implement.
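As a high-level summary, the following pseudocode sketch combines the three regimes; every function name here is hypothetical and merely stands in for the corresponding component described above:

```python
def approach_1(videos):
    """Fine-tune a pre-trained video MAE for reconstruction."""
    mae = load_pretrained_mae()              # Kinetics-400 checkpoint
    finetune(mae, videos, objective="reconstruction")
    return mae

def approach_2(videos, labels):
    """Train a small-scale MAE encoder, then attach a classification head."""
    mae = train_mae(videos, backbone="small_vit_spt")
    clf = add_classification_head(mae.encoder, num_classes=26)
    finetune(clf, videos, labels)
    return clf

def approach_3(videos, tasks):
    """Use the pre-trained MAE encoder as the backbone of a MANN and
    fine-tune episodically on few-shot action-classification tasks."""
    mae = load_pretrained_mae()
    mann = build_mann(encoder=mae.encoder)
    meta_train(mann, tasks)
    return mann
```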
§ EXPERIMENTS
For the first approach in our technical method, we fine-tune the pre-trained MAE on our small-scale dataset and evaluate against the baseline video MAE model pre-trained on Kinetics-400. We utilize a pre-trained MAE architecture sourced from the authors of the video MAE architecture trained with the ViT-Large backbone on Kinetics-400 with a masking ratio of 90% and 1600 effective epochs (https://arxiv.org/abs/2205.09113ref: masked autoencoders as spatiotemporal learners
).
For the second approach in our technical method, we train the MAE autoencoder with our small-scale ViT and fine-tune with a classification head on our small-scale composite dataset. We run experiments training the full video MAE as well as the video MAE outfitted with a classification head. Additionally, we evaluate training the video autoencoder with and without masking to analyze the difference in training loss and classification accuracy. Note that the autoencoders used for training in this set of experiments utilize our small-scale ViT backbone, which implements Shifted Patch Tokenization (SPT) to preserve locality-specific representations typically lost with small-scale datasets. Further, since the original work proposing small-scale ViT architectures implemented a small-scale ViT for image classification rather than video classification, we extend their work by adding spacetime attention to the small-scale ViT architecture in order to support 3D video data in the format of a time-indexed series of 2D images.
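A sketch of such a tokenizer applied frame-wise is shown below; module and argument names are ours, and torch.roll is a simple stand-in for the zero-padded shifts of the original SPT paper:

```python
import torch
import torch.nn as nn

class ShiftedPatchTokenizer(nn.Module):
    """Shifted Patch Tokenization adapted to video frames.

    Each frame is concatenated with four half-patch diagonal shifts of itself
    before patch embedding, widening the receptive field of every token.
    """
    def __init__(self, in_ch=3, patch=8, dim=192):
        super().__init__()
        self.patch = patch
        self.proj = nn.Conv2d(in_ch * 5, dim, kernel_size=patch, stride=patch)

    def forward(self, x):                     # x: (B*T, C, H, W) flattened frames
        s = self.patch // 2
        shifts = [(-s, -s), (-s, s), (s, -s), (s, s)]
        views = [x] + [torch.roll(x, shift, dims=(2, 3)) for shift in shifts]
        x = torch.cat(views, dim=1)           # (B*T, 5C, H, W)
        return self.proj(x).flatten(2).transpose(1, 2)  # token sequence per frame
```

With 64x64 frames and patch size 8, this yields 64 tokens per frame, which are then processed jointly across frames by the spacetime-attention blocks.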
§.§ Datasets
For our experiments, we seek to perform spatiotemporal learning on video datasets. Initially, we started by utilizing the Kinetics-400 video dataset consisting of 400 human-action classes, each with at least 400 video clips (https://arxiv.org/abs/1705.06950ref: the kinetics human action video dataset). In total, the dataset consists of 306,245 video clips, each around 10 seconds in length with a resolution of 224 x 224 pixels. However, the size of this dataset is over 300 GB, and while it can be effectively used for the ViT-base backbone with 84,943,656 parameters within the MAE encoder of the existing state-of-the-art representation learning architecture for video learning, it was not a feasible dataset within the scope of our project. Instead, we developed a small-scale ViT backbone within the MAE encoder architecture which has 3,109,008 parameters. Correspondingly, we sought to scale down the video dataset used for training our small-scale ViT backbone.
One aspect we considered while building our dataset is that, since we apply the MANN meta-learning framework for self-supervised spatiotemporal learning, we can combine multiple datasets of varying action-class distributions into a composite dataset where each unique action class can be treated as a new task during black-box adaptation with the MANN architecture. As a result, we were not limited to a single data source when constructing our small-scale dataset; instead, we utilized human-action video clips and annotations from a variety of input sources. In a semi-supervised dataset, labels are sparse; hence we hypothesize that a meta-learning based approach that learns quickly from a small number of examples can excel where standard fine-tuning may not be sufficient.
Our composite small-scale video dataset was sourced from the Kinetics-400, MiniKinetics-200, and TinyVIRAT datasets. MiniKinetics-200 is a subset of the Kinetics dataset consisting of the 200 human-action classes with the most training examples and TinyVIRAT is a video dataset containing real-life tiny actions in videos collected from low resolution video cameras consisting of 12829 video clips. Our small-scale video dataset contains 24,927 video clips amongst 518 human-action classes. Each video clip in our dataset consists of 100 frames at a temporal resolution of 10 FPS, meaning that each clip is around 10 seconds in length. We scale all clips in our dataset to a resolution of 64x64 pixels to perform efficient training and achieve our project goals with the computational resources available to us. All spatial and temporal resolution downscaling was performed using the OpenCV Python package.
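A sketch of this preprocessing with OpenCV (simplified, assuming a constant source frame rate) is shown below:

```python
import cv2
import numpy as np

def preprocess_clip(path, num_frames=100, fps_out=10, size=64):
    """Downscale a source clip to 100 frames at roughly 10 FPS and 64x64 resolution."""
    cap = cv2.VideoCapture(path)
    fps_in = cap.get(cv2.CAP_PROP_FPS) or fps_out
    step = max(1, round(fps_in / fps_out))   # temporal subsampling factor
    frames, i = [], 0
    while len(frames) < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frames.append(cv2.resize(frame, (size, size)))
        i += 1
    cap.release()
    return np.stack(frames) if frames else None  # (T, 64, 64, 3) BGR uint8
```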
We split our dataset into training and testing splits such that we reserve 18406 videos over 414 action classes for training and 6521 videos over 104 action classes for testing. For our implementation utilizing meta-learning for self-supervised spatiotemporal learning, each human-action class can be formulated as a distinct task, so the task training-testing split is roughly 80-20. Kinetics-400, MiniKinetics-200, and TinyVIRAT all include human-generated annotations of the video clips, which assign the individual video clips to action classes.
§ RESULTS
For the first approach in our technical outline, we provide cross entropy loss results of a pre-trained MAE fine-tuned on our small-scale video dataset against a pre-trained baseline MAE architecture trained on Kinetics-400. For the sake of brevity, we provide experimental results for every 20th frame in the 100 frame video samples of our small-scale video dataset. We evaluate the pre-trained MAE baseline against our fine-tuned MAE model on the testing set of our small-scale video dataset consisting of 6521 100-frame video clips of 64x64 pixel resolution over 104 human-action classes. Table <ref> describes the averaged cross entropy loss for every 20th frame in the 100-frame video clips across the test set for our fine-tuned model compared against the pre-trained MAE baseline. The overall averaged cross entropy loss for all 100-frames across the test set in our pre-trained model was 0.1776, whereas the pre-trained MAE baseline was 0.1781.
We also provide a video reconstruction visualization for a single video in the testing split of our small-scale video dataset. Since we cannot show all 100 frames of this video reconstruction, we show a visualization of every 20th video frame reconstructed by our fine-tuned model in Figure <ref>.
For the second approach in our technical outline, we evaluate training our modified video MAE architecture with a small-scale ViT backbone end-to-end as well as training with a classification head attached for action classification tasks. These experiments were conducted on the TinyVIRAT dataset with 26 action classes, so we can formulate the experimental setting as a 26-way multi-class classification task. The end-to-end video MAE architecture with a small-scale ViT backbone contains 3.1 million parameters, while the video MAE architecture with the classification head contains 2.7 million parameters.
The top-1 accuracy for the end-to-end video MAE architecture with a small-scale ViT backbone was 37% and the top-5 accuracy was 75%. Figures <ref> and <ref> describe the training and validation curves of this end-to-end model. Note that since we do not normalize the loss value with the number of examples in the batch, the magnitude of the loss is not necessarily indicative of the model performance.
Additionally, we evaluate training the video autoencoder outfitted with a classification head, with and without masking, for our 26-way multi-class classification task. We consider a masking ratio of 80% when implementing masking. We find that the top-5 performance on the TinyVIRAT dataset is 76% with masking and 74.5% without masking. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head with masking implemented. Figures <ref> and <ref> describe the training and validation curves for the video autoencoder with a classification head trained without masking. Figures <ref> and <ref> describe the validation-split accuracy curve over training for the masked autoencoder and the autoencoder without masking, respectively.
When using a video autoencoder with shifted patch tokenization and a reduced number of parameters, in only 10 epochs of pretraining and 10 epochs of fine-tuning we reach 46.8% top-1 accuracy, which is significantly higher than the previous methods we tested, indicating the importance of shifted patch tokenization and of not masking during the fine-tuning phase.
§ CONCLUSION
To summarize, we apply self-supervised meta-learning for spatiotemporal learning on video data. We extend existing representation learning architectures for vision and video data and apply meta-learning through the black-box Memory Augmented Neural Network (MANN) architecture. We evaluate the effectiveness of applying MANN alongside Masked Auto Encoders (MAE) by tackling our goals for this project in a three stage approach.
Firstly, we experiment with fine-tuning a pre-trained MAE architecture on our custom small-scale video dataset. This small-scale video dataset is built by combining multiple human-action video datasets, namely TinyVIRAT, Kinetics-400, and MiniKinetics-200. Evaluating our fine-tuned model against a pre-trained MAE baseline shows that our model outperforms the pre-trained MAE architecture in terms of averaged cross-entropy loss across all frames of the test-split videos, with a value of 0.1776 compared to the baseline's 0.1781. However, since the difference between these two values is negligible – our fine-tuned model outperforms the baseline by 0.3% – we note that fine-tuning a pre-trained MAE architecture on our small-scale video dataset alone does not yield a significant improvement. We anticipated these results and hypothesize that, because the pre-trained model is very large and trained on hundreds of gigabytes of Kinetics-400 data whereas we fine-tune on a small-scale dataset of fewer than 25,000 video clips, fine-tuning this architecture directly will not have a noticeable impact on predictive power. Nevertheless, our fine-tuned model slightly outperforms the baseline pre-trained MAE architecture, though the difference is not significant enough to suggest a trend.
Next, we experiment with training an end-to-end video MAE architecture with a modified small-scale ViT backbone. We evaluated this architecture on the TinyVIRAT dataset and formulated the problem as a 26-way multi-class video classification problem. The top-1 accuracy score was 37% and the top-5 accuracy score was 75%. We believe this is a significant accomplishment because the majority of existing benchmarks for the TinyVIRAT challenge utilize very large encoder architectures with hundreds of millions of parameters. However, we are able to achieve competent results on the TinyVIRAT dataset with a small-scale ViT backbone with just 3 million parameters.
Finally, we experiment with training a video auto encoder architecture with a classification head and evaluating the effect of masking. We similarly evaluated both the masked and non-masked architectures on the TinyVIRAT 26-way multi-class video classification task and find that the top-5 performance for the masked auto encoder architecture with an 80% masking ratio was 76% and for the auto encoder without masking was 74.5%. Comparatively, this shows that applying masking to the architecture improves action-class classification task performance. However, with just 50 epochs used for training, we would need to continue running experiments and fine-tune the masking ratio hyperparameter to confirm this trend.
§ FUTURE WORK
In the future, we want to experiment with fine-tuning the MANN architecture with and without a pre-trained video MAE. Another test we want to try is to replace MANN with other meta-learning implementations such as Model-Agnostic Meta-Learning (MAML), proposed by (https://arxiv.org/abs/1703.03400ref: model-agnostic meta-learning for fast adaptation of deep networks). We can also experiment with integrating text signals, such as utilizing BERT pre-trained embeddings generated on descriptions of videos, in the action-class classification setting. We have made significant contributions to the TinyVIRAT codebase and could contribute to open-source implementations by releasing our codebase for the small-scale video MAE and meta-learning capabilities. Additionally, we have introduced a hook to export latent video-frame representations, which can be used in future work by us and others. We believe we have created very useful building blocks for building more advanced vision transformers for the spatiotemporal learning domain.
@article{vaswani2017attention,
  title={Attention is all you need},
  author={Vaswani, Ashish and Shazeer, Noam and Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and Gomez, Aidan N and Kaiser, Łukasz and Polosukhin, Illia},
  journal={Advances in neural information processing systems},
  volume={30},
  year={2017}
}
@article{devlin2018bert,
  title={Bert: Pre-training of deep bidirectional transformers for language understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}
@inproceedings{finn2017model,
  title={Model-agnostic meta-learning for fast adaptation of deep networks},
  author={Finn, Chelsea and Abbeel, Pieter and Levine, Sergey},
  booktitle={International conference on machine learning},
  pages={1126–1135},
  year={2017},
  organization={PMLR}
}
@article{kay2017kinetics,
  title={The kinetics human action video dataset},
  author={Kay, Will and Carreira, Joao and Simonyan, Karen and Zhang, Brian and Hillier, Chloe and Vijayanarasimhan, Sudheendra and Viola, Fabio and Green, Tim and Back, Trevor and Natsev, Paul and others},
  journal={arXiv preprint arXiv:1705.06950},
  year={2017}
}
@article{lee2021vision,
  title={Vision transformer for small-size datasets},
  author={Lee, Seung Hoon and Lee, Seunghyun and Song, Byung Cheol},
  journal={arXiv preprint arXiv:2112.13492},
  year={2021}
}
@inproceedings{xie2018rethinking,
  title={Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification},
  author={Xie, Saining and Sun, Chen and Huang, Jonathan and Tu, Zhuowen and Murphy, Kevin},
  booktitle={Proceedings of the European conference on computer vision (ECCV)},
  pages={305–321},
  year={2018}
}
@inproceedings{demir2021tinyvirat,
  title={Tinyvirat: Low-resolution video action recognition},
  author={Demir, Ugur and Rawat, Yogesh S and Shah, Mubarak},
  booktitle={2020 25th International Conference on Pattern Recognition (ICPR)},
  pages={7387–7394},
  year={2021},
  organization={IEEE}
}
@inproceedings{he2022masked,
  title={Masked autoencoders are scalable vision learners},
  author={He, Kaiming and Chen, Xinlei and Xie, Saining and Li, Yanghao and Dollár, Piotr and Girshick, Ross},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16000–16009},
  year={2022}
}
@inproceedings{arnab2021vivit,
  title={Vivit: A video vision transformer},
  author={Arnab, Anurag and Dehghani, Mostafa and Heigold, Georg and Sun, Chen and Lučić, Mario and Schmid, Cordelia},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={6836–6846},
  year={2021}
}
@article{bansal2020self,
  title={Self-supervised meta-learning for few-shot natural language classification tasks},
  author={Bansal, Trapit and Jha, Rishikesh and Munkhdalai, Tsendsuren and McCallum, Andrew},
  journal={arXiv preprint arXiv:2009.08445},
  year={2020}
}
@article{feichtenhofer2022masked,
  title={Masked Autoencoders As Spatiotemporal Learners},
  author={Feichtenhofer, Christoph and Fan, Haoqi and Li, Yanghao and He, Kaiming},
  journal={arXiv preprint arXiv:2205.09113},
  year={2022}
}
@inproceedings{santoro2016meta,
  title={Meta-learning with memory-augmented neural networks},
  author={Santoro, Adam and Bartunov, Sergey and Botvinick, Matthew and Wierstra, Daan and Lillicrap, Timothy},
  booktitle={International conference on machine learning},
  pages={1842–1850},
  year={2016},
  organization={PMLR}
}
@article{graves2014neural,
  title={Neural turing machines},
  author={Graves, Alex and Wayne, Greg and Danihelka, Ivo},
  journal={arXiv preprint arXiv:1410.5401},
  year={2014}
}
|
http://arxiv.org/abs/2307.05547v1 | 20230709055046 | Robust Routing Made Easy: Reinforcing Networks Against Non-Benign Faults | [
"Christoph Lenzen",
"Moti Medina",
"Mehrdad Saberi",
"Stefan Schmid"
] | cs.DC | [
"cs.DC"
] |
Robust Routing Made Easy:
Reinforcing Networks Against Non-Benign Faults
Research supported by the Federal Ministry of Education and Research (BMBF), grant 16KISK020K, 2021-2025.
This article extends work presented at SSS
2017 <cit.>.
Christoph Lenzen^1 Moti Medina^2 Mehrdad Saberi^3 Stefan Schmid^4
^1CISPA Helmholtz Center for Information Security, Germany ^2Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
^3University of Maryland, College Park, USA ^4TU Berlin, Germany
August 12, 2023
With the increasing scale of communication networks,
the likelihood of failures grows as well.
Since these networks form a critical backbone
of our digital society, it is important that they rely on
robust routing algorithms which ensure connectivity
despite such failures. While most modern communication
networks feature robust routing mechanisms, these mechanisms
are often fairly complex to design and verify, as they
need to account for the effects of failures and rerouting
on communication.
This paper conceptualizes the design of robust routing mechanisms,
with the aim to avoid such complexity. In particular,
we showcase simple and generic blackbox transformations that increase the resilience of routing against independently distributed failures, which allow us
to simulate the routing scheme on the original network, even in the presence
of non-benign node failures (henceforth called faults). This is attractive
as the system specification and routing policy can simply be preserved.
We present a scheme for constructing such a reinforced network, given
an existing (synchronous) network and a routing scheme. We prove that
this algorithm comes with small constant overheads, and only requires a minimal
amount of additional node and edge resources;
in fact, if the failure probability is smaller than 1/n,
the algorithm can come without any overhead at all.
At the same time,
it allows to tolerate a large number of
independent random (node) faults,
asymptotically almost surely.
We complement our analytical results with simulations on different real-world topologies.
§ INTRODUCTION
Communication networks have become a critical backbone
of our digital society. For example, many datacentric applications
related to entertainment, social networking, or health, among others,
are distributed and rely on the high availability and
dependability of the interconnecting network (e.g., a
datacenter network or a wide-area network).
At the same time, with the increasing scale of
today's distributed and networked systems (often relying
on commodity hardware as a design choice
<cit.>), the number of
failures is likely to increase as well
<cit.>.
It is hence important that communication networks can tolerate
such failures and
remain operational despite the failure of some of their
components.
Robust routing mechanisms aim to provide such guarantees:
by rerouting traffic quickly upon failures,
reachability is preserved. Most communication
networks readily feature robust routing mechanisms,
in the control plane (e.g.
<cit.>), in
the data plane (e.g. <cit.>), as well as on higher
layers (e.g. <cit.>).
However, the design of such robust routing mechanisms is
still challenging and comes with tradeoffs, especially if
resilience should extend to multiple failures <cit.>.
Besides a fast reaction time and re-establishing connectivity, the
resulting routes typically need to fulfill certain additional properties,
related to the network specification and policy.
Ensuring such properties however can be fairly complex,
as packets inevitably follow different paths after failures.
Interestingly, while the problem of how to re-establish reachability
after failures is well explored,
the problem of providing specific properties on the failover
paths is much less understood.
This paper conceptualizes the design of robust routing, presenting a new approach to robust routing which conceptually differs
significantly from existing literature by relying on proactive reinforcement (rather than reaction to failures).
In particular, our approach aims to overcome the complexities involved in designing
robust routing algorithms, by simply sticking to the original
network and routing specification.
To achieve this, our approach is to mask the effects of failures
using redundancy: in the spirit of error correction,
we proactively reinforce networks by adding a minimal number of
additional nodes and links, rather than
coping with failed components when they occur.
The latter is crucial
for practicability: significant refactoring of existing systems
and/or accommodating substantial design constraints is rarely
affordable.
In this paper, to ensure robustness while maintaining
the network and routing specification, we aim to
provide a high degree of fault-tolerance,
which goes beyond simple equipment and failstop failures,
but accounts for more general faults which include non-benign
failures of entire nodes.
While our approach presented in this paper will be general
and applies to any network topology, we are particularly
interested in datacenter networks (e.g., based on low-dimensional
hypercubes or d-dimensional tori <cit.>)
as well as in wide-area
networks (which are typically sparse <cit.>).
We will show that our approach works especially well for these networks.
§.§ The Challenge
More specifically,
we are given a network G=(V,E) and a routing scheme, i.e.,
a set of routes in G.
We seek to reinforce the network G by
allocating additional resources, in terms of nodes and edges,
and to provide a corresponding routing strategy to simulate the routing scheme
on the original network despite non-benign node failures.
The main goal is to maximize the probability that the network withstands
failures (in particular, random failures of entire nodes),
while minimizing the resource overhead.
Furthermore, we want to ensure that the network transformation is simple
to implement, and that it interferes as little as possible with the existing system design and operation, e.g., it
does not change the reinforced system's specification.
Toward this goal, in this paper, we make a number of simplifying assumptions.
First and most notably, we assume independent failures,
that is, we aim at masking faults with little or no correlation among each other.
Theoretically, this is motivated by the fact that
guaranteeing full functionality despite having f adversarially placed faults trivially requires redundancy (e.g., node degrees) larger than f.
There is also practical motivation to consider independent faults:
many distributed systems proactively avoid fault clusters
<cit.> and there is also empirical
evidence that in certain scenarios, failures are only weakly correlated <cit.>.
Second, we treat nodes and their outgoing links as fault-containment regions (according to <cit.>), i.e., they are the basic components our systems are comprised of.
This choice is made for the sake of concreteness;
similar results could be obtained when considering, e.g., edge failures, without changing the gist of results or techniques.
With these considerations in mind, the probability of uniformly random
node failures that the reinforced system can tolerate is a canonical choice for measuring resilience.
Third, we focus on synchronous networks, for
several reasons:
synchrony not only helps in handling faults, both on the theoretical level (as illustrated by the famous FLP theorem <cit.>) and for ensuring correct implementation, but it also
simplifies presentation, making it easier to focus on the proposed concepts.
In this sense, we believe
that our approach is of particular interest in the context of real-time systems,
where the requirement of meeting hard deadlines makes synchrony an especially attractive choice.
§.§ Contributions and Techniques
This paper proposes a novel and simple approach to robust routing,
which decouples the task of designing a reinforced network from the task of
designing a routing scheme over the input network. By virtue of this decoupling,
our approach supports arbitrary routing schemes and objectives,
from load minimization to throughput maximization and beyond,
in various models of computation, e.g., centralized or distributed, randomized
or deterministic, online or offline, or oblivious.
We first consider a trivial approach:
we simply replace each node by ℓ∈ copies
and for each edge we connect each pair of copies of its endpoints,
where ℓ is a constant.[Choosing concreteness over generality,
we focus on the, in our view, most interesting case of constant ℓ. It is straightforward to generalize the analysis.]
Whenever a message would be sent over an edge in the original graph,
it should be sent over each copy of the edge in the reinforced graph.
If not too many copies of a given node fail, this enables each receiving copy to recover the correct message.
Thus, each non-faulty copy of a node can run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph.
When analyzing this approach,
we observe that asymptotically almost surely (a.a.s., with probability 1-o(1)) and with ℓ=2f+1, this reinforcement can sustain Byzantine node failures <cit.> occurring with independent probability p, for any p∈ o(n^-1/(f+1)), i.e., faulty nodes may violate the protocol in any arbitrary way (and may hence also collude).
This threshold is sharp up to (small) constant factors: for p∈ω(n^-1/(f+1)), a.a.s. there is some node for which all of its copies fail.
If we restrict the fault model to omission faults
(faulty nodes may skip sending some messages but otherwise act according to the protocol), ℓ=f+1 suffices.
The cost of this reinforcement is that the number of nodes and edges increase by factors of ℓ and ℓ^2, respectively.
Therefore, already this simplistic solution can support non-crash faults of probability p∈ o(1/√(n)) at a factor-4 overhead.
We note that the simulation introduces no large computational overhead and
does not change the way the system works, enabling to use it as a blackbox.
Also randomized algorithms can be simulated in a similar fashion,
provided that all copies of a node have access to a shared source of randomness.
Note that this requirement is much weaker than globally shared randomness:
it makes sense to place the copies of a node in physical proximity to approximately preserve the geometrical layout of the physical realization of the network topology.
Our approach above raises the question whether
we can reduce the involved overhead further.
In this paper, we will answer this question positively:
We propose to apply the above strategy only to a small
subset E' of the edge set.
Denoting by v_1,…,v_ℓ the copies of node v∈ V, for
any remaining edge {v,w}∈ E∖ E' we add only edges
{v_i,w_i}, i∈ [ℓ], to the reinforced graph.
The idea is to choose E' in a way such that the connected components
induced by E∖ E' are of constant size, yet |E'|=ε |E|.
This results in the same asymptotic threshold for p, while the number of edges of the reinforced graph drops to ((1-ε)ℓ+εℓ^2)|E|.
For any constant choice of ε, we give constructions with this property for grids or tori of constant dimension and minor-free graphs of bounded degree.
Again, we consider the case of f=1 of particular interest:
in many typical network topologies, we can reinforce the network to boost the failure probability that can be tolerated from Θ(1/n) to Ω(1/√(n)) by roughly doubling (omission faults) or tripling (Byzantine faults) the number of nodes and edges.
The redundancy in this second construction is near-optimal under the constraint that we want to simulate an arbitrary routing scheme in a blackbox fashion,
as it entails that we need a surviving copy of each edge, and thus in particular each node.
In many cases, the paid price will be smaller than the price for making each individual component sufficiently reliable to avoid this overhead.
Furthermore, we will argue that the simplicity of our constructions enables us to re-purpose the redundant resources in applications with less strict reliability requirements.
Our results show that while our approach is general and can be applied to any
existing network topology (we will describe and analyze valid reinforcements for
our fault models on general graphs), it can be refined and is particularly
interesting in the context of networks that
admit suitable partitionings. Such networks include
sparse, minor-free graphs, which are practically relevant topologies in
wide-area networks, as well as torus graphs and low-dimensional
hypercubes, which arise in datacenters and parallel architectures.
To complement our theoretical findings and investigate the reinforcement
cost in real networks, we conducted experiments on the Internet Topology Zoo <cit.>.
We find that our approach achieves robustness at significantly lower cost compared to
the naive replication strategy often employed in dependable networks.
§.§ Putting Things Into Perspective
In contrast to much existing robust routing literature on reactive
approaches to link failures <cit.> (which come with a delay),
we consider a proactive approach by enhancing the network with redundancy.
Our proactive approach also allows us to replicate the routing scheme (and hence the network policy) on the new network.
In particular, we show that if the failure probability is smaller than 1/n, there is a good probability that our approach works even without any overhead at all.
Furthermore, there are two ways in which our system can be used. One approach is to replicate the entire node (including the compute part), and then forward the traffic to its two associated peers. Alternatively, traffic can also simply be replicated to multiple NICs, without additional compute requirements, depending on the failure model. More generally, our contribution can also be viewed more abstractly, with the robust routing happening on a logical level, depending on the failure scenario.
Also, we show that in the absence of a valid message, it can simply be ignored, as the rest of the system continues to perform correctly.
The most closely related work to ours is NetCo <cit.>,
which also relies on network reinforcement and can handle malicious behavior.
NetCo is based on a robust
combiner concept known from cryptography, and complements each router with two additional routers.
Using software-defined networking, traffic is replicated across the three (untrusted) devices and then merged again, using a consensus algorithm. While a high degree of robustness is achieved, the three-fold overhead is significant. More importantly, however, in contrast to our approach, Netco requires special hardware for splitting and merging the traffic; while the functionality of this hardware can be simple, it still needs to be trusted. The consensus requirement dramatically reduces the throughput, as shown in the empirical evaluation of NetCo in <cit.>.
Our solution does not require such components and is hence not only more practical but also significantly more performant.
§.§ Organization
In <ref>, we sketch the properties of our approach and state a number of potential applications. In <ref>, we formalize the fault models that we tackle in this article alongside the notion of a valid reinforcement and its complexity measures. In <ref> and <ref>, we study valid reinforcements on general graphs, and in <ref>, we study more efficient reinforcements for specific graphs.
We complement our analytical results with an empirical simulation study in
<ref>.
In <ref> we raise a number of points in favor of the reinforcement approach. We review related work in
<ref>, and we conclude and present a number of interesting
follow-up questions in <ref>.
§ HIGH-LEVEL OVERVIEW: REINFORCING NETWORKS
Let us first give an informal overview of our blackbox transformation
for reinforcing networks (for formal specification see <ref>), as well as its guarantees and preconditions.
Assumptions on the Input Network
We have two main assumptions on the network at hand: (1) We consider synchronous routing networks, and (2) each node in the network (alongside its outgoing links) is a fault-containment region, i.e., it fails independently from other nodes.
We do not make any assumptions on the network topology, but will provide specific
optimizations for practically relevant topologies (such as sparse, minor-free networks
or hypercubes) in <ref>.
Valid Reinforcement Simulation Guarantees
Our reinforcements create a number of copies of each node. We have each non-faulty copy of a node run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph. Moreover, the simulation fully preserves all guarantees of the schedule, including its timing, and introduces no big computational overhead.
This assumption is simple to meet in stateless networks, while it requires synchronization primitives in case of stateful network functions.
Unaffected Complexity and Cost Measures
Routing schemes usually revolve around objective functions such as load minimization, maximizing the throughput, minimizing the latency, etc., while aiming to minimize complexity related to, e.g., the running time for centralized algorithms, the number of rounds for distributed algorithms, the message size, etc. Moreover, there is the degree of uncertainty that can be sustained, e.g., whether the input to the algorithm is fully available at the beginning of the computation (offline computation) or revealed over time (online computation). Our reinforcements preserve all of these properties, as they operate in a blackbox fashion. For example, our machinery readily yields various fault-tolerant packet routing algorithms in the Synchronous Store-and-Forward model by Aiello et al. <cit.>. More specifically, from <cit.> we obtain a centralized deterministic online algorithm on unidirectional grids of constant dimension that achieves a competitive ratio which is polylogarithmic in the number of nodes of the input network w.r.t. throughput maximization. Using <cit.> instead, we get a centralized randomized offline algorithm on the unidirectional line with constant approximation ratio w.r.t. throughput maximization. In the case that deadlines need to be met, the approximation ratio is, roughly, O(log^* n) <cit.>. As a final example, one can obtain from <cit.> various online distributed algorithms with sublinear competitive ratios w.r.t. throughput maximization.
Cost and Gains of the Reinforcement
The price of adding fault-tolerance is given by the increase in the network size, i.e., the number of nodes and edges of the reinforced network in comparison to the original one. Due to the assumed independence of node failures, it is straightforward to see that the (uniform) probability of sustainable node faults increases roughly like n^-1/(f+1) in return for (i) a linear-in-f increase in the number of nodes and (ii) an increase in the number of edges that is quadratic in f. We then proceed to improve the construction for grids and minor-free constant-degree graphs to reduce the increase in the number of edges to being roughly linear in f. Based on this information, one can then assess the effort in terms of these additional resources that is beneficial, as less reliable nodes in turn are cheaper to build, maintain, and operate. We also note that, due to the ability of the reinforced network to ensure ongoing unrestricted operability in the presence of some faulty nodes, faulty nodes can be replaced or repaired before communication is impaired or breaks down.
Preprocessing
Preprocessing is used, e.g., in computing routing tables in Oblivious Routing <cit.>.
The reinforcement simply uses the output of such a preprocessing stage in the same manner as the original algorithm. In other words, the preprocessing is done on the input network and its output determines the input routing scheme. In particular, the preprocessing may be randomized and does not need to be modified in any way.
Randomization
Randomized routing algorithms can be simulated as well, provided that all copies of a node have access to a shared source of randomness. We remark that, as our scheme locally duplicates the network topology, it is natural to preserve the physical realization of the network topology in the sense that all (non-faulty) copies of a node are placed in physical proximity. This implies that this constraint is much easier to satisfy than globally shared randomness.
§ PRELIMINARIES
We consider synchronous routing networks.
Formally, the network is modeled as a directed graph G=(V,E), where V is the set of n≜ |V| vertices, and E is the set of m≜ |E| edges (or links).
Each node maintains a state, based on which it decides in each round for each of its outgoing links which message to transmit.
We are not concerned with the inner workings of the node, i.e., how the state is updated;
rather, we assume that we are given a scheduling algorithm performing the task of updating this state and use it in our blackbox transformations.
In particular, we allow for online, distributed, and randomized algorithms.
Probability-p Byzantine Faults (Byz(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol in arbitrary ways, including delaying, dropping, or forging messages, etc.
Probability-p Omission Faults (Om(p))
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol by not sending a message over an outgoing link when they should. We note that it is sufficient for this fault model to be satisfied logically. That is, as long as a correct node can identify incorrect messages, it may simply drop them, resulting in the same behavior of the system at all correct nodes as if the message was never sent.
Simulations and Reinforcement
For a given network G=(V,E) and a scheduling algorithm A, we will seek to reinforce (G,A) by constructing G'=(V',E') and scheduling algorithm A' such that the original algorithm A is simulated by A' on G', where G' is subject to random node failures. We now formalize these notions. First, we require that there is a surjective mapping P:V'→ V; fix G' and P, and choose F'⊆ V' randomly as specified above.
Assume that in each round r∈ℕ, each v'∈ V'∖ F' is given the same input by the environment as P(v'). A' is a simulation of A under Byz(p), if for each v∈ V, a strict majority of the nodes v'∈ V' with P(v')=v computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if not only for each v∈ V there is a strict majority doing so, but all v'∈ V'∖ F' compute the state of P(v') in each round.
Assume that in each round r∈ℕ, each v'∈ V' is given the same input by the environment as P(v'). A' is a simulation of A under Om(p), if for each v∈ V, there is v'∈ V' with P(v')=v that computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if each v'∈ V' computes the state of P(v') in each round.
A (strong) reinforcement of a graph G=(V,E) is a graph G'=(V',E'), a surjective mapping P: V'→ V, and a way of determining a scheduling algorithm A' for G' out of scheduling algorithm A for G. The reinforcement is valid under the given fault model (Byz(p) or Om(p)) if A' is a (strong) simulation of A a.a.s.
*Resources and Performance Measures.
We use the following performance measures.
* The probability p of independent node failures that can be sustained a.a.s.
* The ratio ν≜ |V'|/|V|, i.e., the relative increase in the number of nodes.
* The ratio η≜|E'|/|E|, i.e., the relative increase in the number of edges.
We now briefly discuss, from a practical point of view, why we do not explicitly consider further metrics that are of interest.
§.§ Other Performance Measures
* Latency:
As our reinforcements require (time-preserving) simulation relations, in terms of rounds, there is no increase in latency whatsoever.
However, we note that (i) we require all copies of a node to have access to the input (i.e., routing requests) of the simulated node and (ii) our simulations require to map received messages in G' to received messages of the simulated node in G.
Regarding (i), recall that it is beneficial to place all copies of a node in physical vicinity, implying that the induced additional latency is small.
Moreover, our constructions naturally lend themselves to support redundancy in computations as well, by having each copy of a node perform the tasks of its original;
in this case, (i) comes for free.
Concerning (ii), we remark that the respective operations are extremely simple;
implementing them directly in hardware is straightforward and will have limited impact on latency in most systems.
* Bandwidth/link capacities.
We consider the uniform setting in this work.
Taking into account how our simulations operate, one may use the ratio η as a proxy for this value.
* Energy consumption.
Regarding the energy consumption of links, the same applies as for bandwidth.
The energy nodes use for routing computations is the same as in the original system, except for the overhead induced by Point (ii) we discussed for latency.
Neglecting the latter, the energy overhead is in the range [min{ν,η},max{ν,η}].
* Hardware cost.
Again, neglecting the computational overhead of the simulation, the relative overhead lies in the range [min{ν,η},max{ν,η}].
In light of these considerations, we focus on p, ν, and η as key metrics for evaluating the performance of our reinforcement strategies.
§ STRONG REINFORCEMENT UNDER BYZ(P)
We now present and analyze valid reinforcements under Byz(p) on general graphs.
Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and set ℓ = 2f+1.
Reinforced Network G'
We set V'≜ V× [ℓ], where [ℓ]≜{1,…,ℓ}, and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
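For concreteness, the following sketch builds this reinforced graph; it is an illustration of the construction under the assumption that the graph is given as plain node and edge lists (any graph representation would do).

def reinforce(nodes, edges, f):
    """Fully replicated reinforcement: ell = 2f + 1 copies per node,
    and each original edge becomes all ell^2 pairs of copies."""
    ell = 2 * f + 1
    new_nodes = [(v, i) for v in nodes for i in range(1, ell + 1)]
    new_edges = [((v, i), (w, j))
                 for (v, w) in edges
                 for i in range(1, ell + 1)
                 for j in range(1, ell + 1)]
    return new_nodes, new_edges

# Example: a directed triangle with f = 1, i.e., ell = 3.
V = ["a", "b", "c"]
E = [("a", "b"), ("b", "c"), ("c", "a")]
V2, E2 = reinforce(V, E, f=1)
print(len(V2), len(E2))  # 9 27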
Strong Simulation A' of A
Consider node v'∈ V'∖ F'. We want to maintain the invariant that in each round, each such node has a copy of the state of v=P(v') in A. To this end, v'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the message that has been sent to v' by at least f+1 of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step requires such a majority to exist; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
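The decoding rule in the last step is a plain majority vote over the received copies; a minimal sketch (our illustration; the message representation is an assumption):

from collections import Counter

def majority_message(received, f):
    """Return the message sent by at least f+1 of the 2f+1 copies of a
    neighbor, or None if no such majority exists (simulation failure)."""
    if not received:
        return None
    msg, count = Counter(received).most_common(1)[0]
    return msg if count >= f + 1 else None

# With f = 1 and ell = 3, a single Byzantine copy cannot break the vote.
print(majority_message(["M", "M", "forged"], f=1))  # M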
If for each v∈ V, |{v_i∈ F'}|≤ f, then A' strongly simulates A.
We show the claim by induction on the round number r∈ℕ, where we consider the initialization to anchor the induction at r=0. For the step from r to r+1, observe that because all v'∈ V'∖ F' have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Accordingly, each v'∈ V'∖ F' receives
the message A would send over (w,v) ∈ E
from each w'∈ V'∖ F' with P(w')=w (via the link (w',v')). By the assumption of the lemma, we have at least ℓ-f=f+1 such nodes, implying that v' updates the local copy of the state of A as if it received the same messages as when executing A in round r+1. Thus, the induction step succeeds and the proof is complete.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
If p ∈ o(n^-1/(f+1)), the above construction is a valid strong reinforcement for the fault model Byz(p). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f. If p ∈ o(n^-1/(f+1)), using ℓ=2f+1 and a union bound we see that the probability of this event is at least
1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j(1-p)^{2f+1-j}
≥ 1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j
≥ 1-n\binom{2f+1}{f+1}p^{f+1}∑_{j=0}^{f}p^j
≥ 1-n(2e)^f·p^{f+1}/(1-p) = 1-o(1).
Here, the second to last step uses that \binom{a}{b}≤ (ae/b)^b and the final step exploits that p∈ o(n^-1/(f+1)).
For the second claim, assume w.l.o.g. p≤ 1/3, as increasing p further certainly increases the probability of the system to fail. For any v∈ V, the probability that |{v_i∈ F'}|> f is independent of the same event for other nodes and larger than
\binom{2f+1}{f+1}p^{f+1}(1-p)^f ≥ (3/2)^f p^{f+1}(1-p)^f ≥ p^{f+1},
since \binom{a}{b}≥ (a/b)^b and 1-p≥ 2/3. Hence, if G contains Ω(n) nodes v with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the probability that there is such a node v for which |{v_i∈ F'}|> f is at least
1-(1-p^{f+1})^{Ω(n)} ⊆ 1-(1-ω(1/n))^{Ω(n)} = 1-o(1).
If there is such a node v, there are algorithms A and inputs so that A sends a message across some edge (v,w) in some round. If faulty nodes do not send messages in this round, the nodes w_i∈ V'∖ F' do not receive the correct message from more than f nodes v_i and the simulation fails. Hence, the reinforcement cannot be valid.
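To make the bound tangible, a small numeric check (parameters chosen purely for illustration) evaluates the exact per-node failure probability and the resulting union bound:

from math import comb

def node_fails(p, f):
    """Probability that more than f of the 2f+1 copies of a node fail."""
    ell = 2 * f + 1
    return sum(comb(ell, j) * p**j * (1 - p)**(ell - j)
               for j in range(f + 1, ell + 1))

n, f = 10_000, 1
for p in (0.5 / n**0.5, 0.1 / n**0.5):
    print(f"p={p:.5f}: union bound on network failure <= {n * node_fails(p, f):.4f}")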
For constant p, one can determine suitable values of f∈Θ(log n) using Chernoff's bound. However, as our focus is on small (constant) overhead factors, we refrain from presenting the calculation here.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = ℓ^2 = 4f^2 + 4f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes.
However, η = 9, i.e., while the number of edges also increases only by a constant, it seems too large in systems where the limiting factor is the amount of links that can be afforded.
§ STRONG REINFORCEMENT UNDER OM(P)
The strong reinforcement from the previous section is, trivially, also a strong reinforcement under Om(p). However, we can reduce the number of copies per node for the weaker fault model. Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and, this time, set ℓ = f+1.
Reinforced Network G'
We set V'≜ V× [ℓ] and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Each node[Nodes suffering omission failures still can simulate A correctly.] v'∈ V'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the (unique) message that has been sent to v' by some of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step assumes that some such neighbor sends a message and all w' with P(w')=w send the same such message; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
If for each v∈ V, |{v_i∈ F'}|≤ f, A' strongly simulates A.
Analogous to the one of Lemma <ref>, with the difference that faulty nodes may only omit sending messages and thus a single correct copy per node is sufficient.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
The above construction is a valid strong reinforcement for the fault model Om(p) if p ∈ o(n^-1/(f+1)). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f = ℓ -1. For v∈ V,
Pr[ |{v_i | i∈ [ℓ]}∩ F'| = ℓ ] = p^{f+1}.
By a union bound, A' thus simulates A with probability 1-o(1) if p∈ o(n^-1/(f+1)).
Conversely, if there are Ω(n) nodes with non-zero outdegree and p∈ω(n^-1/(f+1)), with probability 1-o(1) all copies of at least one such node v are faulty. If v sends a message under A, but all corresponding messages of copies of v are not sent, the simulation fails. This shows that in this case the reinforcement is not valid.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = ℓ^2 = f^2 + 2f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and quadrupling the number of edges.
§ MORE EFFICIENT REINFORCEMENT
In this section, we reduce the overhead in terms of edges at the expense of obtaining reinforcements that are not strong. We stress that the obtained trade-off between redundancy (ν and η) and the sustainable probability of faults p is asymptotically optimal: as we need to preserve arbitrary routing schemes in a blackbox fashion, we require sufficient redundancy on the link level to directly simulate communication. From this observation, both for Byz(p) and Om(p) we can readily derive trivial lower bounds on redundancy that match the constructions below up to lower-order terms.
§.§ A Toy Example
Before we give the construction, we give some intuition on how we can reduce the number of required edges. Consider the following simple case. G is a single path of n vertices (v_1,…, v_n), and the schedule requires that in round i, a message is sent from v_i to v_{i+1}. We would like to use a “budget” of only n additional vertices and an additional (1+ε)m=(1+ε)(n-1) links, assuming the fault model Om(p). One approach is to duplicate the path and extend the routing scheme accordingly. We already used our entire budget apart from εm links! This reinforcement is valid as long as one of the paths succeeds in delivering the message all the way.
The probability that one of the paths “survives” is
1-(1-(1-p)^n)^2 ≤ 1-(1-e^{-pn})^2 ≤ 2e^{-pn},
where we used that 1-x≤ e^-x for any x∈ℝ.
Hence, for any p = ω(1/n), the survival probability is o(1). In contrast, the strong reinforcement with ℓ=2 (i.e., f=1) given in <ref> sustains any p∈ o(1/√(n)) with probability 1-o(1); however, while it adds n nodes only, it requires 3m additional edges.
We need to add some additional edges to avoid that the likelihood of the message reaching its destination drops too quickly. To this end, we use the remaining ε m edges to “cross” between the two paths every h≜ 2/ε hops (assume h is an integer), cf. Figure <ref>.
This splits the path into segments of h nodes each. As long as, for each such segment, in one of its copies all nodes survive, the message is delivered. For a given segment, this occurs with probability 1-(1-(1-p)^h)^2 ≥ 1-(ph)^2. Overall, the message is thus delivered with probability at least (1-(ph)^2)^{n/h} ≥ 1-nhp^2.
As for any constant ε, h is a constant, this means that the message is delivered a.a.s. granted that p∈ o(1/√(n))!
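The following short computation (an illustrative sketch with example parameters) reproduces this segment-wise analysis numerically:

def survival_probability(n, p, h):
    """Probability that the doubled path with crossings every h hops
    delivers the message: each length-h segment must be fault-free in
    at least one of its two copies, independently across segments."""
    segment_ok = 1.0 - (1.0 - (1.0 - p) ** h) ** 2
    return segment_ok ** (n // h)

n, h = 10_000, 10   # h = 2/eps with eps = 1/5
for p in (1e-3, 1e-2):
    print(p, survival_probability(n, p, h))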
The reader is cautioned to not conclude from this example that random sampling of edges will be sufficient for our purposes in more involved graphs. Since we want to handle arbitrary routing schemes, we have no control over the number of utilized routing paths. As the latter is exponential in n, the probability that a fixed path is not “broken” by F would have to be exponentially small in n. Moreover, trying to leverage Lovász Local Lemma for a deterministic result runs into the problem that there is no (reasonable) bound on the number of routing paths that pass through a single node, i.e., the relevant random variables (i.e., whether a path “survives”) exhibit lots of dependencies.
§.§ Partitioning the Graph
To apply the above strategy to other graphs, we must take into account that there can be multiple intertwined routing paths. However, the key point in the above example was not that we had path segments, but rather that we partitioned the nodes into constant-size regions and added few edges inside these regions, while fully connecting the copies of nodes at the boundary of the regions.
In general, it is not possible to partition the nodes into constant-sized subsets such that only a very small fraction of the edges connects different subsets; any graph with good expansion is a counter-example. Fortunately, many network topologies used in practice are good candidates for our approach. In the following, we will discuss grid networks and minor free graphs, and show how to apply the above strategy in each of these families of graphs.
Grid Networks
We can generalize the above strategy to hypercubes of dimension d>1.
A q-ary d-dimensional hypercube has node set [q]^d and two nodes are adjacent if they agree on all but one index i∈ [d], for which |v_i-w_i|=1.
For any h,d∈ℕ, assume that h divides q∈ℕ and set ε=1/h. Then the q-ary d-dimensional hypercube can be partitioned into (q/h)^d regions of h^d nodes such that at most an ε-fraction of the edges connects nodes from different regions.
We subdivide the node set into h-ary d-dimensional subcubes; for an example of the subdivision of the node set of a 6-ary 2-dimensional hypercube into 2-ary 2-dimensional subcubes see Figure <ref>. There are (q/h)^d such subcubes. The edges crossing the regions are those connecting the faces of adjacent subcubes. To each subcube, we attribute one face per dimension (the opposite face being accounted for by the adjacent subcube in that direction). Thus, we have at most dh^{d-1} crossing edges per subcube. The total number of edges per subcube is these crossing edges plus the d(h-1)h^{d-1} edges within the subcube. Overall, the fraction of cross edges is thus at most 1/(1+(h-1))=1/h, as claimed.
Note that the above result and proof extend to tori, which also include the “wrap-around” edges connecting the first and last nodes in any given dimension.
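The counting argument can be verified directly; the sketch below (illustrative, brute-force, and only sensible for small q and d) enumerates the edges of a q-ary d-dimensional hypercube and measures the fraction crossing between h-ary subcubes:

from itertools import product

def cross_edge_fraction(q, d, h):
    """Fraction of hypercube edges crossing between h-ary subcube regions."""
    assert q % h == 0

    def region(v):
        return tuple(x // h for x in v)

    total = cross = 0
    for v in product(range(q), repeat=d):
        for i in range(d):
            if v[i] + 1 < q:  # edge to the next node in dimension i
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                total += 1
                cross += region(v) != region(w)
    return cross / total

print(cross_edge_fraction(q=6, d=2, h=2))  # 0.4 <= 1/h = 0.5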
Minor-free Graphs
Another general class of graphs that can be partitioned in a similar fashion are minor-free bounded-degree graphs.
For a fixed graph H, H is a minor of G if H is isomorphic to a graph that can be obtained by zero or more
edge contractions on a subgraph of G. We say that a graph G is H-minor free if H is not a minor of G.
For any such graph, we can apply a corollary from <cit.>, which is based on <cit.>, to construct a suitable partition.
Let H be a fixed graph. There is a constant c(H) > 1 such that for every ε∈ (0, 1] and
every H-minor free graph G = (V, E) with degree bounded by Δ a partition R_1,…,R_k⊆ V with the following properties can be found in time O(|V|^3/2):
* ∀ i : |R_i|≤ c(H)Δ^2/ε^2,
* ∀ i the subgraph induced by R_i in G is connected.
* |{(u,v) | u ∈ R_i, v ∈ R_j, i≠ j}|≤ ε· |V|.
Grids and tori of dimension d>2 are not H-minor free for any fixed graph H.
We note that this construction is not satisfactory, as it involves large constants. It demonstrates that a large class of graphs is amenable to the suggested approach, but it is advisable to search for optimized constructions for more specialized graph families before applying the scheme.
§.§ Reinforcement
Equipped with a suitable partition of the original graph G=(V,E) into disjoint regions R_1,…,R_k⊆ V, we reinforce as follows.
As before, we set V'≜ V× [ℓ], denote v_i≜ (v,i), define P(v_i)≜ v, and set ℓ≜ f+1. However, the edge set of G' differs. For e=(v,w)∈ E,
E_e'≜{(v_i,w_i) | i∈ [ℓ]} if both endpoints of e lie in the same region, and
E_e'≜{(v_i,w_j) | i,j∈ [ℓ]} if they lie in different regions,
and we set E'≜⋃_e∈ E E_e'.
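A minimal sketch of this sparser edge construction, assuming the partition is given as a map from nodes to region indices (our illustration, not a reference implementation):

def reinforce_with_partition(edges, region_of, f):
    """Edge set of the sparser reinforcement, with ell = f + 1.

    edges: directed edges (v, w) of the original graph.
    region_of: dict mapping each node to the index of its region.
    Edges inside a region get ell parallel copies; edges between
    regions get all ell^2 copies.
    """
    ell = f + 1
    new_edges = []
    for v, w in edges:
        if region_of[v] == region_of[w]:
            new_edges += [((v, i), (w, i)) for i in range(1, ell + 1)]
        else:
            new_edges += [((v, i), (w, j))
                          for i in range(1, ell + 1)
                          for j in range(1, ell + 1)]
    return new_edges

# Path a-b-c-d split into regions {a,b} and {c,d}; f = 1, ell = 2.
E = [("a", "b"), ("b", "c"), ("c", "d")]
R = {"a": 0, "b": 0, "c": 1, "d": 1}
print(len(reinforce_with_partition(E, R, f=1)))  # 2 + 4 + 2 = 8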
Simulation under Om(p)
Consider v∈ V. We want to maintain the invariant that in each round, some v_i has a copy of the state of v in A. To this end, v'∈ V'
[(1)]
* initializes local copies of all state variables of v as in A and sets ok_{v'} = true;
* sends on each link (v',w')∈ E' in each round
* message M, if P(v') would send M via (P(v'),P(w')) when executing A and ok_{v'} = true,
* a special symbol ⊥ if ok_{v'} = true, but v would not send a message via (P(v'),P(w')) according to A, or
* no message if ok_{v'} = false;
* if, in a given round, ok_{v'} = true and v' receives for each neighbor w of P(v') a message from some w_j∈ V', it updates the local copy of the state of v in A as if P(v') received this message (interpreting ⊥ as no message); and
* if this is not the case, v' sets ok_{v'} = false.
We claim that as long as ok_{v'} = true at v', v' has indeed a copy of the state of P(v') in the corresponding execution of A; therefore, it can send the right messages and update its state variables correctly.
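The per-round behavior of a copy can be summarized by the following sketch (our illustration; the flag and message encoding are assumptions matching the description above):

def round_step(ok, inbox, in_neighbors):
    """One round at a copy v' in the Om(p) simulation.

    ok: current value of the ok flag of v'.
    inbox: dict mapping each original in-neighbor w of P(v') to the list
        of messages received this round from copies of w (possibly empty).
    in_neighbors: the in-neighbors of P(v') in the original graph G.
    Returns (new_ok, delivered), where delivered maps each in-neighbor to
    the message fed to the simulated algorithm A ("BOT" encodes the
    special symbol meaning that A sends no message).
    """
    if not ok:
        return False, None           # v' has given up and stays silent
    delivered = {}
    for w in in_neighbors:
        msgs = inbox.get(w, [])
        if not msgs:                  # no copy of w reached v': give up
            return False, None
        delivered[w] = msgs[0]        # all received copies agree under Om(p)
    return True, delivered

ok, msgs = round_step(True, {"w": ["M"], "u": ["BOT"]}, ["w", "u"])
print(ok, msgs)  # True {'w': 'M', 'u': 'BOT'}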
Suppose that for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], some i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. As P(C)=V, it suffices to show that each v'∈ C successfully maintains a copy of the state of P(v') under A. However, we also need to make sure that all messages, not only the ones sent by nodes in C, are “correct,” in the sense that a message sent over edge (v',w')∈ E' in round r would be sent by A over (P(v'),P(w')) (where ⊥ means no message is sent). Therefore, we will argue that the set of nodes T_r≜{v'∈ V' | ok_{v'} = true in round r} knows the state of their counterpart P(v') under A up to and including round r∈ℕ. As nodes v' with ok_{v'} = false do not send any messages, this invariant guarantees that all sent messages are correct in the above sense.
We now show by induction on the round number r∈ℕ that (i) each v'∈ T_r knows the state of P(v') under A and (ii) C⊆ T_r. Due to initialization, this is correct initially, i.e., in “round 0;” we use this to anchor the induction at r=0, setting T_0≜ V'.
For the step from r to r+1, note that because all v'∈ T_r have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Recall that v'∈ T_{r+1} if and only if v'∈ T_r and for each (w,P(v'))∈ E there is at least one w'∈ V' with P(w')=w from which v' receives a message. Since under Om(p) nodes in F' may only omit sending messages, it follows that v'∈ T_{r+1} correctly updates the state variables of P(v'), just as P(v') would in round r+1 of A.
It remains to show that C⊆ T_r+1. Consider v_i∈ C and (w,v)∈ E. If v,w∈ R_k' for some k'∈ [k], then w_i∈ C by definition of C. Hence, by the induction hypothesis, w_i∈ T_r, and w_i will send the message w would send in round r+1 of A over (w,v)∈ E to v_i, using the edge (w_i,v_i)∈ E'. If this is not the case, then there is some j∈ [ℓ] such that w_j∈ C and we have that (w_j,v_i)∈ E'. Again, v_i will receive the message w would send in round r+1 of A from w_j. We conclude that v_i receives at least one copy of the message from w for each (w,v)∈ E, implying that v∈ T_r+1 as claimed. Thus, the induction step succeeds and the proof is complete.
Figure <ref> provides an example of a comparison between a network, a naive duplication of that network, and its reinforcement. The simulation process of sending a message in the same sample network is shown in Figure <ref>.
Resilience of the Reinforcement
We denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for Om(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree and R∈ O(1), p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[ {v_i | v∈ R_k'}∩ F'=∅ ] = (1-p)^{|R_k'|} ≥ 1-Rp.
Accordingly, the probability that for a given k' the precondition of the lemma is violated is at most (Rp)^f+1. As k≤ n/r, taking a union bound over all k' yields that with probability at least 1-n/r· (Rp)^f+1, A' simulates A. Therefore, the reinforcement is valid if p ∈ o((n/r)^-1/(f+1)/R).
Now assume that r≤ R∈ O(1) and also that p∈ω(n^-1/(f+1))⊆ω((n/r)^-1/(f+1)/R). Thus, for each v∈ V, all v'∈ V' with P(v')=v simultaneously end up in F' with probability ω(1/n). Therefore, if Ω(n) nodes have non-zero outdegree, with a probability in 1-(1-ω(1/n))^Ω(n)=1-o(1) for at least one such node v all its copies end up in F'. In this case, the simulation fails if v sends a message under A, but all copies of v' suffer omission failures in the respective round.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(1+ε)f+ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and multiplying the number of edges by 2.4.
For hypercubes and tori, the asymptotic notation for p does not hide huge constants.
Lemma <ref> shows that h enters the threshold in Theorem <ref> as h^{-d+1/2}.
For the cases of d=2 and d=3, which are the most typical (for d>3 grids and tori suffer from large distortion when embedding them into 3-dimensional space), the threshold on p degrades by factors of 11.2 and 55.9, respectively.
§.§ Simulation under Byz(p)
The same strategy can be applied for the stronger fault model Byz(p), if we switch back to having ℓ=2f+1 copies and nodes accepting the majority message among all messages from copies of a neighbor in the original graph.
Consider node v∈ V. We want to maintain the invariant that in each round, a majority among the nodes v_i, i∈ [ℓ], has a copy of the state of v in A. For v'∈ V' and (w,P(v'))∈ E, set N_v'(w)≜{w'∈ V' | (w',v')∈ E'}. With this notation, v' behaves as follows.
[(1)]
* It initializes local copies of all state variables of v as in A.
* It sends in each round on each link (v',w')∈ E' the message v would send on (P(v'),P(w')) when executing A (if v' cannot compute this correctly, it may send an arbitrary message).
* It updates its state in round r as if it received, for each (w,P(v'))∈ E, the message the majority of nodes in N_v'(w) sent.
Suppose for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], f+1 indices i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. We claim that each v'∈ C successfully maintains a copy of the state of P(v') under A. We show this by induction on the round number r∈ℕ, anchored at r=0 due to initialization.
For the step from r to r+1, observe that because all v'∈ C have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E with P(w')=w. For each v'∈ C and each (w,P(v')), we distinguish two cases. If P(v') and w are in the same region, let i be such that v'=v_i. In this case, N_v'(w)={w_i} and, by definition of C, w_i∈ C. Thus, by the induction hypothesis, w_i sends the correct message in round r+1 over the link (w',v'). On the other hand, if P(v') and w are in different regions, N_v'(w)={w_i | i∈ [ℓ]}. By the definition of C and the induction hypothesis, the majority of these nodes (i.e., at least f+1 of them) sends the correct message w would send over (w,P(v')) in round r+1 when executing A. We conclude that v' correctly updates its state, completing the proof.
Resilience of the Reinforcement
As before, denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for the fault model Byz(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[ {v_i | v∈ R_k'}∩ F'=∅ ] = (1-p)^{|R_k'|} ≥ 1-Rp.
Thus, analogous to the proof of Theorem <ref>, the probability that for a given k' the condition is violated is at most
∑_{j=f+1}^{2f+1}\binom{2f+1}{j}(Rp)^j(1-Rp)^{2f+1-j}
≤ (2e)^f(Rp)^{f+1}(1+o(1)).
By a union bound over the at most n/r regions, we conclude that the precondition p ∈ o((n/r)^-1/(f+1)/R) guarantees that the simulation succeeds a.a.s.
For the second statement, observe that for each node v∈ V of non-zero outdegree,
Pr[ |{v_i | i∈ [ℓ]}∩ F'| ≥ f+1 ] ≥ p^{f+1} = ω(1/n).
Thus, a.a.s. there is such a node v. Let (v,w)∈ E and assume that A sends a message over (v,w) in some round. If v and w are in the same region, the faulty nodes sending an incorrect message will result in a majority of the 2f+1=|{w'∈ V' | P(w')=w}| copies of w attaining an incorrect state (of the simulation), i.e., the simulation fails. Similarly, if w is in a different region than v, for each copy of w the majority message received from N_w'(v) will be incorrect, resulting in an incorrect state.
Note that the probability bounds in Theorem <ref> are essentially tight in case R∈ O(1). A more careful analysis establishes similar results for r∈Θ(R)∩ω(1), by considering w.l.o.g. the case that all regions are connected and analyzing the probability that within a region, there is some path so that for at least f+1 copies of the path in G', some node on the path is faulty. However, as again we consider the case R∈ O(1) to be the most interesting one, we refrain from generalizing the analysis.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(2+2ε)f+4ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes and multiplying the number of edges by 4.2.
§ EMPIRICAL EVALUATION
We have shown that our approach from <ref> works particularly well
for graphs that admit a certain partitioning, such as
sparse graphs (e.g., minor-free graphs) or low-dimensional
hypercubes. To provide some empirical motivation for the relevance
of these examples, we note that the topologies collected
in the Rocketfuel <cit.> and Internet Topology Zoo <cit.> projects
are all sparse: almost a third (namely 32%) of the topologies even belong to the family of
cactus graphs, and roughly half of the graphs (49%) are outerplanar <cit.>.
To complement our analytical results and study the reinforcement cost
of our approach in realistic networks, we conducted simulations on
the around 250 networks from the Internet Topology Zoo.
While we have a fairly good understanding of the different network topologies
deployed in practice, unfortunately, little is known about the state-of-the-art protection mechanisms used by network operators today. Network operators are typically reluctant to share details about their infrastructure for security reasons, rendering a comparative evaluation difficult. That said, it seems relatively safe to assume that the most robust solutions rely on a one-by-one (“A/B”) replication strategy which allows operators to completely reroute traffic to a backup network; this baseline requires doubling resources and can hence be fairly costly.
In the following, we will report on our main insights.
Due to space constraints, we focus on the case of omission faults;
the results for Byzantine faults follow the same general trends.
Recall that we replace each node by f+1 copies, and each edge with endpoints in
different regions of the partition by (f+1)^2 copies; every other edge is replaced by f+1 copies.
Our goal is to do this partitioning such that it minimizes the edge overhead of the new network and
maximizes the probability of the network's resilience.
The fault probability of the network for given p, f and a partition with region sizes l_1, l_2, ..., l_k is calculated as
1 - ∏_{i=1}^{k}[1-(1-(1-p)^{l_i})^{f+1}].
In the following, as a case study, we fix a target network failure probability of at most 0.01.
That is, the reinforced network is guaranteed to operate correctly with a probability of 99%, and we aim to maximize the probability p with which nodes independently fail subject to this constraint.
For this fixed target resilience of the network, we determine the value of p matching it using the above formula.
We remark that the qualitative behavior for smaller probabilities of network failure is the same, where the more stringent requirement means that our scheme outperforms naive approaches for even smaller network sizes.
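Since the network failure probability is monotone in p, the largest sustainable p for a given target can be found by bisection; a sketch of this computation (our illustration; the region sizes are an example):

def network_failure_prob(p, f, sizes):
    """Probability that the reinforced network fails: some region has a
    faulty node in each of its f+1 copies (regions are independent)."""
    ok = 1.0
    for l in sizes:
        ok *= 1.0 - (1.0 - (1.0 - p) ** l) ** (f + 1)
    return 1.0 - ok

def max_sustainable_p(f, sizes, target=0.01, iters=60):
    """Largest p with network failure probability <= target, found by
    bisection (the failure probability is increasing in p)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if network_failure_prob(mid, f, sizes) <= target:
            lo = mid
        else:
            hi = mid
    return lo

# Example: 10 regions of 4 nodes each, f = 1, 99% target reliability.
print(max_sustainable_p(f=1, sizes=[4] * 10))  # about 0.008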
For the examined topologies, it turned out that no specialized tools were needed to find good partitionings.
We considered a Spectral Graph Partitioning tool <cit.> and Metis <cit.>,
a partitioning algorithm available through a Python library.
For small networks (less than 14 nodes), we further implemented a brute-force algorithm,
which provides an optimal baseline.
Figure <ref> shows the resulting edge overheads for the different partitioning algorithms
as a function of p and for f=3, at hand of a specific example.
For reference, we added the value of p for the original graph (f=0) to the plot, which has an overhead factor of 1 (no redundancy).
As to be expected, for each algorithm and the fixed value of f=3, as the number of components in partitionings increases, the edge overhead and p
increase as well.
The “Singleton partition” point for f=3 indicates the extreme case where the size of the components is equal to 1 and the approach becomes identical to strong reinforcement (see <ref>);
hence, it has an edge overhead of (f+1)^2=16.
The leftmost points of the f=3 curves correspond to the other extreme of “partitioning” the nodes into a single set, resulting in naive replication of the original graph, at an edge overhead of f+1=4.
We observed this general behavior for networks of all sizes under varying f, where the spectral partitioning consistently outperformed Metis, and both performed very close to the brute force algorithm on networks to which it was applicable.
We concluded that the spectral partitioning algorithm is sufficient to obtain results that are close to optimal for the considered graphs, most of which have fewer than 100 nodes, with only a handful of examples with size between 100 and 200.
Accordingly, in the following we confine the presentation to the results obtained using the spectral partitioning algorithm.
In Figure <ref>, we take a closer look on how the edge overhead
depends on f, at hand of a network of 33 nodes. Note that the partitionings do not depend on f, causing the 10 curves to have similar shape.
As f increases, the node overhead, edge overhead, and p for the reinforced networks increase.
We can see that it is advisable to use larger values of f only if the strong reinforcement approach for smaller f cannot push p to the desired value.
We also see that f=1 is sufficient to drive p up to more than 6%, improving by almost two orders of magnitude over the roughly 0.01/33≈ 0.03% the unmodified network can tolerate with probability 99%.
While increasing f further does increase resilience, the relative gains are much smaller, suggesting that f=1 is the most interesting case.
Following up on this, in Figure <ref> we plot p for all existing networks in the Topology Zoo using the spectral graph partitioning algorithm and f=1.
Specifically, for each network, we calculated the value of p on a set of reinforced networks with different node and edge overheads. Naturally, with increasing network size, the value of p that can be sustained at a given overhead becomes smaller. Note, however, that naive replication quickly loses ground as n becomes larger. In particular, already for about 20 nodes, an edge overhead of 3 with our approach is better than adding two redundant copies of the original network, resulting in more nodes, but the same number of edges. Beyond roughly 50 nodes, our approach outperforms two independent copies of the network using fewer edges, i.e., an edge overhead of 2.5.
To show more clearly when our approach outperforms naive network replication, Figure <ref> plots the relative gain in the probability p of node failure that can be sustained compared to the original network.
This plot is similar to the previous one. The y-axis now represents p divided by the value of p for the original graph. We now see that naive replication provides an almost constant improvement across the board. This is due to the fact that under this simple scheme, the reinforcement fails as soon as in each copy of the graph at least one node fails, as it is possible that a routing path in the original graph involves all nodes corresponding to failed copies.
Denote by p_k the probability of node failure that can be sustained with 99% reliability when simply using k copies of the original graph (in particular p_1≈ 0.01/n). For small k, the probability (1-p_k)^n that a single copy of the original graph is fault-free needs to be close to 1. Hence, we can approximate (1-p_k)^n≈ 1-p_k n. The probability that all copies contain a failing node is hence approximately (p_kn)^k. Thus, p_1 n ≈ 0.01≈ (p_k n)^k, yielding that
p_k/p_1 = p_k n/(p_1 n) ≈ 0.01^{1/k}/0.01 = 100^{1-1/k}.
In particular, we can expect ratios of roughly 10 for k=2 and 21.5 for k=3, respectively. The small discrepancy to the actual numbers is due to the approximation error, which would be smaller for higher target resilience.
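The approximation is easy to check numerically; the sketch below computes p_k exactly by bisection on the failure probability (1-(1-p)^n)^k of k independent copies and compares the resulting ratios with 100^{1-1/k}.

```python
def naive_replication_p(k, n, target=0.01, iters=60):
    # k copies fail only if every copy contains a faulty node.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if (1.0 - (1.0 - mid) ** n) ** k <= target:
            lo = mid
        else:
            hi = mid
    return lo

n = 33
p1 = naive_replication_p(1, n)
for k in (2, 3):
    # Ratios come out slightly above 10 and 21.5, as discussed above.
    print(k, naive_replication_p(k, n) / p1)
```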
As the plot clearly shows, our method achieves a relative improvement that increases with n, as predicted by Theorem <ref>.
In conclusion, we see that our approach promises substantial improvements over the naive replication strategy,
which is commonly employed in mission-critical networks
(e.g., using dual planes as in RFC 7855 <cit.>).
§ DISCUSSION
In the previous sections, we have established that constant-factor redundancy can significantly increase the reliability of the communication network in a blackbox fashion. Our constructions in <ref> are close to optimal. Naturally, one might argue that the costs are still too high. However, apart from pointing out that the costs of using sufficiently reliable components may be even higher, we would like to raise a number of additional points in favor of the approach.
Node Redundancy
When building reliable large-scale systems, fault-tolerance needs to be considered on all system levels. Unless nodes are sufficiently reliable, node replication is mandatory, regardless of the communication network. In other words, the node redundancy required by our construction may not be an actual overhead to begin with. When taking this point of view, the salient question becomes whether the increase in links is acceptable. Here, the first observation is that any system employing node redundancy will need to handle the arising additional communication, incurring the respective burden on the communication network. Apart from still having to handle the additional traffic, however, the system designer now needs to make sure that the network is sufficiently reliable for the node redundancy to matter. Our simple schemes then offer a means to provide the necessary communication infrastructure without risking the introduction of, e.g., a single point of failure during the design of the communication network; at the same time, the design process is simplified and modularized.
Dynamic Faults
Because of the introduced fault-tolerance, faulty components do not impede the system as a whole, so long as the simulation of the routing scheme can still be carried out. Hence, one may repair faulty nodes at runtime. If T is the time for detecting and fixing a fault, we can discretize time in units of T and denote by p_T the (assumed to be independent) probability that a node is faulty in a given time slot, which can be bounded by twice the probability of failing within time T. Then the failure probabilities we computed in our analysis directly translate to an upper bound on the expected fraction of time during which the system is not (fully) operational.
Adaptivity
The employed node- and link-level redundancy may be required for mission-critical applications only, or the system may run into capacity issues. In this case, we can exploit that the reinforced network has a very simple structure, making various adaptive strategies straightforward to implement.
* One might use a subnetwork only, deactivating the remaining nodes and links, such that a reinforced network for smaller f (or a copy of the original network, if f=0) remains. This saves energy.
* One might subdivide the network into several smaller reinforced networks, each of which can perform different tasks.
* One might leverage the redundant links to increase the overall bandwidth between (copies of) nodes, at the expense of reliability.
* The above operations can be applied locally; e.g., in a congested region of the network, the link redundancy could be used for additional bandwidth. Note that if only a small part of the network is congested, the overall system reliability will not deteriorate significantly.
Note that the above strategies can be refined and combined according to the profile of requirements of the system.
§ RELATED WORK
Robust routing is an essential feature of dependable
communication networks, and has been explored
intensively in the literature already.
*Resilient Routing on the Network Layer
In contrast to our approach,
existing resilient routing mechanisms on the network layer
are typically reactive.
They
can be categorized
according to whether they are supported in the
control plane, e.g.,
<cit.>,
or in the data plane, e.g., <cit.>,
see also the recent survey <cit.>.
These mechanisms are usually designed to cope with link failures.
Resilient routing algorithms in the control plane
typically rely on a global recomputation of paths
(either
centralized <cit.>,
distributed <cit.>
or both <cit.>),
or on techniques based on link reversal <cit.>, and can
hence re-establish policies relatively easily;
however, they come at the price of a relatively high restoration time
<cit.>.
Resilient routing algorithms in the dataplane can react to failures
significantly faster <cit.>; however,
due to the local nature of the failover, it is challenging to
maintain network policies or even a high degree of resilience <cit.>.
In this line of literature,
the network is usually given and the goal is to re-establish
routing paths quickly, ideally as long as the underlying physical
network is connected (known as perfect resilience <cit.>).
In contrast, in this paper we ask the question of how to proactively enhance the
network in order to tolerate failures, rather than reacting to them. In particular, we consider more general failures,
beyond link failures and benign faults.
We argue that such a reinforced
network simplifies routing, as it is not necessary to compute new paths.
The resulting problems are very different in nature, also in terms
of the required algorithmic techniques.
*Local Faults
In this paper, we consider more general failure models
than typically studied in the resilient routing literature above,
as our model is essentially a local fault model.
Byzantine faults were studied in <cit.> in the context of broadcast and consensus problems. Unlike its global classical counterpart, the f-local Byzantine adversary can control at most f neighbors of each vertex. This more restricted adversary gives rise to more scalable solutions, as the problems can be solved in networks of degree O(f); without this restriction, degrees need to be proportional to the total number of faults in the network.
We also limit our adversary in its selection of Byzantine nodes, by requiring that the faulty nodes are chosen independently at random. As illustrated, e.g., by Lemma <ref> and Theorem <ref>, there is a close connection between the two settings. Informally, we show that certain values of p correspond, asymptotically almost surely (a.a.s.), to an f-local Byzantine adversary. However, we diverge from the approach in <cit.> in that we require a fully time-preserving simulation of a fault-free routing schedule, as opposed to solving the routing task in the reinforced network from scratch.
*Fault-Tolerant Logical Network Structures
Our work is reminiscent of literature on
the design of fault-tolerant network structures.
In this area (see <cit.> for a survey), the goal is to compute a sub-network that has a predefined property, e.g., containing a minimum spanning tree. More specifically, the sub-network should sustain adversarial omission faults without losing the property. Hence, the sub-network is usually augmented (with edges) from the input network in comparison to its corresponding non-fault-tolerant counterpart. Naturally, an additional goal is to compute a small such sub-network. In contrast, we design a network that is reinforced (or augmented) by additional edges and nodes so that a given routing scheme can be simulated while facing randomized Byzantine faults. As we ask for being able to “reproduce” an arbitrary routing scheme (in the sense of a simulation relation), we cannot rely on a sub-network.
The literature also considered random fault models.
In the network reliability problem, the goal is to compute the probability that the (connected) input network becomes disconnected under random independent edge failures. The reliability of a network is the probability that the network remains connected after this random process.
Karger <cit.> gave a fully polynomial randomized approximation scheme for the network reliability problem.
Chechik et al. <cit.> studied a variant of the task, in which the goal is to compute a sparse sub-network that approximates the reliability of the input network.
We, on the other hand, construct a reinforced network that increases the reliability of the input network;
note also that our requirements are much stricter than merely preserving connectivity.
*Self-healing systems
In the context of self-healing routing (e.g., Castañeda et al. <cit.>), researchers have studied a model where an adversary removes nodes in an online fashion, one node in each time step (at most n such steps). In turn, the distributed algorithm adds links and sends at most O(Δ) additional messages to overcome the inflicted omission fault.
Ideally, the algorithm is “compact”: each node's storage is limited to o(n) bits.
A nice property of the algorithm in <cit.> is that the degrees are increased by at most 3. For our purposes, an issue is that the diameter is increased by a logarithmic factor of the maximum initial degree, and hence the same holds for the latency of the routing scheme. Instead, we design a network that is “oblivious” to faults in the sense that the network is “ready” for independent random faults up to a certain probability, without the need to reroute messages or any other reconfiguration. Moreover, our reinforcements tolerate Byzantine faults and work for arbitrary routing schemes. We remark that compact self-healing routing schemes also deal with the update time of the local data structures following the deletion of a node; no such update is required in our approach.
*Robust Peer-to-Peer Systems
Peer-to-peer systems are often particularly dynamic and the development
of robust algorithms hence crucial.
Kuhn et al. <cit.> study faults in peer-to-peer systems in which an adversary adds and removes nodes from the network within a short period of time (this process is also called churn). In this setting, the goal is to maintain functionality of the network in spite of this adversarial process. Kuhn et al. <cit.> considered hypercube and pancake topologies, with a powerful adversary that cannot be “fooled” by randomness. However, it is limited to at most O(Δ) nodes, where Δ is the (maximum) node degree, which it can add or remove within any constant amount of time. The main idea in <cit.> is to maintain a balanced partition of the nodes, where each part plays the role of a supernode in the network topology. This is done by rebalancing the nodes after several adversarial acts, and increasing the dimensionality of the hypercube in case the parts become too big.
Hypercubes were also of particular interest in this paper. We employ two partitioning techniques to make sure that: (1) the size of each part is constant and (2) the number of links in the cut between the parts is at most a constant times n, where n is the number of nodes. These partitioning techniques help us dial down the overheads within each part, and avoid a failure of each part due to its small size. However, we note that our motivation for considering these topologies is that they are used as communication topologies, for which we can provide good reinforcements, rather than choosing them to exploit their structure for constructing efficient and/or reliable routing schemes (which is of course one, but not the only reason for them being used in practice).
§ CONCLUSION
In this paper, we proposed simple replication strategies for improving network reliability. Despite being simple and general, both in terms of their application and analysis, our strategies can substantially reduce the required reliability on the component level to maintain network functionality compared to the baseline, without losing messages or increasing latencies.
The presented transformations allow us to directly reuse non-fault-tolerant routing schemes as a blackbox,
and hence avoid the need to refactor working solutions.
We consider this property highly useful in general and essential in real-time systems.
Hence, being prepared for non-benign faults can be simple, affordable, and practical, and therefore enables building larger reliable networks. Interestingly, while our basic schemes may hardly be surprising, we are not aware of any work systematically exploring and analyzing this perspective.
We understand our work as a first step and believe that it opens
several interesting avenues for future research.
For example:
* Which network topologies allow for good partitions as utilized in <ref>? Small constants here result in highly efficient reinforcement schemes, which are key to practical solutions.
* Is it possible to guarantee strong simulations at smaller overheads?
* Can constructions akin to the one given in <ref> be applied to a larger class of graphs?
On the practical side, while
our simulations indicate that our approach
can be significantly more efficient than a naive one-by-one replication strategy
to provision
dependable ISP networks,
it will be interesting to extend these empirical studies and also consider
practical aspects such as the incremental deployment
in specific networks.
Acknowledgments.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 716562) and from the Vienna Science and Technology Fund (WWTF), under grant number ICT19-045 (project WHATIF).
This research was supported by the Israel Science Foundation under Grant 867/19.
Christoph Lenzen received a diploma degree in mathematics from the University of Bonn in 2007 and a Ph.D. degree from ETH Zurich in 2011. After postdoc positions at the Hebrew University of Jerusalem,
the Weizmann Institute of Science, and MIT, he became group leader at MPI for Informatics in 2014.
In 2021 he became faculty member at CISPA.
He received the best paper award at PODC 2009, the ETH medal for his dissertation, and in 2017 an ERC starting grant.
Moti Medina is a faculty member at the Engineering Faculty at Bar-Ilan University since 2021. Previously, he was a faculty member at the Ben-Gurion University of the Negev and a post-doc researcher at MPI for Informatics and in the Algorithms and Complexity group at LIAFA (Paris 7). He completed his Ph.D., M.Sc., and B.Sc. studies at the School of Electrical Engineering at Tel-Aviv University, in 2014, 2009, and 2007, respectively. Moti is also a co-author of a textbook on logic design, “Digital Logic Design: A Rigorous Approach”, Cambridge Univ. Press, Oct. 2012.
Mehrdad Saberi
is an undergraduate student in Computer Engineering at Sharif University of Technology, Tehran, Iran. He achieved a silver medal in International Olympiad in Informatics (2018, Japan) during high school and is currently interested in studying and doing research in Theoretical Computer Science.
Stefan Schmid
is a Professor at TU Berlin, Germany.
He received his MSc (2004) and PhD
(2008) from ETH Zurich, Switzerland. Subsequently, Stefan Schmid
worked as postdoc at TU Munich and the University of Paderborn (2009).
From 2009 to 2015, he was a senior research scientist at the Telekom Innovations Laboratories (T-Labs) in Berlin, Germany, from 2015 to 2018 an Associate
Professor at Aalborg University, Denmark, and from 2018 to 2021 a Professor
at the University of Vienna, Austria.
His research interests revolve around algorithmic problems of networked and distributed systems,
currently with a focus on self-adjusting networks
(related to his ERC project AdjustNet) and resilient networks (related to his WWTF project
WhatIf).
|
http://arxiv.org/abs/2307.05841v1 | 20230711232726 | Influential Simplices Mining via Simplicial Convolutional Network | [
"Yujie Zeng",
"Yiming Huang",
"Qiang Wu",
"Linyuan Lü"
] | cs.SI | [
"cs.SI",
"cs.AI",
"physics.soc-ph"
] |
Influential Simplices Mining via Simplicial Convolutional Network
Yujie Zeng,
Yiming Huang,
Qiang Wu,
Linyuan Lü,
Y. Zeng, Y. Huang, Q. Wu, and L. Lü are with the Institute of Fundamental and Frontier Studies, University of Electronic Science and Technology of China, Chengdu, PR China. E-mail: {yujie_zeng, yiming_huang, qiang.wu, linyuan.lv}@uestc.edu.cn
Y. Zeng, Y. Huang, and L. Lü are with the Yangtze Delta Region Institute (Huzhou), University of Electronic Science and Technology of China, Huzhou, PR China.
L. Lü is with the School of Cyber Science and Technology, University of Science and Technology of China, Hefei, PR China.
Y. Zeng and Y. Huang contributed equally to this work. Corresponding author: Linyuan Lü.
July 11, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Simplicial complexes have recently been in the limelight of higher-order network analysis, where a minority of simplices play crucial roles in structures and functions due to network heterogeneity.
We find a significant inconsistency between identifying influential nodes and simplices.
Therefore, it remains elusive how to characterize simplices’ influence and identify influential simplices, despite the relative maturity of research on influential nodes (0-simplices) identification.
Meanwhile, graph neural networks (GNNs) are potent tools that can exploit network topology and node features simultaneously, but they struggle to tackle higher-order tasks.
In this paper, we propose a higher-order graph learning model, named influential simplices mining neural network (ISMnet), to identify vital h-simplices in simplicial complexes.
It can tackle higher-order tasks by leveraging novel higher-order representations: hierarchical bipartite graphs and higher-order hierarchical (HoH) Laplacians, where targeted simplices are grouped into a hub set and can interact with other simplices.
Furthermore, ISMnet employs learnable graph convolutional operators in each HoH Laplacian domain to capture interactions among simplices, and it can identify influential simplices of arbitrary order by changing the hub set.
Empirical results demonstrate that ISMnet significantly outperforms existing methods in ranking 0-simplices (nodes) and 2-simplices.
In general, this novel framework excels in identifying influential simplices and promises to serve as a potent tool in higher-order network analysis.
Influential Simplices Mining,
Influence Maximization,
Higher-order Network,
Simplicial Complexes,
Graph Convolutional Networks
§ INTRODUCTION
GRAPHS have been widely studied in network science for providing a flexible, intuitive, and powerful way to represent and analyze complex systems, leading to fruitful applications across diverse domains, including sociology, biology, and technology.
Nevertheless, pairwise graph structures inherently fail to model higher-order interactions that are prevalent in empirical networks <cit.>. Moreover, as shown in many recent studies, the functionality of many networked systems is influenced or determined by higher-order interactions, which transpire not merely between pairs of nodes but also involve larger assemblies of nodes simultaneously <cit.>.
Examples of such higher-order group dynamics include recommendations from multiple friends in social networks <cit.>, group hunting behaviors in ecological competitive networks <cit.>, and coordinated neuronal activity during information transmission in brain networks <cit.>.
Consequently, the study of higher-order group dynamics in complex systems has gained increasing attention in recent years <cit.>.
Simplicial complexes (SCs), which are derived from algebraic topology, serve as a potent tool for studying the higher-order interactions <cit.>.
Identifying vital simplices is conducive to influencing and controlling the dynamic behavior of complex systems.
Specifically, infection of vital simplices can lead to more individuals being infected in epidemic contagion <cit.>, while removal of vital simplices can cause the entire network to disintegrate more quickly in network dismantling <cit.>.
Generally, by discovering vital simplices, researchers can identify the central structures that are critical for the overall network's functioning, robustness, and stability.
However, it remains elusive how to characterize simplices influence and identify vital simplices of order h (termed h-simplices), despite the relative maturity of research on vital nodes (0-simplices) identification.
The generalized degree <cit.>, a direct extension of the degree in pairwise networks, is used to measure the importance of simplices.
In addition to this, few metrics can be used to directly characterize the influence of simplices.
An indirect and trivial way to quantify simplices' influence involves using average node-level metrics, such as defining the degree of a triangle as the average of its three nodes' degrees.
Specifically, there are generally three categories of methods mainly employed to identify vital nodes: neighborhood-based centralities, path-based centralities, and iterative refinement centralities <cit.>.
Notably, we find significant differences between identifying vital simplices and individual nodes.
The influence exerted by a collective entity can at times surpass the cumulative impact of its individual constituents, while at other times it can be comparatively weaker.
This phenomenon is exemplified by familiar adages like "Two heads are better than one" and "Many hands make light work," which highlight the potential synergy and enhanced performance achieved through collective efforts. Conversely, the proverbial warning "Too many cooks spoil the broth" draws attention to situations in which collective action may be weaker or less effective compared to individual actions.
This phenomenon can be observed in various empirical contexts. For example, specific drug combinations can achieve optimal therapeutic effects; the cooperation of renowned actors can produce unsuccessful movies, while lesser-known actors' co-appearances might create popular films; different basketball player combinations have a significant impact on net points scored; a unanimous consent agreement may not be reached even if the key members are persuaded.
It is worth noting that the notion of influential simplices or nodes can vary depending on the specific context, such as the need to identify simplices that can best protect the population in an epidemic or those whose damage would result in the most extensive cascading failures <cit.>.
Thus, finding a universal index that accurately quantifies the importance of simplices across all scenarios is not feasible, and there is a need for more flexible and efficient approaches that can adapt to diverse contexts based on the given objectives.
Graph Neural Networks (GNNs) offer a promising solution by providing adaptability and the ability to learn task-specific embeddings.
GNNs have received fruitful achievements in recent years across various graph learning tasks, including vital node identification and influence maximization <cit.>.
The strength of GNNs stems from their ability to capture both node features and graph topology information simultaneously.
Zhao et al. <cit.> initially propose the infGCN model, which addresses influence maximization problems as classification tasks and employs GCN <cit.> to resolve this task.
Kumar et al. <cit.> introduce SGNN for vital node identification tasks by incorporating node features discovered by struc2vec into GNNs.
GNNs can tailor the learning of different embeddings according to the specific objectives at hand. This adaptability enables GNNs to effectively address diverse contexts.
In general, GNNs' ability to capture both node features and graph topology information, coupled with their flexibility in learning task-specific embeddings, make them valuable tools for identifying influential simplices or nodes in various scenarios.
Nonetheless, the intrinsic limitations of GNNs hinder their applicability to vital simplices identification problems.
Specifically, traditional GNNs only consider pairwise node attributes, neglecting the higher-order interactions within the network, which renders them inadequate for learning and modeling the complex mechanisms underlying the network.
Moreover, GNNs are incapable of directly learning embeddings for simplices, and acquiring such embeddings through readout operations often results in information loss.
Additionally, GNNs need K message-passing operations for a node to receive information from nodes located K hops away due to the coupling between input and computational graphs <cit.>. Apart from the ensuing computational cost, deep GNNs often exhibit issues such as over-smoothing and over-squeezing of node representations.
Consequently, GNNs are prone to learn the local properties while missing long-range information.
In this paper, we present novel higher-order representations: hierarchical bipartite graphs and higher-order hierarchical (HoH) Laplacians, where targeted simplices are grouped into a hub layer and can interact with other simplices.
Subsequently, we propose a higher-order convolutional network, named ISMnet, to identify vital h-simplices, which incorporates real simplex influence scores derived from samples or propagation simulations.
ISMnet employs learnable graph convolutional operators within each HoH Laplacian domain to capture higher-order interactions among simplices.
By modifying the hub layer, ISMnet can identify influential simplices of varying orders, enabling adaptability to different objectives and direct learning of task-specific embeddings for hub simplices.
Empirical results demonstrate that ISMnet significantly outperforms existing methods in ranking both 0-simplices (nodes) and 2-simplices, highlighting its effectiveness in identifying influential simplices in SCs.
Our main contributions are four-fold as follows:
* An inconsistency is detected between mining influential nodes and mining influential simplices, highlighting the need for specialized methods for the latter task.
* The task of identifying influential simplices in the simplicial complex is formulated as a graph representation learning problem for the first time.
* We introduce an influential simplices mining neural network (ISMnet) model, which can adaptively adapt to different objectives and can capture both long-range interactions and higher-order effects.
* Extensive experiments on synthetic and empirical networks reveal the proposed method's commendable performance in influential simplices mining issues.
In general, this novel framework excels in identifying influential simplices and promises to serve as a potent tool in higher-order network analysis.
§ RELATED WORK
§.§ Influential Simplices Mining via Classical Methods
Influential nodes, also referred to as vital nodes, are special nodes within a network that exert a greater influence on the structure and function compared to other nodes.
They play a crucial role in various network tasks such as information spreading <cit.>, synchronization, and control <cit.>.
Notably, a small fraction of vital nodes can influence a large number of nodes within the entire network <cit.>.
To identify these vital nodes, different methods have been developed, including structural centralities, iterative refinement centralities, and deep-learning-based approaches.
Structural centrality measures the importance of nodes in terms of a particular topology in a network.
Degree centrality, for instance, calculates the number of neighbors of node v_i, which is widely used due to its straightforward interpretation and computational efficiency.
Neighbor degree, an extension of degree centrality, quantifies the average degree of a node's neighbors.
Other metrics such as H-index <cit.> and coreness centrality <cit.> also consider the degree of a node's neighbors when assessing its importance. Closeness centrality <cit.> and betweenness centrality <cit.> are computed by evaluating the shortest paths between nodes.
Iterative refinement centralities take into account both the topological structure of nodes and the influence of their neighbors.
Eigenvector centrality <cit.> assigns centrality scores to nodes based on the sum of the centralities of their connected nodes.
PageRank <cit.>, a well-known variant of eigenvector centrality, measures the importance of web pages by considering the quantity and quality of the pages that link to them.
A brief introduction to deep-learning-based methods is deferred to the next section.
Additionally, some algorithms cannot be classified into the above categories; examples include entanglement models <cit.> and the random walk-based gravity model <cit.>.
A more detailed overview can be found in the following reference <cit.>.
In Table <ref>, we list some widely used centrality metrics along with their formulas and explanations, and some of them serve as baselines for comparison with our model.
However, it is worth noting that these methods solely focus on assessing the influence of individual nodes in isolation and typically overlook the group interactions between node sets.
Furthermore, we find that influential nodes and influential simplices do not exhibit a one-to-one correspondence, highlighting the need for specialized algorithms for influential simplices mining tasks.
Currently, there are few metrics specifically designed for identifying influential simplices.
One such metric is the generalized degree k_d,m(α) <cit.>, a direct extension of the degree concept employed in pairwise networks, which measures the number of d-dimensional simplices incident to the m-simplex α.
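For illustration, a direct computation of the generalized degree from a list of d-simplices could look as follows; representing simplices as frozensets and reading “incident” as containment for d ≥ m are our assumptions here.

```python
def generalized_degree(d_simplices, alpha):
    # k_{d,m}(alpha): number of d-simplices incident to the m-simplex
    # alpha, counted here as those containing alpha (for d >= m).
    a = frozenset(alpha)
    return sum(1 for s in d_simplices if a <= frozenset(s))
```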
§.§ Graph Neural Networks and Vital Node Mining
Deep learning methods are used in many tasks due to their superior expressiveness.
Among them, graph neural networks (GNNs) can exploit node features and the graph topology simultaneously, thereby triggering widespread research interest and effort in various graph learning tasks such as vital node mining <cit.>.
Spatial-based and spectral-based GNNs are the two primary categories of GNNs.
Spectral GNNs extend the application of convolutional neural network (CNN) to the graph domain, which is based on the graph Fourier transform <cit.> and employs the graph Laplacian eigenbasis as an analogy to the Fourier transform.
SCNN <cit.> replaces the convolution kernel with a learnable diagonal matrix in the spectral domain.
ChebNet <cit.> replaces the convolution kernel in the spectral domain with Chebyshev polynomials.
GCN <cit.> further simplifies ChebNet by considering only the first-order Chebyshev inequality and only one parameter per convolution kernel.
Recently, GNNs have achieved some remarkable success in vital node mining.
Zhao et al. <cit.> transform the influential nodes identification task into a classification problem.
They are the first to solve vital node mining using GNNs, proposing the InfGCN algorithm, in which the BFS algorithm is leveraged to sample the neighbor network of each node. The degree centralities, betweenness centralities, and clustering coefficients of the nodes are used to construct the input of the GCN. Then, the output of the GCN is used as the input of a fully connected neural network to predict the label of each node.
Besides, Yu et al. <cit.> transform the node identification problem into a regression problem inspired by GCN. Specifically, it calculates the adjacency matrix of the ℓ-hop subgraph as the original feature and trains them by CNN with the label, which is the ranking calculated by the SIR epidemic spreading range. The RCNN algorithm learns structural information on a training network and uses the optimal model to test other networks. This algorithm has strong universality.
Kumar et al. <cit.> also consider the influence maximization question as a regression question.
They propose SGNN for finding the most influential nodes of the whole network by using struc2vec.
The whole algorithm learns structural information on a training network by GNN and uses the optimal model to test other networks.
ILGR <cit.> is designed for the fast identification of critical nodes and links in large complex networks, which employs the GraphSAGE model and attention mechanism to learn node embeddings.
Besides, there are also some deep-learning methods in vital node mining.
Ou et al. <cit.> propose the M-RCNN algorithm by using the multi-level structural attributes based on the RCNN.
Same as RCNN, it gets node labels by the SIR epidemic spreading model.
The algorithm uses micro, community, and macro information as origin features, and uses CNN to train traditional BA networks to get the optimal model. Then the optimal training model is used to test other networks.
However, these methods neglect the multi-order information of the graph and only take neighbors' features into consideration.
M-RCNN calculates centrality metrics for different orders of information but overlooks the message passing mediated by higher-order structures.
Moreover, methods such as RCNN and M-RCNN are trained on synthetic graphs. However, there may be considerable topological differences between synthetic and empirical networks, so it is more appropriate to use a subset of nodes to train and validate the model.
§ PRELIMINARIES
§.§ Problem Formulation
Graphs (or Networks) provide a powerful framework for elucidating complex systems. In mathematics, they can be presented as a tuple 𝒢=(𝒱, ℰ, 𝒮), where 𝒱={v_1,v_2,⋯,v_n} denotes the node-set,
ℰ⊂𝒱×𝒱 includes the edges between nodes, and 𝒮={s_1,s_2, ⋯, s_n} denotes the influence scores for nodes.
Due to the high cost of accurately surveying the influence of each user, we can use a sampling method to survey the influence of a small number of users.
That is, we can obtain an observation network 𝒢'=(𝒱, ℰ, 𝒮'), where 𝒮' ⊆𝒮.
In extensive empirical scenarios, relative rankings between different players tend to be of more concern than raw influence scores.
In mathematics, a ranking matrix ℛ can be constructed according to the influence scores such that
ℛ_i,j = 1 if s_i > s_j, ℛ_i,j = -1 if s_i < s_j, and ℛ_i,j = 0 otherwise.
Taking 𝒢' as the input, the object of identifying influential nodes is to predict the unknown influence scores 𝒮-𝒮' and predict the ranking matrix ℛ as accurately as possible.
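For concreteness, the ranking matrix can be instantiated in a few lines from a list of scores; this sketch simply mirrors the definition above.

```python
import numpy as np

def ranking_matrix(scores):
    # R[i, j] = 1 if s_i > s_j, -1 if s_i < s_j, and 0 otherwise.
    s = np.asarray(scores, dtype=float)
    return np.sign(s[:, None] - s[None, :]).astype(int)
```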
Simplicial Complexes (SCs) serve as robust mathematical constructs employed to represent topological spaces <cit.>, facilitating the analysis of their inherent properties and structures through algebraic topology.
An h-simplex is constituted by h+1 fully interconnected nodes, encompassing entities such as nodes (0-simplex), edges (1-simplex), “full” triangles (2-simplex), and so forth.
Each simplex exists as an entity that maintains functional integrity in SCs, rendering the identification of vital individual nodes insufficient to affect the function and state of the SCs <cit.>.
Therefore, this study focuses on the simplex ranking within simplicial complexes.
Influential simplices mining is devoted to ranking the set of h-simplices 𝒦_h based on topological information and a sample of simplex influence scores 𝒮_h'.
Specifically, an h-simplex ranking matrix ℛ_h can be constructed according to simplex influence scores, with the objective of predicting the ranking matrix ℛ_h as accurately as possible.
In the task of identifying influential h-simplices, the subscript of ℛ_h is omitted to avoid overcrowded notation, provided that it does not engender any ambiguity.
The merit of formulating the vital simplex identification task as a ranking issue is that, upon acquiring a relative influence ranking, the top N simplices can be directly selected based on demand.
§.§ Simplicial Complexes
We now proceed to introduce simplicial complexes (SCs) formally, followed by some potent tools used in SCs.
Mathematically, a simplicial complex 𝒦 is a finite collection of node subsets σ=[v_0,⋯,v_h] that is closed under taking subsets; such a node subset σ is referred to as an h-simplex, with cardinality h+1.
Note that only full triangles belong to 2-simplices; three interconnected nodes simply form a triangle if there are no group interactions between them.
For convenience, we utilize the notation 𝒦_h to denote the collection of h-simplices within 𝒦, for instance, 𝒦_0=𝒱 and 𝒦_1=ℰ, and the number of h-simplices is denoted |𝒦_h|=n_h.
Simplicial complexes offer an invaluable framework for characterizing interactions encompassing more than two nodes, transcending the limitations imposed by pairwise structures.
The Hasse diagram is one of the most common mathematical representations of simplicial complexes, where each node in the p-layer represents a p-simplex.
In the Hasse diagram, the connection relationship is defined by the boundary incidence relation, and there exists an edge connecting two vertices σ_1 and σ_2, iff σ_1 ≺σ_2. See Fig. <ref> for a graphical representation, and it can be directly found by definition that the Hasse diagram is a directed acyclic graph (DAG).
The Hasse diagram is very expressive, and the message-passing-based simplicial network <cit.> is built precisely on the boundary incidence relationships shown in the Hasse diagram.
§.§ Spectral Graph Neural Networks
Spectral-based GNNs employ spectral graph convolutions within the domain of the Laplacian spectrum. Recent studies suggest that many prevalent models utilize polynomial spectral filters to establish graph convolutions. A basic graph spectral filtering operation is formulated as
y = U diag[h(λ_1),⋯,h(λ_n)] U^⊤ 𝐱
  = U h(Λ) U^⊤ 𝐱
  ≈ ∑_k=0^K ω_k L^k 𝐱,
where L=I-D^-1/2AD^-1/2 represents the normalized Laplacian matrix of graph 𝒢, 𝐱 denotes the graph signal, and h(Λ) is the graph filter with weights ω_k.
There are fruitful studies on designing appropriate graph filters. ChebNet <cit.> is a seminal attempt that employs the Chebyshev polynomial to approximate the graph filter as follows:
y ≈∑_k=0^K ω_k T_k(L)𝐱.
Here, the Chebyshev polynomial coefficient T_k(L) is defined by the recursive equation T_k(x)=2xT_k-1(x)-T_k-2(x), with T_0(x)=1 and T_1(x)=x.
As K increases, ChebNet is able to approximate arbitrary spectral filters.
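The truncated expansion can be evaluated without any eigendecomposition via the three-term recursion; the sketch below is illustrative and assumes, as is standard for ChebNet, that the Laplacian has been rescaled so that its spectrum lies in [-1, 1] and that at least two coefficients are supplied.

```python
import numpy as np

def chebyshev_filter(L, x, omega):
    # y ~= sum_k omega[k] * T_k(L) x, using
    # T_k(L)x = 2 L T_{k-1}(L)x - T_{k-2}(L)x.
    T_prev, T_cur = x, L @ x
    y = omega[0] * T_prev + omega[1] * T_cur
    for k in range(2, len(omega)):
        T_prev, T_cur = T_cur, 2.0 * (L @ T_cur) - T_prev
        y = y + omega[k] * T_cur
    return y
```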
GCN <cit.> leverages truncated Chebyshev polynomials with the first two terms (i.e., K=1), resulting in a fixed low-pass filter.
Existing research shows that ChebNet is theoretically more expressive than GCN <cit.>.
However, empirical experiments reveal that ChebNet is inferior to GCN for semi-supervised node classification tasks, which is really counter-intuitive. It has been found that ChebNet’s inferior performance primarily stems from illegal coefficients learned by ChebNet approximating analytic filter functions, subsequently leading to over-fitting <cit.>.
In recent years, a plethora of studies have drawn inspiration from ChebNet, leveraging Monomial (e,g., GPRGNN <cit.>), Bernstein (e.g., BernNet <cit.>), or Jacobi (e.g., JacobiConv <cit.>) bases to approximate graph filters.
§.§ Information Diffusion Models
This section introduces two information diffusion models employed in this study: the susceptible-infected-recovered (SIR) model and its higher-order version HSIR model.
The susceptible-infected-recovered (SIR) model is a classical epidemiological model describing real-world disease epidemics <cit.>, and it has also been widely employed to analyze multiple spreading processes such as rumors, information, and biological diseases.
The entire population is partitioned into three discrete states: susceptible (S), infected (I), and recovered (R), and each individual belongs to one of these states at any time.
The infected nodes spread the disease or information to their susceptible neighbors with a certain probability β and can recover with probability γ. The model assumes that once recovered, the individual is immune to the disease and cannot be reinfected again. Such processes can be represented as
S+I β→ 2 I, I γ→ R.
Notably, when γ=0, the SIR model degrades to the SI model.
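A minimal discrete-time implementation of this process is sketched below; it assumes γ > 0 so that the loop terminates. The infection ability of a node (or, later, of a simplex) is then obtained by averaging the returned outbreak size over many runs, seeding the node (or all nodes of the simplex) as initial spreaders.

```python
import random

def sir_outbreak_size(neighbors, seeds, beta, gamma):
    # neighbors: dict mapping each node to a list of its neighbors.
    infected, recovered = set(seeds), set()
    while infected:
        newly_infected = set()
        for u in infected:
            for v in neighbors[u]:
                if v not in infected and v not in recovered:
                    if random.random() < beta:
                        newly_infected.add(v)
        newly_recovered = {u for u in infected if random.random() < gamma}
        infected = (infected | newly_infected) - newly_recovered
        recovered |= newly_recovered
    return len(recovered)
```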
The higher-order SIR (HSIR) model accounts for the fact that diffusion processes occur simultaneously through links or group interactions with different rates <cit.>.
In this model, a set of control parameters β_1, β_2, β_3, ⋯, β_D govern the HSIR dynamics, with each element representing the probability per unit time for a susceptible node u participating in a simplex σ of dimension D to contract the infection from each one of the subfaces comprising σ, provided that all other nodes in σ are infectious.
In practice, β_1 is equal to the standard infection probability β that a susceptible node u contracts infection from an infected neighbor v via the 1-simplex [u,v].
Similarly, the second parameter β_2 corresponds to the probability per unit time that node u receives the infection from a 2-simplex (“full” triangle) [u,v,w], wherein both v and w are infectious.
This pattern continues for higher dimensions.
In simulations, all nodes are initially set to the susceptible state except a few initial spreader nodes that are in the infected state. The diffusion process continues until the number of newly infected individuals reaches zero.
To prevent an excessive or insufficient number of nodes from becoming infected, which would diminish the accuracy of evaluating a node's infective capacity, the infection probability β should be set near the infection probability threshold β_th <cit.>. Mathematically, β_th is defined as:
β_th = ⟨ k ⟩/(⟨ k^2 ⟩ - ⟨ k ⟩) · γ.
Here, ⟨ k ⟩ and ⟨ k^2 ⟩ denote the average degree and the average square degree, respectively.
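Given a degree sequence, the threshold can be computed directly; the following short function simply instantiates the formula above.

```python
import numpy as np

def beta_threshold(degrees, gamma):
    # beta_th = gamma * <k> / (<k^2> - <k>)
    k = np.asarray(degrees, dtype=float)
    return gamma * k.mean() / ((k ** 2).mean() - k.mean())
```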
Table <ref> summarizes the notations and definitions throughout this paper for clarity.
§ PROPOSED FRAMEWORK
In this section, we formally propose the framework of the influential simplices mining neural network (ISMnet), a kind of simplicial convolutional network, to identify influential h-simplices in simplicial complexes.
§.§ Higher-order Representations
Hasse diagrams offer essential insights into the analysis of simplicial complexes; however, their construction can be computationally costly, especially in dense graphs where the total number of simplices increases exponentially with the number of nodes.
Consequently, computing embeddings for all simplices would be both inefficient and unnecessary in this study, as we are devoted to identifying vital h-simplices.
To deal with this problem, we introduce a novel higher-order representation called hierarchical bipartite graphs.
Moreover, based on higher-order random walk dynamics within these bipartite graphs, we develop higher-order hierarchical adjacency and Laplacian matrices.
In the proposed higher-order hierarchical bipartite graph model, we organize all p-simplices into one layer, referred to as a p-layer. Assuming that h-simplices are the targets (i.e., our focus is on identifying vital h-simplices), we thus designate the h-layer as the hub layer, while the remaining layers are considered fringe layers. Interaction is permitted exclusively between the hub layer and fringe layers, which implies no interaction is allowed among the fringe layers themselves. Consequently, each pair consisting of a hub h-layer and a fringe f-layer forms a bipartite graph 𝒢_h,f.
Drawing inspiration from incidence matrices in pairwise networks, we can analogously define a higher-order incidence matrix ℬ_h,f to represent the incidence relationship within the bipartite graph 𝒢_h,f.
Specifically, rows in ℬ_h,f correspond to h-simplices, columns represent f-simplices, and the matrix entries indicate their affiliation. Formally, it is defined as:
ℬ_h,f(α_i,τ_j) = 1 if α_i ⊂ τ_j or α_i ⊃ τ_j, and 0 otherwise.
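A straightforward construction of this incidence matrix from simplex lists is sketched below; representing simplices as frozensets of vertex identifiers is an implementation choice of ours, and containment is strict since h ≠ f.

```python
import numpy as np

def hoh_incidence(h_simplices, f_simplices):
    # h_simplices, f_simplices: lists of frozensets of vertex ids.
    B = np.zeros((len(h_simplices), len(f_simplices)))
    for i, alpha in enumerate(h_simplices):
        for j, tau in enumerate(f_simplices):
            if alpha < tau or alpha > tau:  # alpha ⊂ tau or alpha ⊃ tau
                B[i, j] = 1.0
    return B
```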
The dynamics of random walks can be utilized to understand how information flows are locally trapped and to uncover the intrinsic properties of both pairwise networks and higher-order networks <cit.>.
Random walk dynamics on the proposed hierarchical bipartite graphs consist of two sub-steps: walking away and walking back.
Walking away refers to the walk from the hub simplices to their corresponding fringe simplices, while the `walking back' process follows the reverse direction.
Let p_σ(t) represent the probability of a simplex σ being occupied by a random walker at step t.
In the waling away process, the probability of moving from the hub simplex σ (∈𝒦_h) to the fringe simplex τ (∈𝒦_f) is equal to ℬ_h,f(σ,τ)/δ_h,f(σ). Hence, we can obtain that
p_τ(t-1) = ∑_σℬ_h,f(σ,τ)/δ_h,f(σ) p_σ(t-2),
where δ_h,f(σ) = ∑_τ∈𝒦_fℬ_h,f(σ,τ) denotes the degree of σ in 𝒢_h,f, i.e., the number of f-simplex that incident to σ.
Let p(t) denote the column vector of occupation probabilities on the layer that the walker occupies at step t, and let Λ_h,f = diag(δ_h,f(σ_1),⋯,δ_h,f(σ_n_h)); we can then derive the equivalent matrix representation as follows:
p(t-1) = ℬ_h,f^⊤ Λ_h,f^-1 p(t-2).
Walking back facilitates the transfer of information from the fringe simplices back to the hub simplices, in which process fringe layers function as transit stations for information.
This process is governed by:
p(t) = ℬ_h,f Λ_f,h^-1 p(t-1).
Here, Λ_f,h = diag(δ_f,h(τ_1),⋯,δ_f,h(τ_n_f)), where δ_f,h(τ) = ∑_σ∈𝒦_hℬ_h,f(σ,τ) denotes the degree of the fringe simplex τ in 𝒢_h,f.
The two-step walk combines the dynamics of walking away and walking back, resulting in the following dynamic:
Λ_h,f^-1/2 p(t)
= [Λ_h,f^-1/2 ℬ_h,f Λ_f,h^-1 ℬ_h,f^⊤ Λ_h,f^-1/2] Λ_h,f^-1/2 p(t-2).
Here, we have multiplied both sides of the equation by Λ_h,f^-1/2 on the left.
Consequently, we can define higher-order hierarchical (HoH) adjacency matrices based on the random walk dynamics in the bipartite graph 𝒢_h,f as
𝒜_h,f = Λ_h,f^-1/2 ℬ_h,f Λ_f,h^-1 ℬ_h,f^⊤ Λ_h,f^-1/2.
The higher-order hierarchical (HoH) Laplacians matrices can then be derived as:
ℒ_h,f= I - 𝒜_h,f,
where I denotes the identity matrix.
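Building on the incidence matrix, the HoH adjacency and Laplacian matrices can be assembled as follows; this sketch follows the symmetrized form above, and the small eps guard for isolated simplices is our addition.

```python
import numpy as np

def hoh_adjacency_laplacian(B, eps=1e-12):
    # B: (n_h, n_f) incidence matrix B_{h,f}.
    d_hub = np.maximum(B.sum(axis=1), eps)     # delta_{h,f}(sigma)
    d_fringe = np.maximum(B.sum(axis=0), eps)  # delta_{f,h}(tau)
    Dh_inv_sqrt = np.diag(1.0 / np.sqrt(d_hub))
    Df_inv = np.diag(1.0 / d_fringe)
    A = Dh_inv_sqrt @ B @ Df_inv @ B.T @ Dh_inv_sqrt
    L = np.eye(B.shape[0]) - A
    return A, L
```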
§.§ Influential Simplices Mining Neural Network
In recent years, GNN techniques combined with higher-order network models have gained significant attention as they can effectively capture the multi-relational and multi-dimensional characteristics of complex systems.
In this study, we propose an influential simplices mining neural network (ISMnet) model, which is, in essence, a kind of simplicial convolutional network, for influential h-simplices mining tasks.
Based on the proposed HoH adjacency matrices, we can calculate embeddings for h-simplices from each fringe simplex layer as follows:
Y_f = g_f(𝒜_h,f)φ_f(X).
Here, g_f denotes the graph filter for the bipartite graph 𝒢_h,f formed by the hub h-layer and the fringe f-layer, and φ_f(X) represents a multilayer perceptron (MLP) applied to the simplex feature matrix X∈ℝ^n_h× d (n_h is the number of simplices in the hub h-layer and d represents the dimension of features).
Chebyshev polynomials are recognized for achieving near-optimal error in approximating functions, and it has been established that the inferior performance of ChebNet is predominantly attributed to the presence of illegitimate coefficients <cit.>. To address this issue and improve the model's expressiveness, we adopt the improved Chebyshev polynomial to approximate graph filters, as proposed by <cit.>. Consequently, we employ the following embedding update process:
Y = ∥_f=0, f≠h^ℱ ∑_k=0^K (ω_f,k/k) T_k(𝒜_h,f) φ_f(X).
Here, T_k(·) is the Chebyshev polynomial, ω_f,k/k denotes the Chebyshev coefficients implemented by reparameterizing the learnable parameters ω_f,k, and the operation ∥ concatenates the representations from the different spectral domains.
Subsequently, we obtain the predicted influence scores for hub h-simplices as
𝒮̂_h = σ( ρ(Y) ),
where ρ(·) denotes a linear operation to reshape the final embeddings and σ(·) is the nonlinear activation function.
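A minimal PyTorch sketch of one ISMnet block is given below. For brevity, plain learnable coefficients stand in for the paper's reparameterized Chebyshev coefficients, all module and parameter names are ours, and K ≥ 1 is assumed.

```python
import torch
import torch.nn as nn

class ISMnetLayer(nn.Module):
    def __init__(self, in_dim, hid_dim, K, num_fringe):
        super().__init__()
        self.K = K
        # One MLP phi_f per fringe layer.
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                          nn.Linear(hid_dim, hid_dim))
            for _ in range(num_fringe))
        self.omega = nn.Parameter(torch.ones(num_fringe, K + 1))
        self.readout = nn.Linear(num_fringe * hid_dim, 1)

    def forward(self, A_list, X):
        # A_list: HoH adjacency matrices A_{h,f}, one per fringe layer;
        # X: (n_h, in_dim) hub-simplex feature matrix.
        outs = []
        for f, A in enumerate(A_list):
            Z = self.mlps[f](X)
            T_prev, T_cur = Z, A @ Z              # T_0(A)Z and T_1(A)Z
            acc = self.omega[f, 0] * T_prev + self.omega[f, 1] * T_cur
            for k in range(2, self.K + 1):
                T_prev, T_cur = T_cur, 2.0 * (A @ T_cur) - T_prev
                acc = acc + self.omega[f, k] * T_cur
            outs.append(acc)
        Y = torch.cat(outs, dim=-1)               # concatenation ∥
        return torch.sigmoid(self.readout(Y)).squeeze(-1)
```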
§.§ Loss Function
In our model, the predicted outcome corresponds to the influence of each simplex, and we compute the loss by comparing the difference between the predicted influence ranking and the ground-truth ranking.
Let 1=(1,1,⋯,1)^⊤ be a column vector with all elements equal to one.
Based on the obtained embeddings for hub simplices, we can calculate the predicted simplex ranking matrix ℛ̂ = tanh( 𝒮̂_h 1^⊤ - 1 𝒮̂_h^⊤ ) ∈ ℝ^n_h× n_h, with the (i,j) element ℛ̂_i,j = tanh( ŝ_i - ŝ_j ), where ŝ_i denotes the predicted score of simplex i.
The loss function is then defined as:
loss = - ∑_i<jℛ̂_i,jℛ_i,j,
where ℛ_i,j is the ground truth ranking between simplices i and j.
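The pairwise ranking loss admits a compact vectorized implementation; the following sketch instantiates the definition, restricting the sum to pairs with i < j.

```python
import torch

def ranking_loss(pred_scores, true_rank):
    # pred_scores: (n,) predicted influence scores;
    # true_rank:   (n, n) ground-truth matrix with entries in {-1, 0, 1}.
    diff = pred_scores.unsqueeze(1) - pred_scores.unsqueeze(0)
    pred_rank = torch.tanh(diff)
    mask = torch.triu(torch.ones_like(pred_rank), diagonal=1)  # i < j
    return -(pred_rank * true_rank * mask).sum()
```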
To summarize, our proposed hierarchical bipartite graph model and the associated higher-order adjacency and Laplacian matrices offer a scalable and efficient approach to studying higher-order structures in complex networks. By utilizing these topological tools, researchers can effectively identify vital h-simplices and analyze higher-order network properties in a more computationally feasible manner than classical methods.
§ EXPERIMENTS
This section discusses the results obtained with the proposed framework for influential simplices mining.
The simplicial complexes used in this work to demonstrate the applicability of the proposed framework are first discussed, followed by the evaluation metrics used to report the performance of the framework.
Subsequently, we report the influential simplices mining results on empirical and synthetic SC datasets.
§.§ Datasets
In order to evaluate the performance of the proposed algorithm in identifying h-simplex more comprehensively, we choose three empirical coauthorship complexes, namely Geology, History, and DBLP <cit.>, together with six pairwise networks, namely Figeys <cit.>, GrQC <cit.>, Hep <cit.>, NZC <cit.>, Sex <cit.> and Vidal <cit.>.
A coauthorship complex is a simplicial complex wherein an academic paper, contributed to by h+1 authors, is represented by an h-simplex. The sub-simplices of the h-simplex symbolize the collaborative interactions among various subsets of authors - a refined hierarchical representation that would be missed by the hypergraph representation of papers as hyperedges between authors.
Within this framework, the influence exerted by the h-simplex is ascertained by the count of scholarly publications attributed to the collaborations involving the particular (h+1) authors.
As for these pairwise networks employed, we generate SCs by finding all cliques and treating them as simplices.
Different from acquiring simplex influence scores through empirical network sampling, we obtain influence scores for these generated SCs by various information diffusion models.
Specifically, in complex networks, the extent of infections originating from a specific initial node as the contagion source exemplifies its capacity to influence other nodes, referred to as the node infection ability.
Likewise, we define simplex infection ability as the proportion of infections originating from a specific initial simplex as the contagion source when the dynamic reaches a stable state.
Notably, in numerical experiments, the simplex infection ability is taken as the mean over 1000 independent simulations to decrease the impact of noise and randomness.
Inspired by <cit.>, simplex infection ability is employed as simplex influence scores for these generated simplicial complexes, which facilitates a comprehensive evaluation of the effectiveness of different centrality measures in capturing the simplices' true influence.
The topological features of these datasets are summarized in Table <ref>.
We defer the coauthorship complexes construction details and description of employed pairwise networks to Supplementary Information.
These datasets exhibit varied topological features, allowing us to assess the adaptability and robustness of our proposed method across diverse network structures.
In order to avoid relying too much on feature engineering and to ensure the efficiency of the proposed algorithm, four classic centralities with low computational complexity are employed as the inputs, namely degree, neighbor degree, H-index, and coreness value.
We obtain the features of a simplex by averaging the features of its comprising vertices. The descriptions of these employed metrics are presented in Table <ref>.
It is important to emphasize that the simplex features employed are arbitrary.
In fact, while the selection of these features is motivated by the aim of minimizing computational complexity, GNNs are capable of processing diverse combinations of features.
This implies that, in order to achieve improved performance, one can opt for more intricate and comprehensive features.
The flexibility demonstrated by GNNs highlights their adaptability to accommodate varying feature representations, thus enabling researchers to tailor the selection of features to specific performance requirements.
§.§ Evaluation Metrics
To quantitatively evaluate the performance of various ranking methods, we adopt the Kendall rank correlation as a measure of ranking accuracy.
Let X = (x_1, x_2, ⋯, x_m) and Y = (y_1, y_2, ⋯, y_m) be two score lists of equal length m.
The Kendall rank correlation coefficient is mathematically defined as follows:
τ = (c - d) / ( m(m-1)/2 ),
where c represents the number of concordant pairs between X and Y, and d represents the number of discordant pairs.
Specifically, a pair of rankings (x_i, y_i) and (x_j, y_j) is considered concordant if x_i > x_j and y_i > y_j, or x_i < x_j and y_i < y_j. The pair is neither concordant nor discordant if x_i = x_j or y_i = y_j. Any other conditions are considered discordant.
The Kendall rank correlation τ ranges between -1 and 1, where a value closer to 1 indicates a higher similarity between the two sorted lists.
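A direct implementation of this definition, in which tied pairs contribute to neither count, is sketched below; note that off-the-shelf routines such as scipy's kendalltau treat ties differently (tau-b) and may deviate slightly.

```python
import numpy as np

def kendall_tau(x, y):
    x, y = np.asarray(x), np.asarray(y)
    m = len(x)
    c = d = 0
    for i in range(m):
        for j in range(i + 1, m):
            s = np.sign(x[i] - x[j]) * np.sign(y[i] - y[j])
            c += s > 0
            d += s < 0
    return (c - d) / (m * (m - 1) / 2)
```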
§.§ Empirical Influencer Mining Results
We obtain the coauthorship complexes in three specific domains, namely Geology, History, and DBLP, wherein academic cooperation by h+1 authors is represented by an h-simplex. The influence exerted by the h-simplex is ascertained by the count of scholarly publications attributed to the collaborations involving the particular h + 1 authors.
In this section, we focus on the task of mining influential 2-simplices. Therefore, 2-simplices are placed in the hub layer.
It is worth noting that we can perform simplices mining tasks of any dimension by simply changing the hub layer.
We visualize the relationship between the true 2-simplex influence scores and the average metrics of the nodes within those simplices in Fig. <ref>.
Specifically, node-level metrics include degree centrality (DC), neighbor degree (ND), higher-order degree (HD), coreness centrality (CC), H-index (HI), and true node influence score (NI).
It can be observed that the influence scores derived from node-level metrics contain a considerable number of less important and indistinguishable 2-simplices, while failing to identify some influential 2-simplices.
In particular, the two most important 2-simplices, marked with red circles in Fig. <ref>b, can be identified with ISMnet, but not with all node-level metrics in Fig. <ref>a.
These findings indicate an inconsistency between mining influential nodes and mining influential simplices, and thus the significance of nodes alone cannot adequately reflect the importance of simplices. This underscores the necessity for dedicated approaches for influential simplices mining.
To verify the generalizability of this finding and the broad applicability of our model, we carried out additional experiments on more coauthorship complexes, and the results are presented in Table <ref>.
It can also be observed that the true simplex influence score exhibits the highest correlation with the values predicted by ISMnet, whereas the correlation with other traditional metrics is comparatively low.
§.§ Synthetic Influencer Mining Results
Compared to empirical complexes, synthetic data can be used to study a wide range of phenomena, from biological systems to social networks, without being limited by the specific properties of any particular empirical network <cit.>.
Moreover, synthetic data does not require real-world data collection, which can be expensive, time-consuming, and sometimes impossible.
With these advantages, we validate the ISMnet's performance by identifying vital nodes and 2-simplices and quantifying the strength of higher-order effects.
In the experiments, we obtain the simplex infection ability by SIR and HSIR simulations, which serves as the true simplex influence score for the generated simplicial complexes, i.e., it forms the labels for training the ISMnet model.
§.§.§ Influential Nodes Mining
To prove ISMnet's outperformance in identifying influential nodes, we compare the ranking accuracy of ISMnet with other methods on six real-world networks by the Kendall rank correlation coefficient.
The compared methods include four traditional metrics: coreness centrality (CC), degree centrality (DC), H-index (HI), neighbor degree (ND), and two deep learning methods: RCNN <cit.>, and MRCNN <cit.>.
The importance of each node is calculated under the ℱ-order ISMnet model, where ℱ denotes the maximum order of the simplices considered in the simplicial complexes.
The maximum order of simplices considered in ISMnet is set to 5 if enough memory is available.
It is evident that ISMnet achieves the highest correlations across ten different infection rates, indicating its superior performance.
Additionally, MRCNN demonstrates the best correlation except for ISMnet, highlighting the potential of machine learning methods to surpass traditional classical metrics.
Among the traditional metrics, ND performs better than the others, while the remaining three methods show similar correlation results.
This experiment showcases the ability of ISMnet to outperform existing machine learning methods in the conventional task of identifying important nodes, thereby establishing a solid foundation for the identification of important 2-simplices.
§.§.§ Influential Simplices Mining
To explore the effectiveness of the ISMnet model for higher-order tasks, we evaluate its performance in identifying influential 2-simplices.
Likewise, we employ four traditional metrics as benchmarks, namely coreness centrality (CC), degree centrality (DC), H-index (HI), and neighbor degree (ND).
The corresponding simplex influence scores are then obtained by taking the average value of the three nodes' influence scores.
We also include a higher-order metric, generalized degree (HD), which directly derives the simplex influence scores.
Notably, when calculating the influence score of each 2-simplex, the maximum order ℱ of ISMnet is the same as that used in identifying 0-simplices for each dataset.
To eliminate contingency, the final influence scores given by ISMnet are calculated by repeating 100 times under each parameter.
Specifically, in SIR model simulations, we generate influence score labels under 11 different propagation probabilities: β/β_th= { 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0 }.
Fig. <ref> presents the Kendall rank correlation coefficient of each method across six datasets.
The results clearly indicate the superiority of ISMnet over other metrics in all six datasets.
In the Hep dataset, ISMnet outperforms other methods by up to 0.6 under each infection rate.
Notably, different metrics exhibit varying performance across different datasets; while ISMnet consistently performs well, showing stability and effectiveness.
In general, ISMnet consistently demonstrates the best performance on each dataset, highlighting its effectiveness and robustness.
Our framework provides a flexible and efficient approach that can adapt to diverse contexts and learn different embeddings for simplices based on the given objectives. To demonstrate this, we utilize the HSIR model, a more complex information diffusion model, to generate the simplex influence scores.
As for numerical simulation, we only consider simplices with dimensions no more than 2 for simplicity, and thus the probability of infection P_i on node i can be calculated as P_i=1-(1-β)^m_1 (1-β_2)^m_2, where m_p denotes the number of infected p-simplices that i participated in.
We generate influence scores under 6 different propagation probabilities: β_2/β= { 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 }, with β/β_th= 1.0 in all cases.
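A sketch of the per-node HSIR infection update is given below; the simplex bookkeeping (edges_of, triangles_of) is our assumed data structure, not part of the original model description.

```python
# Hedged sketch of the HSIR update P_i = 1 - (1-beta)^{m_1} (1-beta2)^{m_2}.
# edges_of[i]    : edges (1-simplices) containing node i    (assumed structure)
# triangles_of[i]: triangles (2-simplices) containing i     (assumed structure)
def infection_probability(i, infected, edges_of, triangles_of, beta, beta2):
    # m_1: edges of i whose other endpoint is currently infected
    m1 = sum(1 for (u, v) in edges_of[i]
             if (v if u == i else u) in infected)
    # m_2: triangles of i whose other two vertices are both infected
    m2 = sum(1 for tri in triangles_of[i]
             if all(w in infected for w in tri if w != i))
    return 1.0 - (1.0 - beta) ** m1 * (1.0 - beta2) ** m2
```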
Fig. <ref> visualizes the Kendall rank correlation coefficient of each method under six datasets.
The color scale from dark blue to light yellow indicates the ascending order of correlation ranks, with dark blue representing the lowest and light yellow the highest rank. It is evident that ISMnet consistently achieves the highest correlation across all infection rates in the HSIR experiment, further confirming the effectiveness of the model.
Because traditional indicators perform inconsistently across tasks and datasets, we conclude that no universal index can accurately quantify the importance of simplices in all scenarios; a learning-based framework such as ours, which adapts its simplex embeddings to the given objective, is therefore preferable.
To confirm the greater impact of the vital 2-simplices selected by ISMnet on the network topology, we further immunize the important 2-simplices and conduct epidemic spreading (SIR) experiments.
Specifically, we set the nodes within the top 5% 2-simplices ranked by each method to be immunized, i.e., be in the recovery state.
Subsequently, we randomly pick 5% from the rest of the nodes to be the initial infected nodes, and the SIR simulation is processed in each network under the infection rate ranging from 0 to 25 with an interval of 0.25.
To eliminate the effect of randomness, we repeat each set of experiments 100 times.
The evaluation criterion is the final ratio of recovered proportion r at the steady state, where a lower value indicates a more effective immunization strategy.
We compare ISMnet with five centrality metrics, namely coreness centrality (CC), degree centrality (DC), H-index (HI), neighbor degree (ND), and generalized degree (HD).
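A compact sketch of this immunization protocol is shown below; sir_final_recovered is a hypothetical helper (e.g. the SIR routine sketched earlier, extended with an immune set), and the 5%/5% fractions follow the text.

```python
# Sketch of the immunization experiment: immunize nodes of the top-5%
# ranked 2-simplices, seed 5% random infections, record the final recovered
# ratio r.  `sir_final_recovered` is a hypothetical helper, not a library call.
import random

def immunization_curve(G, ranked_triangles, betas, frac_imm=0.05,
                       frac_seed=0.05, runs=100):
    k = int(frac_imm * len(ranked_triangles))
    immune = {v for tri in ranked_triangles[:k] for v in tri}
    rest = [v for v in G.nodes if v not in immune]
    n_seed = int(frac_seed * G.number_of_nodes())
    curve = []
    for beta in betas:
        r = sum(sir_final_recovered(G, random.sample(rest, n_seed), immune, beta)
                for _ in range(runs)) / runs
        curve.append(r)  # lower r = more effective immunization
    return curve
```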
In Fig. <ref>, we observe that the proportion of recovered nodes is the lowest when immunizing the 2-simplices selected by ISMnet.
This result demonstrates the superior performance of ISMnet compared to other methods, confirming its ability to identify more influential 2-simplices.
§.§.§ Quantifying Higher-order Effects
The aforementioned experiments have been conducted using the ISMnet with a maximum order of simplices up to 5 if enough memory is available.
To examine the impact of different order models on the accuracy, we compared the accuracy under ISMnet models with varying maximum orders using the Kendall rank correlation coefficient.
The ground-truth simplex influence scores are determined using the HSIR simulation model.
In Fig. <ref>, it can be observed that in the Figeys, GrQC, NZC, and Vidal datasets, the accuracy obtained by the higher-order ISMnet method is higher compared to lower-order models. However, in the Hep and Sex datasets, the accuracy does not exhibit a clear increasing trend with higher orders.
Although not all networks demonstrate a consistent increasing trend with the order, and some networks may not be sensitive to order variations, the majority of networks do exhibit such a trend.
The accuracy obtained through higher-order ISMnet provides a quantitative measure of the extent to which higher-order structures contribute to the identification of important simplices in the networks.
§ CONCLUSION
This paper introduces a novel higher-order graph representation: hierarchical bipartite graphs, where each layer contains one order of simplices in the network.
The higher-order hierarchical Laplacian is constructed based on the random-walk process on the hierarchical bipartite graphs.
Based on this novel higher-order representation, we propose an influential simplices mining neural network (ISMnet) to identify the most influential h-simplices, which is essentially a kind of novel simplicial convolutional neural network.
This new model can gather information from simplices of every order without extra embedding, which makes higher-order embeddings better suited to identifying important higher-order structures.
We demonstrate the superior performance of ISMnet in identifying both 0-simplices and 2-simplices, where it consistently outperforms competing methods.
Moreover, this model can also be used to identify important structures of order higher than 2, promising applications in a variety of downstream tasks such as finding the most innovative scientific groups and identifying the core transportation hubs of a city.
This study holds the potential to provide unprecedented insights and emerge as a powerful tool in the analysis of higher-order networks.
Moreover, this model can be extended to further downstream tasks:
Enhancing community detection: Discovering vital simplices can improve community detection algorithms by incorporating higher-order structures into the partitioning process. This can lead to more accurate and meaningful community identification.
Predicting network behavior: By studying vital simplices, researchers can develop models that better predict network behavior, response to stimuli, or evolution over time.
Validating network models: Comparing the vital simplices found in empirical networks with those in synthetic network models can help assess the models' accuracy and guide further refinement.
IEEEtran
|
http://arxiv.org/abs/2307.04517v1 | 20230710122524 | Study on the Correlation between Objective Evaluations and Subjective Speech Quality and Intelligibility | [
"Hsin-Tien Chiang",
"Kuo-Hsuan Hung",
"Szu-Wei Fu",
"Heng-Cheng Kuo",
"Ming-Hsueh Tsai",
"Yu Tsao"
] | eess.AS | [
"eess.AS"
] |
Study on the Correlation between Objective Evaluations and Subjective Speech Quality and Intelligibility
Hsin-Tien Chiang, Kuo-Hsuan Hung, Szu-Wei Fu, Heng-Cheng Kuo, Ming-Hsueh Tsai, Yu Tsao
=========================================================================================================
Subjective tests are the gold standard for evaluating speech quality and intelligibility, but they are time-consuming and expensive. Thus, objective measures that align with human perceptions are crucial. This study evaluates the correlation between commonly used objective measures and subjective speech quality and intelligibility using a Chinese speech dataset. Moreover, new objective measures are proposed combining current objective measures using deep learning techniques to predict subjective quality and intelligibility. The proposed deep learning model reduces the amount of training data without significantly impacting prediction performance. We interpret the deep learning model to understand how objective measures reflect subjective quality and intelligibility. We also explore the impact of including subjective speech quality ratings on speech intelligibility prediction. Our findings offer valuable insights into the relationship between objective measures and human perceptions.
Objective measures, subjective listening tests, speech quality, speech intelligibility
§ INTRODUCTION
Speech quality and intelligibility are crucial in various speech-related applications, such as speech enhancement (SE), teleconferencing, voice conversion and text-to-speech, and hearing aids. As humans are the end-users of these applications, subjective listening tests are considered the most precise and trustworthy way to evaluate speech quality and intelligibility. However, conducting listening tests on a large number of participants is time-consuming and expensive. Therefore, a significant amount of research has been devoted to developing objective measures that can mathematically quantify speech quality and intelligibility.
Objective measures can be divided into intrusive measures, where quality and intelligibility are estimated by comparing degraded/processed speech with clean references, and non-intrusive measures, where quality and intelligibility are calculated directly on the degraded/processed speech without clean reference. Perceptual Evaluation of
Speech Quality (PESQ) <cit.> and Perceptual Objective Listening Quality Analysis (POLQA) <cit.> are intrusive speech quality measures. Despite being widely used in speech processing research, PESQ and POLQA are shown to correlate suboptimally with subjective tests <cit.>. Short-time objective intelligibility measure (STOI) <cit.> and extended STOI (ESTOI) <cit.> are popular intrusive speech intelligibility measures. However, STOI has been reported to provide suboptimal prediction capability for the subjective intelligibility results of the Wiener filtering <cit.> or deep learning (DL)-based <cit.> SE systems. Moreover, intrusive measures are less applicable to real-world scenarios because clean signals may not always be available. Compared with intrusive methods, non-intrusive methods such as ITU-T P.563 <cit.>, ANIQUE+ <cit.>, and speech-to-reverberation modulation ratio (SRMR) <cit.> overcome the limitation.
A recent approach of non-intrusive methods directly predicts objective measures by using DL models without the need of a clean signal. These models are trained to predict standard objective measures, such as PESQ and STOI <cit.>. Several studies have shown high performance using this approach, but the ground-truth labels used to train these DL models are not always aligned with human perception. To better align with human perception, researchers have started to rely on ground truth human labels for model training. DNSMOS <cit.> and NISQA <cit.> are examples of DL models trained on mean opinion score (MOS) datasets, where DNSMOS focuses on distortions in SE and NISQA on distortions in communication networks. Andersen et al. <cit.> and Pedersen et al. <cit.> used convolutional neural network (CNN) models to predict subjective intelligibility. However, owing to the time-consuming nature of conducting subjective listening tests, collecting large-scale datasets of human labels to train DL-based models is challenging.
One potential solution to bridge the gap between objective measures and human perception without relying on large-scale datasets of human labels is to predict human perception of speech quality and intelligibility by leveraging commonly used objective measures. The advantage of this approach is that it is considerably less time-consuming compared to conducting subjective listening tests. Previous studies have attempted to predict either speech quality or intelligibility using objective measures. Hu et al. <cit.> proposed composite measures for evaluating SE by linearly combining objective quality measures. Liu et al. <cit.> showed that ASR and objective quality measures have the potential to estimate intelligibility under noisy conditions. Ma et al. <cit.> reported that objective measures originally designed to predict speech quality can reliably predict the intelligibility of noise-suppressed speeches. However, there is a lack of research that considers both quality and intelligibility criteria and provides interpretations of the relationship between objective measures and human perception to indicate how objective measures reflect subjective quality and intelligibility in practical usage.
In this study, we propose for the first time the use of DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. We evaluate the correlation between commonly used objective measures and subjective ratings of quality and intelligibility on a Chinese dataset called TMHINT-QI <cit.>, and then use DL techniques to propose new objective measures that combine all of the objective measures considered. We demonstrate that the proposed DL model can achieve strong performance in predicting subjective quality and intelligibility ratings, even when trained on small amounts of training data. This core strength makes the DL model practical for real-world applications, as it can still maintain high accuracy without requiring a large amount of training data. Furthermore, we interpret the proposed DL models to describe the relationship between the objective measures and subjective ratings of speech quality and intelligibility. We also investigate the potential improvement in intelligibility prediction by incorporating subjective quality ratings. This allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility. Our results can provide valuable insights into the utility and limitations of objective measures in reflecting subjective quality and intelligibility ratings and potentially contribute to bridging the gap between objective measures and human perception.
The remainder of this paper is organized as follows. Section <ref> describes the objective measures used in our experiments. Section <ref> details our dataset and presents our correlation analysis. We present our experimental setup and results in Section <ref>. Finally, we conclude the paper in Section <ref>.
§ OBJECTIVE MEASURES
This study investigates several objective measures to predict subjective speech quality and intelligibility. These measures are categorized as either intrusive or non-intrusive, depending on whether a clean reference is required or not. In this section, we describe both types of objective measures.
§.§ Intrusive objective measures
Six different intrusive objective measures were assessed: PESQ, ITU-T P.835, normalized covariance metric (NCM), STOI, ESTOI, and word error rate (WER). PESQ evaluates speech quality and ranges from -0.5 to 4.5. ITU-T P.835 evaluates speech quality in terms of three aspects: signal quality (SIG), background noise (BAK), and overall quality (OVRL) <cit.>. NCM assesses the covariance between the envelopes of the clean and degraded/processed speech and provides scores ranging from 0 to 1 <cit.>. STOI and ESTOI evaluate speech intelligibility and have scores between 0 and 1. Finally, WER is calculated using Google ASR <cit.>.
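For illustration, several of these intrusive scores can be computed with the open-source pesq and pystoi packages; this is our choice of implementation, as the text does not name the tools used.

```python
# Hedged sketch using the open-source `pesq` and `pystoi` packages (our
# choice of implementation).  PESQ requires fs of 8 or 16 kHz; inputs are
# 1-D numpy float arrays of clean and degraded/processed speech.
from pesq import pesq
from pystoi import stoi

def intrusive_scores(clean, degraded, fs=16000):
    return {
        "pesq":  pesq(fs, clean, degraded, "wb"),             # wideband mode
        "stoi":  stoi(clean, degraded, fs, extended=False),
        "estoi": stoi(clean, degraded, fs, extended=True),
    }
```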
§.§ Non-intrusive objective measures
In addition to intrusive measures, two non-intrusive objective measures were also evaluated: DNSMOS P.835 <cit.> and MOSA-Net <cit.>. DNSMOS P.835 is a multi-stage self-teaching based model that evaluates speech quality based on three aspects: signal quality (DNSMOS-SIG), background noise (DNSMOS-BAK), and overall quality (DNSMOS-OVRL). MOSA-Net uses time and spectral features and latent representations from a self-supervised model, and is originally trained to predict several objective metrics, but can be adapted for MOS predictions. Here we adopted the MOS prediction results of the MOSA-Net.
Overall, four quality measures (PESQ, ITU-T P.835, DNSMOS P.835, MOSA-Net) and four intelligibility measures (NCM, STOI, ESTOI, WER) were involved in this study. Notably, we were able to obtain several objective measures, such as WER, DNSMOS, and MOSA-Net, by leveraging pre-trained APIs from third-party sources, eliminating the need for additional efforts to acquire these pre-trained models or gather extensive amounts of training data.
§ CORRELATIONS BETWEEN OBJECTIVE AND SUBJECTIVE ASSESSMENTS
§.§ Dataset
We conducted experiments using TMHINT-QI [TMHINT-QI dataset: http://gofile.me/6PGhz/4U6GWaOtY; TMHINT-QI dataset description: https://github.com/yuwchen/InQSS] <cit.>, a Chinese corpus containing noisy and enhanced data. To form the noisy data, we corrupted the clean speech from the TMHINT dataset with four types of noise (babble, street, pink, and white) at four different SNR levels (-2, 0, 2, and 5 dB). The noisy data was then enhanced by the minimum mean squared error (MMSE), Karhunen–Loève transform (KLT), deep denoising-autoencoder (DDAE), fully convolutional network (FCN), and transformer model (denoted as Trans).
Human listeners were recruited to evaluate the subjective TMHINT-QI scores. A total of 226 people aged between 20 and 50 years participated in the listening test. The quality score ranges from 1-5, where a higher value indicates a better speech quality. The intelligibility score calculates the number of words correctly recognized by listeners in a ten-word sentence; the intelligibility score ranged from 0-10. A higher intelligibility score indicates that the listeners correctly identify more words. There were 24,408 samples in total. We followed the setup in <cit.> to split the TMHINT-QI dataset into training and test sets. The subjective scores of each utterance were averaged to obtain its ground-truth score. Hence, the final training and test sets contained 12,937 and 1,978 unique utterances, respectively, along with their subjective quality and intelligibility scores. The training set was randomly split into 90% for training and 10% for validation in accordance with <cit.>. More details can be found in <cit.>.
§.§ Correlation analysis
We investigated the relationship between subjective quality and intelligibility ratings and objective measures of the test data by calculating the Pearson correlation coefficient (PCC). The correlation values between the subjective and objective measures are presented in Fig. <ref>, along with the correlation between subjective quality and intelligibility. Several observations are reported in these figures. We first found that human perceptions of quality and intelligibility are moderately correlated with each other, showing a correlation of about 0.68 between subjective quality and intelligibility ratings. In addition, we observed that all objective measures, with the exception of WER, demonstrated higher correlations with subjective quality compared to subjective intelligibility.
For subjective quality, it is interesting to note that a high correlation with PESQ would be expected, but the objective intelligibility measures (i.e., NCM, ESTOI, STOI, and WER) are more highly correlated with subjective quality ratings. For subjective intelligibility, the correlations of objective quality measures (i.e., PESQ, ITU-T P.835, DNSMOS P.835, and MOSA-Net) are generally lower (below 0.24), which is reasonably expected since they were originally designed to predict speech quality. Interestingly, in relation to speech intelligibility, high correlations would be expected for the objective intelligibility measures (i.e., NCM, ESTOI, STOI, and WER), but all except WER show only moderate correlations with subjective intelligibility in our dataset. We also examined the correlations of STOI and PESQ with WER. Fig. <ref> shows the scatter plots of WER against PESQ and STOI. Our finding is consistent with previous studies <cit.>, which show that the correlation value between STOI and WER is higher than that between PESQ and WER. This supports the results in <cit.> that integrating STOI into SE model optimization can improve WER on the enhanced speech. In summary, the strongest absolute correlation with subjective quality is found in NCM, followed by ESTOI and STOI. For subjective intelligibility, WER shows the highest absolute correlation, followed by subjective quality and NCM.
§ EXPERIMENTS
§.§ Experimental setup
The correlation analysis in Section <ref> indicates that none of the objective measures show a strong correlation (above 0.8) with subjective quality and intelligibility ratings, which aligns with the findings of <cit.> that no single objective measure demonstrates a high correlation. Thus, we seek to develop a DL model which takes a combination of objective measures as inputs to predict corresponding subjective quality and intelligibility scores. Fig. <ref> illustrates the details of the proposed DL model. Each of the objective and subjective measures was normalized using min–max to be between 0 and 1 before feeding into the DL model. Twelve objective measures are utilized as input for the DL model. The DL model consists of six dense layers, where each dense layer is followed by GELU activation, except for the last layer, which is followed by a sigmoid activation. This sigmoid activation produces values between 0 and 1, which are then divided into two separate tasks, one for quality estimation and the other for intelligibility prediction. Afterward, the output values are denormalized to obtain the predicted subjective quality and intelligibility scores. To evaluate the performance, three criteria: mean squared error (MSE), PCC and Spearman’s rank correlation coefficient (SRCC) were selected as evaluation criteria.
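A minimal PyTorch sketch of such a predictor is given below; the paper specifies six dense layers, GELU activations, and a final sigmoid, while the hidden width (64) is our assumption.

```python
# Minimal PyTorch sketch of the predictor described above; hidden width is
# our assumption.  Inputs are the 12 min-max-normalized objective measures.
import torch
import torch.nn as nn

class QualIntelNet(nn.Module):
    def __init__(self, n_in=12, hidden=64):
        super().__init__()
        dims = [n_in] + [hidden] * 5 + [2]          # six dense layers in total
        layers = []
        for k in range(6):
            layers.append(nn.Linear(dims[k], dims[k + 1]))
            layers.append(nn.GELU() if k < 5 else nn.Sigmoid())
        self.net = nn.Sequential(*layers)

    def forward(self, x):                            # x: (batch, 12), min-max scaled
        q, i = self.net(x).unbind(dim=-1)            # quality / intelligibility heads
        return q, i                                  # denormalize outside the model
```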
§.§ Experimental results
In our comparative analysis, we examined the performance of the proposed DL model in relation to the LR model, which predicts subjective quality or intelligibility scores separately. Additionally, we incorporated two DL-based non-intrusive speech assessment models into our evaluation. The first model, InQSS, combines self-supervised models and scattering transform. It has the capability to predict both subjective quality and intelligibility simultaneously, as outlined in <cit.>. The second model, MOS-SSL, utilizes fine-tuned features from wav2vec 2.0 to predict MOS <cit.>. We trained the MOS-SSL model on the TMHINT-QI dataset using a single task criterion to predict quality and intelligibility scores as separate targets.
Table <ref> summarizes the results. The superior performance of the DL model over the LR model is evident in both subjective quality and intelligibility prediction. Additionally, the effectiveness of the DL model is confirmed by achieving higher PCC and SRCC scores compared to the InQSS and MOS-SSL methods. These results demonstrate the improved accuracy and reliability of the DL model in predicting subjective quality and intelligibility.
We also examine how well the proposed DL model predicts compared to InQSS and MOS-SSL when different quantities of training data are accessible. To avoid the time-consuming process of conducting listening tests, it is preferable to have a model that demands less training data but still achieves comparable results. Table <ref> illustrates the percentage decrease in PCC for various percentages of training data, while Fig. <ref> visually represents the changes in PCC and SRCC as the amount of training data varies. Table <ref> clearly shows that when trained with only 25% of the data, all three models were close to saturation. The InQSS and MOS-SSL models had a decrease within 3%, while the DL model for quality prediction had a decrease of 1%. For intelligibility prediction, the InQSS and MOS-SSL models had a decrease within 8%, while the DL model had a decrease of 5%. Furthermore, the DL model demonstrated its superiority in performance with a percentage decrease of 3.4% for quality prediction and 4.6% for intelligibility prediction when trained with only 5% of the training data. Fig. <ref> also clearly demonstrates that DL models consistently outperform the InQSS and MOS-SSL models. Moreover, it shows that the increase in PCC and SRCC values gradually slows down when the amount of training data exceeds 1,000 samples. The overall analysis indicates that InQSS and MOS-SSL rely heavily on a large amount of training data and exhibit promising prediction performance only when more training data is available. In contrast, the proposed DL model's capacity to achieve good performance with limited amounts of training data is a significant advantage, particularly since collecting subjective human ratings is a challenging and expensive process.
§.§ Interpretation of the DL model
Our aim is to investigate how each objective measure impacts the prediction performance of subjective quality and intelligibility. In order to uncover the underlying functional relationship between these measures of the DL model, we generated subjective quality and intelligibility scores by feeding data samples obtained from a multivariate normal distribution into the DL model. The subjective quality and intelligibility scores were then divided into 200 equal parts based on the values of the objective measures being analyzed. The scores for each part were averaged, resulting in 200 scores for each subjective quality and intelligibility. These scores were connected to form a line graph, which illustrates the functional relationship between the quality or intelligibility scores and the objective measures. We repeated this process 1,000 times and the functional relationship between the objective and subjective measures of the DL model is depicted in Figure <ref>, where the solid line represents the mean and the light-colored areas represent the standard deviation of the 1,000 lines. We limited our focus to several objective measures due to space limitations.
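A compact sketch of this probing procedure is given below; the fitted mean/covariance (mu, cov) and the hypothetical batched helper model_predict are our assumptions about details the text leaves open.

```python
# Hedged sketch of the probing procedure: sample inputs from a fitted
# multivariate normal, sort by the probed measure, average model outputs in
# 200 equal bins, repeat 1000 times.  `model_predict` (batched inference)
# and the fitted (mu, cov) are our assumptions.
import numpy as np

def response_curve(model, mu, cov, dim, n_samples=20000,
                   n_bins=200, n_repeats=1000):
    curves = []
    for _ in range(n_repeats):
        X = np.random.multivariate_normal(mu, cov, size=n_samples)
        X = X[np.argsort(X[:, dim])]                 # sort by probed measure
        q, i = model_predict(model, X)               # hypothetical helper
        bins = np.array_split(np.arange(n_samples), n_bins)
        curves.append([(X[b, dim].mean(), q[b].mean(), i[b].mean())
                       for b in bins])
    curves = np.asarray(curves)                      # (repeats, bins, 3)
    return curves.mean(axis=0), curves.std(axis=0)   # mean line and spread
```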
From Fig. <ref>, it is evident that the relationships between objective measures and subjective quality is relatively linear compared to subjective intelligibility. As for subjective intelligibility, it can be noted that the slope gradually becomes less steep as objective measures increase. This implied that higher values of objective measures can accurately demonstrate the expected improvement in subjective quality, but not necessarily in subjective intelligibility. In addition, we found that subjective measures decline when DNSMOS-BAK reaches approximately 2.0. More specifically, there is a significant reduction in subjective intelligibility, decreasing from 9.2 to 8.8, while subjective quality experiences a less drastic drop, going from 3.4 to 3.2. Our findings suggest that this phenomenon occurs because attempts to suppress background noise inevitably result in speech distortion, which has a negative impact on speech quality and intelligibility. We can observe that individual objective measures cannot fully capture subjective quality and intelligibility with perfection. This observation supports our rationale for integrating all objective measures in order to establish a strong correlation with subjective listening tests.
§.§ Enhancing intelligibility prediction through the incorporation of subjective quality
While our DL model predicts both subjective quality and intelligibility simultaneously, we are interested in exploring whether including subjective quality can enhance the prediction of intelligibility. The moderate correlation of 0.68 between subjective quality and subjective intelligibility indicates a potential association between the two factors. Consequently, we propose that integrating subjective quality ratings has the potential to enhance the prediction of subjective intelligibility, to some extent at least. Meanwhile, opting for quality tests instead of intelligibility tests provides significant advantages in terms of saving effort. Quality tests require less time compared to the time-consuming process of listening intelligibility tests, which involve word identification for calculating intelligibility scores. Therefore, choosing quality tests offers a more time-efficient approach.
We introduced modifications to the proposed DL model by including subjective quality scores as additional inputs. As a result, the model's primary objective shifted to predicting subjective intelligibility scores while taking into account these subjective quality scores. Table <ref> demonstrates a significant improvement in subjective intelligibility prediction when incorporating subjective quality ratings. The PCC value increased from 0.792 to 0.870, validating the effectiveness of using subjective quality to predict intelligibility. The inclusion of subjective quality ratings represents a valuable contribution to improving the accuracy of intelligibility predictions. By integrating subjective quality, we can harness the extensive research conducted in the field of speech quality assessment to enhance the assessment of speech intelligibility.
§ CONCLUSION
The contributions of this study are fourfold. First, the study proposes the use of DL models that take a combination of off-the-shelf objective measures as inputs to predict subjective quality and intelligibility ratings. Second, we evaluate the proposed DL model against different speech assessment methods and analyze the percentage decrease in PCCs as the amount of training data varies. The experimental results highlight the significant advantage of our DL model, which exhibits strong performance even with a small amount of training data. This is particularly beneficial in situations where gathering subjective human ratings is arduous and expensive. Third, we provide insights into how objective measures reflect subjective quality and intelligibility. This analysis can help researchers better understand the relationship between objective measures and subjective measures. Fourth, we demonstrate that incorporating subjective quality ratings can improve the prediction of subjective intelligibility. This integration allows us to leverage the extensive research conducted in the field of speech quality assessment to enhance speech intelligibility evaluation. Additionally, quality tests offer a time-saving advantage compared to the more time-consuming process of listening intelligibility tests.
IEEEbib
|
http://arxiv.org/abs/2307.06158v1 | 20230712133105 | Scaled Tight-Binding Crystal | [
"Peter Schmelcher"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP",
"physics.optics"
] |
[email protected]
Zentrum für Optische Quantentechnologien, Fachbereich Physik, Universität Hamburg,
Luruper Chaussee 149, 22761 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Universität Hamburg, Luruper Chaussee 149, 22761 Hamburg, Germany
The concept of local symmetry dynamics has recently been used to demonstrate the
evolution of discrete symmetries in one-dimensional chains leading to emergent periodicity.
Here we go one step further and show that the unboundedness of this dynamics
can lead to chains that consist of subunits of ever increasing lengths which results
in a scaled chain. Mapping this scaled chain onto a corresponding tight-binding
Hamiltonian we investigate its spectral and transmission properties. Varying the
off-diagonal coupling, the eigenvalue spectrum shows different branches with characteristic
transitions and peaks in the corresponding density of states. The fluctuations of the energy levels exhibit a hierarchy of minigaps
each one accompanied by a characteristic sequence of energy spacings. We develop a
local resonator model to describe the spectral properties and gain a deeper understanding
of it in the weak to intermediate coupling regime. Eigenstate maps together with
the inverse participation ratio are used to unravel the characteristic (de-)localization
properties of the scaled chain with varying coupling strength. Finally we probe
the energy-dependent transmission profile of the scaled chain.
Scaled Tight-Binding Crystal
Peter Schmelcher
August 12, 2023
============================
§ INTRODUCTION
The structure, design and applications of materials that follow a certain order principle
represents a central theme in modern quantum physics <cit.>.
Symmetries play in this context a pivotal role since they provide an important characteristic for
the classification and description of the systems under investigation. A prime example are periodic crystals based
on the existence of a discrete translation symmetry that provides us with the Bloch theorem
and the celebrated concept of band structure analysis <cit.> with vast applications
in modern material science. Quasicrystals <cit.>,
on the other hand side, fall into the gap between perfectly periodic crystals and disordered structures.
Quasiperiodic order adds new categories to the spectral classification chart, such as singular continuous
energy spectra and lattice Fourier transforms, and lead to novel physical properties emerging from the
fractal nature of their energy spectra. Opposite to periodic crystals quasicrystals are not based on
global symmetries, such as a discrete translation group, but typically exhibit a plethora of local symmetries <cit.>
embedded into their self-similar structure.
Structures built on the basis of the concept of local symmetries, i.e. symmetries that hold only in a limited domain of space,
are very well-suited to further fill the above-mentioned gap between global order and disorder.
Indeed, several recent works <cit.>
have been focusing on the development of a theoretical framework of the impact of local symmetries for both
continuous and discrete systems.
Among others, it has been demonstrated that local symmetries lead to invariant non-local currents which
allow for a generalization of the Bloch theorem <cit.>. Sum rules imposed on these invariants can
serve as a tool to classify resonances in wave scattering <cit.>. These invariants and the corresponding
control of local symmetries have been detected in lossy acoustic waveguides <cit.> and were observed
in coupled photonic wave guide lattices <cit.>. Systematically introducing more and more of local
symmetries into one-dimensional disordered finite chains has been demonstrated to enhance the corresponding
transfer efficiency across the chain <cit.>. An important aspect of the presence of local symmetries
in the strong coupling regime is the 'formation' of so-called local resonators on which the localization
of eigenstates takes place. This characteristic was developed and used in ref.<cit.> to analyze the eigenstate
properties and edge state appearance for quasiperiodic chains of different spectral category.
Inspired by the substitution rules used to generate quasiperiodic chains very recently the concept of
local symmetry dynamics (LSD)
has been put forward <cit.> (see also <cit.>) to obtain one-dimensional lattices with
a plethora of local symmetries that do not belong to the periodic or quasiperiodic case.
The idea here is to generate a lattice by applying successive reflection operations on an initial seed
of the lattice consisting of a finite number of sites. The rules generating the LSD
can be manifold, but the first case explored in ref.<cit.> are the so-called n:m rules.
n and m indicate the number of sites of the lattice involved in the reflection operations and
are applied alternatingly in the course of the LSD. It has been shown that
the such created one-dimensional lattice shows emergent periodic behavior, i.e. it consists of a
transient whose length depends on the concrete values of n,m followed by a subsequent
periodic behavior. By construction, the local symmetries of this lattice are strongly overlapping.
A spectral analysis of the tight-binding (TB) realization of the n:m LSD chains demonstrated the control possibilities
of the localization properties of the eigenstates by the nested local symmetries.
In the present work we go one step further and establish a type of rule which does not possess emergent
periodicity but leads to a scaling behavior of the chain. As a consequence we have similar repeating units along the
lattice but, in each step, they are stretched with respect to their lengths i.e. the number of sites
is correspondingly increased. We perform a detailed spectral analysis of a TB implementation of the scaled chain.
The eigenvalue spectrum shows a distinct transition from two to three branches and finally to
a single branch with increasing off-diagonal coupling strength. The fluctuations of the energy level spacings
cover several orders of magnitude and exhibit minigaps. A density of state analysis shows a strongly peaked
behavior at the crossover points of the branches. We develop a local resonator model which allows us to
interpret and understand this spectral behavior. Our eigenstate analysis demonstrates the unique localization
properties of this scaled chain and in particular their variation with changing coupling strength.
Finally we investigate the energy-dependent
transmission profile by attaching leads to the scaled chain. It shows a transition from few to many isolated complete transmission
peaks and finally, for smaller values of the coupling, we observe
a decreasing spectral transmission window with an irregular fluctuating behavior.
This work is structured as follows. In section <ref> we introduce our LSD rule and the resulting
scaled chain and map it onto a TB Hamiltonian. Section <ref> presents an analysis of the energy
eigenvalue spectrum of scaled chains including the energy spacing distributions and density of states.
In section <ref> we develop the local resonator model which offers a deeper
understanding of the spectral properties and we compare it to the TB results. Section <ref>
provides an analysis of the eigenstates including their localization behavior. The transmission properties
of our scaled chain are explored in section <ref> with varying coupling strength. Finally,
in section <ref> we present our conclusions.
§ THE SCALED LSD CHAIN: SETUP AND HAMILTONIAN
The local symmetry dynamics (LSD) represents a concept which allows us to generate lattices with local symmetries
starting from a given initial condition, i.e. from an initial finite segment of a lattice. One important way to
achieve this is to perform reflection operations of a certain domain size at the end of a given finite lattice.
In ref.<cit.> the special case of the n:m rules, where n,m stand for the number of sites
to be reflected alternatingly, has been investigated: it provides us with emergent periodicity in the sense
that a spatially evolving transient is followed by a periodic behavior of the lattice. By construction, the resulting lattice
exhibits a plethora of local overlapping symmetries.
Employing a symbolic code we focus here on the rule n,(n+1),(n+2),(n+3),...
which represents an LSD with monotonically increasing sizes of the
reflection domains. The initial seed has to consist of n elements, and we use here n=2, i.e. the seed AB. As an example, we
provide the tenth generation of the application of this rule which reads as follows
AB |_2 BA |_3 ABB |_4 BBAA |_5 AABBB |_6 BBBAAA
|_7 AAABBBB |_8 BBBBAAAA |_9 AAAABBBBB
|_10 BBBBBAAAAA |_11 AAAAABBBBBB
|_12 BBBBBBAAAAAA |_13 AAAAAABBBBBBB
where |_k stands for the reflection operation exerted on k sites to the left of its position.
Obviously, our rule leads to a scaling behavior in the sense that we have alternating sequences of
A and B sites whose lengths increase with increasing generation of the chain. Alternatively,
this can be noted as 1A,2B,2A,4B,4A,6B,6A,...,2nB,2nA, which amounts to a total length of N = 1 + 2n(n+1).
We therefore call this chain a scaled chain (SC).
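The rule can equivalently be stated as: at step k = 2, 3, 4, ... append the mirror image of the last k sites. A minimal sketch of the generation procedure (our own code; variable names are ours):

```python
# Sketch of the LSD rule with seed "AB": at step k = 2, 3, 4, ... the last
# k sites are reflected and appended to the right end of the chain.
def scaled_chain(steps, seed="AB"):
    chain = seed
    for k in range(2, steps + 2):
        chain += chain[-k:][::-1]      # mirror image of the last k sites
    return chain

# scaled_chain(3) == "ABBAABBBBAA"; truncating the chain at N = 1 + 2n(n+1)
# sites ends it on a complete 2nA block, giving the pattern 1A,2B,2A,4B,...
```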
In order to explore the spectral and transmission properties of the SC we map it onto a corresponding
TB Hamiltonian <cit.>.
We hereby assume a constant off-diagonal coupling t between nearest neighbors ⟨ i,j ⟩
of a discrete chain of length N with sites {i|i=1,...,N}. The corresponding on-site energies
ϵ_i follow the LSD of the SC. Specifically we will use in the following the values
ϵ_A = 1.0, ϵ_B = 2.0 for the sites of type A,B, respectively. Our TB Hamiltonian
reads therefore as follows
H = ∑_i=1^Nϵ_i |i ⟩⟨ i| + ∑_⟨ i,j ⟩ t |i ⟩⟨ j|
Note that we are using open boundary conditions for the SC throughout this work.
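For concreteness, a short sketch of this mapping and its diagonalization might look as follows (our own code, not from the paper):

```python
# Sketch: map the symbolic chain onto the TB Hamiltonian of the equation
# above and diagonalize (eps_A = 1.0, eps_B = 2.0 as in the text; open
# boundary conditions are implicit in the finite matrix).
import numpy as np

def tb_spectrum(chain, t, eps_A=1.0, eps_B=2.0):
    onsite = {"A": eps_A, "B": eps_B}
    N = len(chain)
    H = np.diag([onsite[s] for s in chain])
    H += np.diag([t] * (N - 1), k=1) + np.diag([t] * (N - 1), k=-1)
    return np.linalg.eigvalsh(H)       # sorted eigenvalues (use eigh for states)

# e.g. a chain of N = 1861 sites (n = 30): tb_spectrum(scaled_chain(60)[:1861], 0.5)
```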
§ ENERGY EIGENVALUE SPECTRA
Let us analyze in this section the energy eigenvalue spectra of the SC for varying off-diagonal t from
weak to strong couplings. First we inspect the global spectral behavior and subsequently
the fluctuations in terms of the energy level spacing will be discussed. We hereby focus on a
SC of length 1861, which corresponds to subchains of purely A or B sites of maximal length 30.
For t=0 we have only the two highly degenerate eigenvalues ϵ_A =1.0 and
ϵ_B = 2.0. Switching on the coupling we observe in Fig.<ref>(a) for t=0.1 an energetically lower and upper
branch of the spectrum (we refrain from using the terminology of a band, due to the non-periodic structure
of our chain) which are separated by an energetical gap. Increasing the value of t the size of this gap decreases
until at t ≈ 0.25 the gap closes, which can be observed in Fig.<ref>(b). Then, the low and high
energy branch are connected while possessing a similar behavior of their slopes with varying energy.
With further increasing coupling strenght t a third branch appears and persists for a broad range of values
of t. The crossovers between the branches is characterized by a cusp. This can be observed in
Fig.<ref>(c) for t=0.5 where the intermediate energy branch occupies, like the low
and high energy branch, a substantial part of the spectrum. With increasing value of the coupling the intermediate
energy branch widens until at t ≈ 30 (not shown here)
it has taken over almost all of the spectrum. Fig.<ref>(d) shows
the spectrum for t=50 where only a single branch has survived. In this case we are close to the limit of
a negligible on-site energy compared to the large coupling value, which yields a single branch (t →∞)
with the spectrum given by E_m = 2 t cos(m π/(N+1)) with 1 ≤ m ≤ N <cit.>.
The above discussion relates exclusively to the envelope or mean behavior of the spectrum. Let us now address the
fluctuations of the energy levels which are well-characterized by the spacing of the energy levels. Fig.<ref>
shows the spacing of the energy levels in a window of the spectrum between the energy levels 1570 to 1730.
Notably we observe that the energy spacing covers several orders of magnitude. Interdispersed into seemingly
irregular oscillations there is two types of prominent features. First we encounter minigaps in the spectrum corresponding to
well-isolated distinguished peaks in Fig.<ref>. Second, before and after those peaks we observe sequences of very small
energy spacings lying on arcs. Examples for both features are indicated by arrows in Fig.<ref>. Their origin will become
clear in the context of the local resonator model to be developed in the next section.
In Fig.<ref> the energy spacing is shown for the complete eigenvalue spectrum for four different values
of the coupling i.e. for t = 0.2,0.5,5.0,50.0 in subfigures (a-d), respectively. For t=0.2
the eigenvalue spectrum is still gapped which reflects itself in a dominant peak of the spacing distribution at
the position 930, whose value is out of the scale provided in Fig.<ref>(a).
Left and right to this central peak there are two broad subdistributions
with strongly fluctuating values for the spacings. On top of these subdistributions there is a central dominant
peak as well as a number of additional prominent peaks which correspond to the above-mentioned minigaps
in the spectrum. Those two branches of the spacing distributions are single humped and show monotonically
decreasing spacings towards their edges.
Fig.<ref>(b) corresponds to the case t=0.5 for which the eigenvalue spectrum possesses three branches (see Fig.
<ref>(c)). Here the energy spacing distribution possesses also three distinct regions with abrupt transitions
between them: the position of the transition points correspond to the positions of the cusps of the energy spectrum.
At these positions of the cusps dominant peaks of the energy spacing
occur followed by a collapse of the spacing behavior with further increasing degree of excitation in the spectrum. Overall
the three branches encountered consist of two narrow semi-humps connected to the edges of the distribution and a complete
broad hump in the center of the
spacing distribution. With increasing coupling strength t the central single-humped branch of the
spacing distribution expands and finally represents the complete distributions. On this pathway the central branch
bends upwards as is clearly visible in Fig.<ref>(c,d) for t=1.0,50.0 implying that the spectrum of the spacing
values possesses an increasing lower bound with increasing coupling strength. Finally, for t=50.0 in Fig.<ref>(d)
the spacing distribution is already rather similar to the one expected from the off-diagonal only case: it possesses
a narrow width and only small fluctuations around the spacing values belonging to the spectrum
E_m = 2 t cos(m π/(N+1)) with 1 ≤ m ≤ N.
Let us conclude this section by analyzing the energetical density of states (DOS) belonging to the SC for varying coupling
strength t. For a periodic crystal of monomers the DOS can be obtained analytically to
N(E) ∝1/√(1-(E/2t)^2) possessing two singularities at the band edges and in
between a smooth decrease followed by a corresponding increase. Fig.<ref>(a) shows the case t=0.1 for which
a sizeable gap between a low and high energy branch of the eigenvalue spectrum exists (see discussion of
Fig.<ref>). Correspondingly we observe four pronounced peaks for N(E) at the edge points of those two branches.
In between the first two and the second two peaks a smooth decrease followed by a corresponding increase is
encountered. The gap in the corresponding spectrum shows here up, of course, as a region of zero valued N(E).
Fig.<ref>(b) presents the DOS for t=0.5. Here the eigenvalue spectrum consists of three branches (see Fig.<ref>(c))
and we observe two edge localized peaks and two peaks localized around the center of the DOS. The latter correspond
to the positions of the cusps in the eigenvalue spectrum. Note that abrupt transitions occuring for the left and right
side of the second and third peak of the DOS, respectively. For t=1.0 (Fig.<ref>(c)) the central branch of the
DOS has widened i.e. the corresponding central peaks have been moving towards the edges of the DOS thereby maintaining
their narrow character. Finally, for t=50.0 only two peaks remain and are edge localized, as to be expected from
the single branch case of an (approximately) off-diagonal only TB Hamiltonian.
§ LOCAL RESONATOR MODEL
In order to develop a profound understanding of the above-discussed features of the eigenvalue spectrum, we
develop a so-called local resonator model (LRM) for our SC. This model is inspired by the
local symmetry theory of resonator structures developed in ref.<cit.>. In the latter work a quantitative analysis
of the localization behavior of the eigenstates for strong and intermediate contrast has been provided for
aperiodic binary chains based on substitution rules thereby focusing on the quasiperiodic case.
Our SC 1A,2B,2A,4B,4A,6B,6A,....2nB,2nA consists of a sequence of purely A and B subchains of
increasing length with increasing size of the SC. We will call these subchains local resonators.
Starting out with small values for the off-diagonal coupling t we model the SC as a superposition
of the spectra of these local A and B resonators. This is motivated by the fact that for zero coupling
the resonators exhibit a degenerate spectrum which is splitted for small but finite coupling whereas the
coupling between two different resonators is suppressed due to the substantial difference of their on-site energies
ϵ_A = 1.0 and ϵ_B = 2.0. Employing open boundary conditions we therefore have the sequence
of spectra for the local resonators as follows
E^1_A = E_A
E^2_m(A) = E_A + 2 t cos( m π/3) m = 1,2
E^2_m(B) = E_B + 2 t cos( m π/3) m = 1,2
E^4_m(A) = E_A + 2 t cos( m π/5) m = 1,...,4
E^4_m(B) = E_B + 2 t cos( m π/5) m = 1,...,4
...................................
E^n_m(A) = E_A + 2 t cos( m π/(n+1)) m = 1,...,n
E^n_m(B) = E_B + 2 t cos( m π/(n+1)) m = 1,...,n
The corresponding eigenstates are therefore localized resonator eigenstates. Note that their superposition
is in general not an (even approximate) eigenstate of the SC, since the resonators possess different lengths.
The total spectrum of the SC within this local resonator picture is given by
{E_A}∪{E_A + 2 t cos( m π/(n+1)) | n = 2k, k ∈ [1,l], m ∈ [1,n] }
∪{E_B + 2 t cos( m π/(n+1)) | n = 2k, k ∈ [1,l], m ∈ [1,n] }
where the length of the SC, i.e. the number of sites, is given by N = 1 + 2l(l+1) and the number of
resonators in the SC is 2l+1. The largest resonator consists of 2l sites. The envelope or mean behavior
of the spectrum provided by our LRM agrees with many of the envelope features of the exact SC spectrum.
E.g. it can describe the weak coupling broadening of the two bands, the subsequent closing of the energy
gap as well as the emergence and widening of the third branch with increasing coupling. Even in the t →∞
limit which corresponds to a single branch case the spectral envelope behavior can qualitatively be reproduced.
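A minimal sketch of the LRM spectrum of the equation above (our own code) reads:

```python
# Sketch of the LRM spectrum for l resonator pairs (N = 1 + 2l(l+1) sites):
# one single-site A resonator plus, for each k, one B and one A resonator of
# length 2k with open-boundary cosine modes.
import numpy as np

def lrm_spectrum(l, t, eps_A=1.0, eps_B=2.0):
    energies = [eps_A]                               # the 1-site A resonator
    for k in range(1, l + 1):
        n = 2 * k
        modes = 2 * t * np.cos(np.arange(1, n + 1) * np.pi / (n + 1))
        energies.extend(eps_B + modes)               # B resonator of length 2k
        energies.extend(eps_A + modes)               # A resonator of length 2k
    return np.sort(np.asarray(energies))

# e.g. lrm_spectrum(10, 1.0) for the N = 221 chain compared in the figure
```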
From the LRM we can draw some general conclusions. The size of the energy gap for not too strong coupling t amounts to
approximately 1-4t which means that the closing of the gap occurs for t ≈ 0.25. The width of the central third
branch for t > 0.25 amounts to 4t-1. Fig.<ref>(a) shows a comparison of the LRM and the exact TB spectrum for
a SC of length 221 and for t=1.0: while the overall qualitative behavior is very similar one realizes
significant deviations on smaller energy scales.
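As an illustration of how the LRM envelope tracks the exact spectrum, the following minimal Python sketch (our addition, not part of the original analysis; all parameters are the ones quoted above) diagonalizes the SC tight-binding Hamiltonian and superimposes the isolated open-resonator spectra of eq.(<ref>). At weak coupling both gap estimates should be close to 1-4t:

import numpy as np

def build_chain(l, eps_A=1.0, eps_B=2.0):
    # On-site energies of the SC 1A,2B,2A,4B,4A,...,2lB,2lA (N = 1+2l(l+1) sites).
    eps = [eps_A]
    for k in range(1, l + 1):
        eps += [eps_B] * (2 * k) + [eps_A] * (2 * k)
    return np.array(eps)

def exact_spectrum(eps, t):
    # Eigenvalues of the tridiagonal TB Hamiltonian with uniform coupling t.
    N = len(eps)
    H = np.diag(eps) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
    return np.linalg.eigvalsh(H)

def lrm_spectrum(l, t, eps_A=1.0, eps_B=2.0):
    # Superposition of the isolated open-resonator spectra.
    levels = [eps_A]                            # the single-site A resonator
    for k in range(1, l + 1):
        m = np.arange(1, 2 * k + 1)
        shift = 2 * t * np.cos(m * np.pi / (2 * k + 1))
        levels += list(eps_B + shift) + list(eps_A + shift)
    return np.sort(levels)

l, t = 10, 0.2                                  # N = 221 sites, weak coupling
E_tb, E_lrm = exact_spectrum(build_chain(l), t), lrm_spectrum(l, t)
gap = lambda E: E[E > 1.5].min() - E[E < 1.5].max()   # gap around (eps_A+eps_B)/2
print(f"exact gap: {gap(E_tb):.3f}, LRM gap: {gap(E_lrm):.3f}, 1-4t: {1 - 4*t:.3f}")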
For this reason let us now analyze the energy eigenvalue spacing
distribution of the LRM which provides us with the fluctuations of the energy levels.
Fig.<ref>(b) shows the spacing distribution for both the LRM and the exact TB spectrum
for t=0.1. While the overall rough behavior is certainly similar, a closer look reveals remarkable
deviations. The fluctuations are much stronger for the LRM compared to the TB results even for these small
values of the coupling.
A substantial number of the peak spacings agree within the two approaches and in particular the 'arc-like'
accumulation of small spacings around the main peaks is reproduced well. However, an eye-catching difference
is the fact that the LRM shows zero spacings, corresponding to degeneracies in the LRM, whereas this
is not the case for the TB spectrum. Considering the spectrum of the LRM in eq.(<ref>) these degeneracies
can be shown to occur for the set of 2-tuples (α m, α (2k+1)) for given m and k and varying α, with
α odd, α∈ℕ, m ≤ 2k, k ≤ l.
Note that the eigenstates belonging to these degenerate eigenvalues belong in general to different local resonators.
The lifting of these degeneracies in the TB spectrum is, of course, due to the interresonator coupling which is
neglected within the LRM. An important remark is in order concerning the regime of stronger couplings t>1.
In section <ref> it has been shown that the center branch of the eigenvalue spacings
exhibits an increasing upward bending with increasing coupling t (see Fig.<ref>(c,d)), meaning that
the energy spacings systematically 'avoid small values'. This behavior cannot be described by the LRM, which
shows its inadequacy in the strong coupling regime.
§ LOCALIZATION VERSUS DELOCALIZATION OF EIGENSTATES
Let us now focus on the analysis of the eigenstate properties of the SC for varying coupling strength t with
a particular emphasis on their localization properties. As described above, within the framework of the LRM we expect for weak
coupling strengths that the local resonator picture represents a good approximation, and therefore the eigenstates
should be localized within these resonators, whose lengths increase monotonically along the SC.
Fig.<ref>(a) shows the eigenstate map, i.e. a greyscale image of the magnitudes of the components
of the eigenstates with varying degree of excitation for the complete spectrum of a SC of length 221 for t=0.1.
The ground state of the SC is localized on the largest A local resonator of length 20 appearing at the right end
of the SC. The first excited state of the SC is localized on the second largest A local resonator to the left of the
ground state. This sequence continues (see Fig.<ref>(a)) such that with increasing degree of excitation
the corresponding eigenstates localize on local A resonators with decreasing size, thereby moving from the
right end towards the left end of the SC. This series of localized states is then intermingled with excited
states of the local resonators which adds to the localization patterns observed in Fig.<ref>(a).
On the other hand, the spectrum can alternatively be described as follows. It consists of vertical series of
localized resonator eigenstates whose energy spacings decrease with increasing size of the corresponding
resonator. Series of localized resonator eigenstates of A character belong to the low energy part
of the spectrum whereas those of B character belong to the high energy part. According to their occurrence
in the SC they are spatially shifted with respect to each other.
Inspecting the case t=0.5 in Fig.<ref>(b), a branch of delocalized states can be observed
for intermediate energies. This branch intensifies and broadens in energy with increasing coupling
strength t. Indeed, for Figs.<ref>(c,d) corresponding to t=5.0,50.0 the majority of eigenstates
are delocalized over the complete SC. In conclusion, we obtain a distinct transition from series of localized resonator
states for weak couplings to delocalized SC states for strong couplings, which is controlled by the
interresonator coupling of A (low energy) and B (high energy) character.
To quantify the above observations let us determine the inverse participation ratio (IPR) for the eigenstates on
an SC with N sites, which is defined as r = ∑_i=1^N |ψ_i|^4 ∈ [N^-1,1]
for a normalized eigenvector satisfying ∑_i=1^N |ψ_i|^2 = 1. The maximal value of the IPR is one for an
eigenvector localized on a single site of the chain and the minimal value 1/N is encountered for
a state which is uniformly extended over the chain. The IPR neither depends on the position at which a
state is localized nor is it very sensitive to the details of its distribution.
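The IPR is straightforward to evaluate numerically. The following sketch (our addition; the inline chain construction repeats the resonator sequence defined above) computes the IPR of all eigenstates of the N=221 chain for the four couplings discussed below, and for t=0.1 should reproduce the lower bound of roughly 7 · 10^-2 quoted there:

import numpy as np

def ipr(psi):
    # Inverse participation ratio r = sum_i |psi_i|^4 of a normalized state.
    return np.sum(np.abs(psi) ** 4)

# on-site energies of the scaled chain 1A,2B,2A,...,20B,20A (N = 221 sites)
eps = np.array([1.0] + sum(([2.0] * (2 * k) + [1.0] * (2 * k) for k in range(1, 11)), []))
N = len(eps)
for t in (0.1, 0.5, 1.0, 5.0):
    H = np.diag(eps) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
    w, v = np.linalg.eigh(H)            # columns of v are the eigenstates
    r = np.array([ipr(v[:, n]) for n in range(N)])
    print(f"t = {t}: min IPR = {r.min():.2e} (1/N = {1/N:.2e}), max = {r.max():.2f}")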
Fig.<ref>(a-d) shows the IPR for all eigenstates of the SC with length 221 for four different values of the coupling
strength t= 0.1,0.5,1.0,5.0. For t=0.1 in Fig.<ref>(a) the IPR has a lower bound of approximately 7 · 10^-2
compared to the principally possible
minimum of approximately 4.5 · 10^-3. Most values lie in the interval [0.07,0.5], which indicates
that the eigenstates are strongly localized. The sequence of local-resonator localized states with increasing energy,
analyzed in the discussion of Fig.<ref> for t=0.1, can be seen here as a sequence of
monotonically increasing IPR values. One can also observe sequences of almost constant IPR values which correspond to energetically
non-neighboring eigenstates, i.e., among others, involving the vertical series of localized states seen in Fig.<ref>.
For t=0.5 we observe a rather sharp decrease and a regime of low values of the IPR in the central part
of the spectrum. The latter branch of eigenstates corresponds to the delocalized states observed in the
corresponding eigenstate maps of Fig.<ref>. With further increasing values of t (see Fig.<ref>(c,d))
the IPR value of that average central plateau decreases monotonically and the
fluctuations on top of it systematically decrease. Finally, at t=50.0, apart from a small window of
states at very low and high energies, the IPR is, to a very good approximation, constant
and close to the minimum possible value, i.e. it is approximately 6.5 · 10^-3.
§ TRANSMISSION PROPERTIES OF THE SCALED TB CHAIN
In this section we explore the energy-dependent transmission through our scaled tight-binding chain. To this end
we attach two leads in the form of semi-infinite discrete chains to the left and right of our SC. Determining the
transmission then follows the standard formalism of wave-function matching <cit.>, which folds the infinite
leads into the SC and extends it by a single site to the left and to the right.
In the language of Greens functions this corresponds to the inclusion of the corresponding self-energy
into the Hamiltonian matrix problem: we map the closed system eigenvalue problem to a linear system
of equations with an inhomogeneity <cit.>. Note that the transmission is obtained
as the absolute value squared of the (n+1)-st element of the solution vector of the resulting linear system.
As a result of the above procedure we now have the parameters ϵ_L, t_L, ϵ_R, t_R, t_LD, t_DR
and t, ϵ_A, ϵ_B. Here, L,R stand for the corresponding quantities in the left
and right lead: ϵ being the on-site energies and t referring to the off-diagonal couplings.
t_LD is the coupling of the left lead to the scattering chain and t_DR is the
coupling of the scattering chain to the right lead. As done before, we will use
ϵ_A=1.0, ϵ_B=2.0.
Due to their semi-infinite structure, the leads possess a continuum of k-values.
We focus on the case t_L/R = 1.0, ϵ_L/R = 1.0, t_LD = 1.0, t_DR = 1.0.
As a consequence, the energy accessible in the left and right leads corresponds to
E = ϵ_L/R + 2t_L/Rcos(k_L/R). Therefore we have k ∈ [0, 2 π ]
and the lead energy window is E ∈ [-1.0,3.0]. We will then vary the coupling t of the SC
in the range [0.1,50.0] and analyze the resulting transmission T(E).
For large values of t the lead energy width will be small compared to the energetic width of
the SC, whereas for small values of t the opposite holds true.
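A minimal Green's-function version of this setup is sketched below (our addition; the wave-function-matching formalism used in this work folds in the leads equivalently). Each semi-infinite lead enters as a retarded surface self-energy Σ(E) = (t_c^2/t_lead) e^{-ik} on the boundary site, with E = ϵ_lead + 2 t_lead cos k, and T(E) = Γ_L Γ_R |G_1N(E)|^2:

import numpy as np

def transmission(E, eps, t, eps_lead=1.0, t_lead=1.0, t_c=1.0):
    # Landauer transmission through a TB chain between two semi-infinite leads.
    x = (E - eps_lead) / (2 * t_lead)
    if abs(x) >= 1.0:
        return 0.0                      # outside the lead band: no propagating mode
    sigma = (t_c**2 / t_lead) * np.exp(-1j * np.arccos(x))   # retarded self-energy
    N = len(eps)
    Heff = np.diag(eps).astype(complex) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
    Heff[0, 0] += sigma                 # left lead folded onto the first site
    Heff[-1, -1] += sigma               # right lead folded onto the last site
    G = np.linalg.inv(E * np.eye(N) - Heff)
    gamma = -2.0 * sigma.imag           # broadening (equal for both leads here)
    return float(gamma**2 * np.abs(G[0, -1]) ** 2)

eps = np.array([1.0] + sum(([2.0] * (2 * k) + [1.0] * (2 * k) for k in range(1, 11)), []))
Es = np.linspace(-0.999, 2.999, 2000)
T = np.array([transmission(E, eps, t=10.0) for E in Es])
peaks = sum(T[i] > 0.1 and T[i] >= T[i - 1] and T[i] > T[i + 1] for i in range(1, len(T) - 1))
print(f"resonance peaks resolved on this grid for t = 10.0: {peaks}")

Very narrow resonances can be missed on a coarse energy grid, so the printed count is only indicative.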
In Fig.<ref>(a-d) we show transmission spectra for the values
t = 50.0, 10.0, 3.0, 1.0. For t=50.0 in Fig.<ref>(a) the energy interval of the SC is
approximately [-99.0,102.0] and, as mentioned above, the lead energy interval is [-1.0,3.0].
Within this energy window the eigenstates of the isolated SC are all completely delocalized, see
Fig.<ref>. In this case the transmission spectrum shows three sharp peaks which are located
approximately at the energies E = 0.0, 1.5, 3.0. In between those peaks the transmission reaches
approximately zero, i.e. the three peaks are well-isolated. This can be explained by projecting the energy window
of the lead onto the stationary energy eigenvalue spectrum of the SC without leads: for the
above parameters, only three eigenstates, located at the energies of the transmission peaks, fall into this window.
If we now decrease the value of the coupling to t=10.0 (see Fig.<ref>(b)), we have an energy
window of approximately [-19.0,22.0] for the SC, and we observe 14 distinct peaks.
In between this dense sequence of well-isolated peaks the transmission does not completely
decrease to the value zero due to the finite overlap of the resonances.
As a result the transmission peaks 'live' on a background of low transmission.
Again, this series of transmission peaks can be understood by projecting the energy window of the lead
E ∈ [-1.0,3.0] onto the energy eigenvalue spectrum of the SC for t = 10.0. The corresponding eigenstates of
the SC without leads can be assigned to the unit-transmission peaks and are of completely delocalized character.
The above trend continues: for t = 5.0, e.g., 28 peaks (not shown) are present in the transmission
spectrum, and the background transmission increases accordingly. Also for t = 3.0 (see Fig.<ref>(c))
the lead energy window addresses only delocalized states. The transmission spectrum still shows a series of distinct
peaks, but with a larger, more irregular background formed by the overlapping resonances.
For t = 1.0 (see Fig.<ref>(d)) the transmission profile has changed qualitatively. Now, the lead
energy window [-1.0,3.0] covers a large part of the energy eigenvalue spectrum and the corresponding
states are purely localized at low energies, whereas for E ≳ 0.2 they become delocalized and
contribute to the highly irregular transmission profile. The delocalized part of the eigenstates
ends at E ≈ 3.0 which is also the end of the transmission energy window. While the transmission
spectrum is irregular, there is, in many cases, a clear assignment to the behavior of the inverse
participation ratio: low values of the IPR correspond to delocalization which leads to high transmission values.
For t=0.5 (not shown here) the transmission profile is even more fragmented into irregular series
of narrow peaks and finally, for t=0.3, the transmission is essentially zero in the complete spectral
window.
§ SUMMARY AND CONCLUSIONS
The concept of local symmetry dynamics provides a systematic pathway of generating lattices covered with
overlapping local symmetries. While this pathway has very recently been pursued to show that the class of so-called n:m
rules provides us with emergent periodic behavior <cit.>, we show here that the local symmetry dynamics according
to the rule n,(n+1),(n+2),(n+3),... leads (for n=2) to a scaling behavior. The resulting chain therefore consists
of a concatenation of alternating subchains of increasing lengths.
Mapping the scaled chain onto a tight-binding Hamiltonian, we have focused in the present work on the analysis
of the spectral and transmission properties of this Hamiltonian. With increasing strength of the
off-diagonal coupling, the energy eigenvalue spectrum shows a transition from two to three and finally to a single
branch. A closer look at the spectrum reveals minigaps accompanied by a characteristic accumulation of eigenvalues
in their neighborhood. The eigenvalue spacings exhibit a crossover from a
two-humped to a three-humped and finally to a single-humped distribution. The cusps connecting the different branches go along
with sharp peaks in the corresponding density of states. Many of the features occurring in the eigenvalue spectrum
could be explained and understood via a local resonator model, which applies for weak to intermediate values
of the coupling and which treats the subchains possessing the same on-site energies as independent resonators.
The spectrum and eigenstates of the complete chain are then composed of the independent resonator spectra and
eigenstates. Indeed, an eigenstate analysis via so-called eigenstate maps demonstrates the localization of
the eigenstates in terms of resonator eigenmodes for weak couplings. The transition to delocalization with
increasing coupling strengths occurs in an expanding manner starting from the center of the spectrum and has been analyzed
via the corresponding behavior of the inverse participation ratio. Finally, we have investigated the transmission
properties of the scaled chain and found, with decreasing coupling, a transition from the case of a few regularly arranged,
isolated unit-transmission peaks to the case of many such peaks on an enhanced background. For
sufficiently small coupling strengths, the transmission profile becomes highly irregular and probes the
complete delocalized eigenstate portion of the scaled chain.
The presently investigated case of a scaled chain is certainly only a specific case out of many possible
local symmetry dynamics generated chains which are asymptotically non-periodic. Indeed, one can think of
many different ways of modifying the applied rule such that a rich interplay of scaled subchains with
intermediate ones appear. The common feature of all of those chains is the presence of the extensive local
reflection symmetries of overlapping character. It is left to future investigations to possibly arrive at
a classification of those local symmetry dynamics generated lattices and the physical properties of their
resulting tight binding realizations.
Finally, let us briefly address the possibility of an experimental realization of the scaled tight-binding
chain. A promising optics-based platform is provided by evanescently coupled waveguides (see ref.<cit.> and
in particular the references therein). Due to the extensive control of the underlying optical materials,
the propagation of light and the coupling among the waveguides can be varied over a wide range, and the
dynamical evolution has much in common with the corresponding time evolution of a single-particle
quantum system. The coupling among the waveguides can be encoded into the bulk material using
femtosecond laser pulses. A limiting factor for the current application to the scaled chain is certainly
the number of accessible waveguides, which would correspondingly
limit the largest possible resonator in the chain.
§ ACKNOWLEDGMENTS
This work has been supported by the Cluster of Excellence “Advanced Imaging of Matter” of the Deutsche
Forschungsgemeinschaft (DFG)-EXC 2056, Project ID No. 390715994.
99
Sethna03 J.P. Sethna, 1991 Lectures in Complex Systems, Eds. L. Nagel and D. Stein, Santa Fe Institute Studies
in the Sciences of Complexity, Proc. Vol. XV, Addison-Wesley 1992.
EdNatPhys The rise of quantum materials, Editorial, Nature Physics 12, 105 (2016).
Gross96 D.J. Gross, The role of symmetry in fundamental physics, Proc.Nat.Acad.Sc. 93, 14256 (1996).
Ashcroft76 N.W. Ashcroft and N.D. Mermin, Solid State Physics, Holt-Saunders, 1976.
Singleton01 J. Singleton, Band Theory and Electronic Properties of Solids, Oxford Master Series
in Condensed Matter Physics, Oxford University Press 2001.
Macia09 E. Maciá Barber, Aperiodic Structures in Condensed Matter, Fundamentals and Applications,
Series in Condensed Matter Physics, CRC Press 2009.
Macia21 E. Maciá-Barber, Quasicrystals, Fundamentals and Applications, CRC Press 2021.
Shechtman84 D. Shechtman, I. Blech, D. Gratias, and J. W. Cahn, Metallic phase with long-range orientational
order and no translational symmetry, Phys.Rev.Lett. 53, 1951 (1984).
Suck02 J.B. Suck, M. Schreiber, P. Häussler, Quasicrystals: An Introduction to Structure,
Physical Properties and Applications, Springer Science & Business Media 2002.
Morfonios14 C. Morfonios, P. Schmelcher, P.A. Kalozoumis and F.K. Diakonos, Local symmetry dynamics in
one-dimensional aperiodic lattices: a numerical study, Nonl.Dyn. 78, 71 (2014).
Kalozoumis14a P.A. Kalozoumis, C. Morfonios, F.K. Diakonos and P. Schmelcher,
Invariant of broken discrete symmetries, Phys.Rev.Lett. 113, 050403 (2014).
Kalozoumis14b P.A. Kalozoumis, C. Morfonios, F.K. Diakonos and P. Schmelcher,
Local symmetries in one-dimensional quantum scattering, Phys.Rev.A 87, 032113 (2013);
P.A. Kalozoumis, C. Morfonios, N. Palaiodimopoulos, F.K. Diakonos and P. Schmelcher,
Local symmetries and perfect transmission in aperiodic photonic multilayers, Phys.Rev.A 88, 033857 (2013).
Morfonios17 C. Morfonios, P.A. Kalozoumis, F.K. Diakonos and P. Schmelcher,
Nonlocal discrete continuity and invariant currents in locally symmetric effective Schrödinger arrays, Ann.Phys. 385, 623 (2017).
Schmelcher17 P. Schmelcher, S. Krönke and F.K. Diakonos,
Dynamics of local symmetry correlators for interacting many-particle systems, J.Chem.Phys. 146, 044116 (2017).
Zambetakis16 V.E. Zambetakis, M.K. Diakonou, P.A. Kalozoumis, F.K. Diakonos, C.V. Morfonios, P. Schmelcher,
Invariant current approach to wave propagation in locally symmetric structures, J.Phys.A 49, 195304 (2016).
Kalozoumis15 P.A. Kalozoumis, O. Richoux, F. K. Diakonos, G. Theocharis, and P. Schmelcher,
Invariant currents in lossy acoustic waveguides with complete local symmetry, Phys.Rev.B 92, 014303 (2015).
Schmitt20 N. Schmitt, S. Weimann, C.V. Morfonios, M. Röntgen, M. Heinrich, P. Schmelcher and A. Szameit,
Observation of Local Symmetry in Photonic Systems, Las.Phot.Rev. 14, 1900222 (2020).
Morfonios20 C.V. Morfonios, M. Röntgen, F.K. Diakonos and P. Schmelcher,
Transfer efficiency enhancement and eigenstate properties in locally symmetric disordered finite chains, Ann.Phys. 418, 168163 (2020).
Roentgen19 M. Röntgen, C.V. Morfonios, R. Wang, L. Dal Negro and P. Schmelcher,
Local symmetry theory of resonator structures for the real-space control of edge states in binary aperiodic chains,
Phys.Rev. B 99, 214201 (2019).
Schmelcher23 P. Schmelcher, Evolution of discrete symmetries, arXiv:2303.14150v1.
Goringe97 C.M. Goringe, D.R. Bowler and E. Hernández,
Tight-binding modelling of materials, Rep.Progr.Phys. 60, 1447 (1997).
Kouachi06 S. Kouachi, Eigenvalues and eigenvectors of tridiagonal matrices, El.J.Lin.Alg. 15, 115 (2006).
Kulkarni99 D. Kulkarni, D. Schmidt and S.-K. Tsui, Eigenvalues of tridiagonal pseudo-Toeplitz matrices,
Lin.Alg.Appl. 297, 63 (1999).
Willms08 A.R. Willms, Analytic results for the eigenvalues of certain tridiagonal matrices,
Siam J.Matrix Anal.Appl. 30, 639 (2008).
Zwierzycki08 M. Zwierzycki et al., Phys. Stat. Sol. B 245, 623 (2008).
Datta95 S. Datta, Electronic Transport in Mesoscopic Systems, Cambridge University Press 1995.
Ferry97 D.K. Ferry and S.M. Goodnick, Transport in Nanostructures, Cambridge University Press 1997.
Szameit10 A. Szameit and S. Nolte, Discrete optics in femtosecond-laser-written photonic structures,
J. Phys. B 43, 163001 (2010).
|
http://arxiv.org/abs/2307.07470v1 | 20230714165128 | Charm-Meson $t$-channel Singularities in an Expanding Hadron Gas | [
"Eric Braaten",
"Roberto Bruschini",
"Li-Ping He",
"Kevin Ingles",
"Jun Jiang"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2307.04342v1 | 20230710045252 | Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays | [
"Kangheun Kim",
"Fan Yang",
"Klaus Mølmer",
"Jaewook Ahn"
] | quant-ph | [
"quant-ph",
"cond-mat.quant-gas",
"physics.atom-ph"
] |
These authors contributed equally to this work
Department of Physics, KAIST, Daejeon 34141, Republic of Korea
These authors contributed equally to this work
Center for Complex Quantum Systems, Department of Physics and Astronomy, Aarhus University, DK-8000 Aarhus C, Denmark
Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark
[email protected]
Department of Physics, KAIST, Daejeon 34141, Republic of Korea
Strong mutual interactions correlate elementary excitations of quantum matter and play a key role in a range of emergent phenomena <cit.>, from binding and condensation <cit.> to quantum thermalization and many-body localization <cit.>. Here, we employ a Rydberg quantum simulator to experimentally demonstrate strongly correlated spin transport in anisotropic Heisenberg magnets, where the magnon-magnon interaction can be tuned to be two orders of magnitude larger than the magnon hopping strength. In our approach, the motion of magnons is controlled by an induced spin-exchange interaction through Rydberg dressing <cit.>, which enables coherent transport of a single Rydberg excitation across a chain of ground-state atoms. As the most prominent signature of a giant anisotropy, we show that nearby Rydberg excitations form distinct types of magnon bound states, where a tightly bound pair exhibits frozen dynamics in a fragmented Hilbert space, while a loosely bound pair propagates and establishes correlations beyond a single lattice site. Our scheme complements studies using resonant dipole-dipole interactions between Rydberg states, and opens the door to exploring quantum thermodynamics with ultrastrong interactions and kinetic constraints <cit.>.
Realization of an extremely anisotropic Heisenberg magnet in Rydberg atom arrays
Jaewook Ahn
August 12, 2023
================================================================================
Quantum simulation of spin models has been established as a powerful tool for unraveling exotic many-body phases and dynamics <cit.>. In a pivotal process of quantum magnetism, the quasiparticle spin excitations (magnons) can propagate through the system by coherent spin exchanges that conserve the total magnetization <cit.>. The inclusion of strong magnon-magnon interactions complicates the underlying spin transport, where the motion of different magnons cannot be separated <cit.>. Similar correlated transport dynamics has been observed in various quantum systems, including ultracold atoms engineered by the superexchange mechanism <cit.>, trapped atomic ions with phonon-mediated spin-spin couplings <cit.>, and Rydberg atom arrays subjected to resonant dipole-dipole interactions <cit.>. These works aim to construct a spin-1/2 Heisenberg model, where the correlations can be tuned by the anisotropy of the XXZ-type Hamiltonian, defined as the strength of the magnon-magnon interaction relative to the spin-exchange rate.
One of the biggest challenges in previous experiments was to acquire a very large anisotropy, for which the strongly correlated dynamics is constrained to flip-flops that conserve not only the total magnetization but also the number of domain walls. This kinetic constraint is key to exotic non-ergodic dynamics, such as Hilbert space fragmentation <cit.> and quantum many-body scars <cit.>. In this work, we demonstrate an approach that can access such an extremely anisotropic regime on a neutral-atom quantum simulator, where ground-state atoms are off-resonantly dressed to a Rydberg state to induce an effective excitation exchange <cit.>. As evidence of the large anisotropy, we show that the propagation of a single Rydberg excitation significantly slows down in the presence of a nearest-neighbor Rydberg excitation, due to the formation of a tightly bound state. While similar magnon bound states have been identified in systems with short-range interactions <cit.> or moderate anisotropies <cit.>, the large long-range anisotropy in our work can further support a new type of bound state with a bond length beyond the nearest neighbor.
Effective spin exchange in a Rydberg Ising model
Our experiments are carried out in a chain of ^87 Rb atoms initially trapped in an optical tweezer array [see Fig. <ref>(a)]. We use a two-photon excitation scheme to couple the ground state |↓⟩=|5S_1/2,F=2,m_F = 2⟩ to the Rydberg state |↑⟩=|71S_1/2,m_J=1/2⟩, which maps the system onto a spin-1/2 chain described by a tilted Ising Hamiltonian (taking ħ=1, where ħ is the reduced Planck constant),
Ĥ_ Ryd = Ω/2∑_i σ̂_i^x - Δ∑_i n̂_i + 1/2∑_i≠ jV_ijn̂_i n̂_j.
Here, σ̂_i^α are Pauli matrices, n̂_i = |r_i⟩⟨ r_i|=(1+σ̂_i^z)/2 denotes the Rydberg-state projector, and Ω and Δ are the Rabi frequency and the detuning of the two-photon transition, respectively. The interaction strength V_ij between Rydberg atoms at sites i and j takes the form V_ij=C_6/r_ij^6, where r_ij is the distance between the atoms and C_6>0 is the van der Waals (vdW) coefficient.
To understand the dynamics of this Rydberg Ising model, we decompose the original Hamiltonian into Ĥ_ Ryd = Ĥ_0 + Ω̂_D, where Ĥ_0 is the diagonal part, and Ω̂_D=(Ω/2)∑_iσ̂_i^x is the off-diagonal driving term that can create or annihilate a single Rydberg excitation. If we label the eigenstates of Ĥ_0 according to the total Rydberg excitation number 𝒩̂_R=∑_i n̂_i, then Ω̂_D only couples states where 𝒩̂_R changes by one. As a result, the coupling usually admixes different 𝒩̂_R subspaces. However, if the energy difference between adjacent blocks of Ĥ_0 is much larger than the coupling strength Ω, these subspaces become dynamically decoupled, and only states of the same 𝒩̂_R are coupled with each other via a perturbation process. This perturbation effect occurs predominantly at the second order and can be described by an effective Hamiltonian Ĥ_eff (see Methods), which has a U(1) symmetry corresponding to the conserved Rydberg excitation number 𝒩̂_R. Figure <ref>(b) visualizes the perturbation process for two atoms, where states |↑↓⟩ and |↓↑⟩ are coupled by a spin-exchange interaction J(σ̂_1^+σ̂_2^-+σ̂_1^-σ̂_2^+) between the ground state and the Rydberg state, with σ̂^±_n=(σ̂^x_n ± iσ̂^y_n)/2. Crucially, the nonvanishing interaction strength J = Ω^2 V_12/4Δ(Δ-V_12) is enabled by unequal energy differences between adjacent 𝒩̂_R sectors. These nonuniform level spacings arise from the vdW interaction and can lead to complicated density-dependent spin exchanges. For example, in a three-atom chain with the central site excited to the Rydberg state [see Fig. <ref>(c)], the spin exchange between the first and the third atom is described by a three-body interaction term Q(σ̂_1^+σ̂_3^-n̂_2+σ̂_1^-σ̂_3^+n̂_2), where Q = Ω^2 V_13 /4(Δ-V_12)(Δ - V_12 - V_13) is the density-dependent coupling strength.
To observe these virtual spin-exchange processes, it is preferable to work in the weak dressing regime Ω≪|Δ|, which, however, results in weaker interaction strengths. Concerning this trade-off, which could be relaxed by a larger Rabi frequency, our experiments are typically performed with |Δ/Ω|∈ [1.5,4]. In this intermediate regime, we demonstrate that the U(1) symmetry is largely preserved and the deviation from the effective theory can be suppressed by a postselection measurement. Actually, we can accurately count Rydberg excitations in each experimental run by single-site resolved fluorescence imaging, which projects the spins to an exact microstate. Therefore, when exploring the dynamics of a specific 𝒩̂_R subspace, events subject to processes breaking the U(1) symmetry can be discarded, while only states remaining in the given symmetry sector are retained <cit.>. This postselection scheme has a high success probability and shows good tolerance to imperfect state initialization.
Quantum walk of a single magnon
We first investigate the dynamics within the 𝒩̂_R=1 subspace of a single Rydberg excitation (magnon). The effective Hamiltonian for this symmetry sector is a simple XY model describing coherent hopping of a single magnon: Ĥ_ eff = ∑_i< j J_ij (σ̂_i^+ σ̂_j^- + σ̂_i^- σ̂_j^+) + ∑_i μ_i n̂_i, where J_ij= Ω^2 V_ij/4Δ(Δ-V_ij) is the rate of the effective spin exchange, and μ_i = -Δ +2δ +∑_j≠ iJ_ij is the on-site potential of the magnon with δ=Ω^2/4Δ.
As a minimal yet nontrivial example, we begin with two sites and measure the spin-exchange process |↓↑⟩↔|↑↓⟩. To this end, two atoms are loaded into the tweezers and prepared in state |↓↓⟩ via optical pumping. Then, the trap is turned off, and the first atom is addressed with a 820-nm laser, making it off-resonant with respect to the transition driven by the global Rydberg beam. The second atom is on-resonant and subsequently driven to the Rydberg state by a π-pulse, creating the desired initial state |↓↑⟩. After that, the global Rydberg beam is significantly detuned to induce the effective spin exchange. The experimental sequence is shown in Fig. <ref>(d), and more details can be found in Refs. <cit.>. Figure <ref>(e) depicts the characteristic oscillation dynamics measured with Ω = 2π× 1.52 MHz, Δ = 2π× 5 MHz, and r=4.95 μ m, where r is the interatomic distance. It is clearly seen that the oscillation is approximately U(1) symmetric, as it mainly occurs in the single-excitation subspace, while states |↓↓⟩ and |↑↑⟩ are rarely populated. The oscillation frequency ∼ 0.80 MHz drawn from the experiment agrees well with the perturbation analysis that gives |J| ≈ 0.78 MHz. Here, the damping of the coherent spin exchange is mainly caused by uncorrelated dephasings from the intermediate-state scattering, and the scheme is intrinsically robust against correlated dephasings from the laser phase noise.
We next measure the distance dependence of the interaction J_ij=J(r_ij) by varying the distance r between the two atoms. As shown in Fig. <ref>(f), the measured potential perfectly matches the theoretical prediction J_±(r) = δ/[(r/r_c)^6∓1], where ± denotes the sign of the detuning, and r_c = (C_6/|Δ|)^1/6 is a characteristic length. For a negative detuning (Δ<0), J_-(r) is a soft-core potential that plateaus at δ for r<r_c and decays with a vdW tail ∼ 1/r^6, similar to the Rydberg-dressing induced interaction between ground-state atoms <cit.>. The potential for a positive detuning (Δ>0) has a distinct behavior: while it has the same plateau value and asymptotic scaling, J_+(r) diverges at r=r_c. This singularity is caused by the facilitation dynamics, where the condition V_i,i+1=Δ makes single-magnon states resonantly coupled with the two-magnon state |↑↑⟩, leading to a breakdown of perturbation theory and the U(1) symmetry. In the facilitation regime, it has been shown previously that a small thermal fluctuation of atomic positions can lead to a strong Anderson localization, hindering the transport of the excitation <cit.>. In contrast, for the U(1) symmetric regime studied in this work, the plateau of the potential makes the dynamics insensitive to the fluctuation of interatomic distance, and a magnon is expected to be highly delocalized.
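For concreteness, the following snippet (our addition; parameter values are taken from the two-atom measurements and Methods, with C_6 = 2π× 1023 GHz·μ m^6) evaluates the soft-core exchange potential J_±(r) together with the characteristic length r_c:

# Units: frequencies in 2π×MHz, lengths in µm (values assumed from the experiment).
Omega, Delta_abs, C6 = 1.52, 5.0, 1023e3     # C6 = 2π×1023 GHz·µm^6 -> 2π×MHz·µm^6

delta = Omega**2 / (4 * Delta_abs)           # plateau value δ = Ω²/(4|Δ|)
r_c = (C6 / Delta_abs) ** (1 / 6)            # soft-core radius, here ≈ 7.7 µm
print(f"r_c = {r_c:.2f} µm, plateau δ = {delta:.4f} (2π×MHz)")

def J(r, sign):
    # Exchange rate J_±(r) = δ/[(r/r_c)^6 ∓ 1]; sign = +1 for Δ>0, -1 for Δ<0.
    return delta / ((r / r_c) ** 6 - sign)

for r in (3.0, 4.95, 7.0, 10.0):
    print(f"r = {r:5.2f} µm:  J_+ = {J(r, +1):+8.4f},  J_- = {J(r, -1):+8.4f}")

At r = 4.95 µm this gives |J| ≈ 2π× 0.12 MHz, i.e. of the same magnitude as the exchange rate fitted from Fig. <ref>(e).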
To demonstrate that the magnon can exhibit a robust quantum walk against atomic positional disorders, we now create a larger array containing 7 atoms with a spacing of 4.95 μ m. In order to prepare the initial state |↓↓↓↑↓↓↓⟩, we apply the individual addressing beam to shift the detuning of the central site, followed by an adiabatic ramping of the global Rydberg beam, which only drives the atom at the center to the Rydberg state [Fig. <ref>(a)]. After the initialization, the addressing beam is turned off, and a red-detuned (Δ<0) Rydberg driving field is applied to induce the effective dynamics. The propagation of the initial excitation can be traced by observing the evolution of the local Rydberg density ⟨n̂_i⟩, as shown in Fig. <ref>(b), where an approximate light-cone wavefront can be identified. The staggered pattern of ⟨n̂_i⟩ during the evolution is clear evidence of quantum interference [Fig. <ref>(c)], as opposed to the Gaussian distribution in a classical random walk. In the current system, the existence of uncorrelated dephasings will eventually destroy the coherence and lead to a uniform steady distribution. To quantify the role of the dephasing, we extract the mean square displacement ⟨ x^2 ⟩ of the magnon [Fig. <ref>(d)], and find good agreement with the simulations based on the Haken-Reineker-Strobl (HRS) model <cit.>, which includes both coherent magnon hoppings and on-site dephasings (with a rate γ=2π× 0.2 MHz). For a larger system, the HRS model predicts that the magnon will continue to spread with no steady-state distribution, but its motion has a quantum-classical crossover: while the initial propagation for t<1/γ is governed by ballistic transport (⟨ x^2 ⟩∝ t^2), the spreading will gradually become diffusive with ⟨ x^2 ⟩∝ t. Such a scaling crossover can be identified in future experiments with increased system size.
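In the single-excitation sector the HRS dynamics reduces to an N×N master equation in which on-site dephasing simply damps the off-diagonal elements of the density matrix. A forward-Euler sketch (our addition; J and γ are set to the experimental scales quoted above) illustrates the early-time ballistic growth of ⟨ x^2 ⟩:

import numpy as np

N, J, gamma = 7, 0.11, 0.2        # sites, NN hopping and dephasing (2π×MHz, assumed)
dt, steps = 0.005, 600            # Euler step (µs) and number of steps

H = J * (np.eye(N, k=1) + np.eye(N, k=-1))
rho = np.zeros((N, N), dtype=complex)
rho[N // 2, N // 2] = 1.0         # magnon initialized on the central site
x = np.arange(N) - N // 2

for s in range(1, steps + 1):
    comm = H @ rho - rho @ H                                   # coherent hopping
    deph = np.diag(np.diag(rho)) - rho                         # off-diagonal damping
    rho = rho + dt * 2 * np.pi * (-1j * comm + gamma * deph)   # HRS master equation
    if s % 200 == 0:
        msd = float(np.real(np.sum(x**2 * np.diag(rho))))
        print(f"t = {s * dt:.1f} µs: <x^2> = {msd:.2f}")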
Dynamics of magnon bound states
Having explored the single-magnon dynamics, we proceed to the observation of correlated motions of multiple magnons. In the two-excitation subspace (𝒩̂_R=2), neglecting the essentially uniform on-site potential, the effective Hamiltonian now reads
Ĥ_eff = ∑_i<j≠ kQ_ijk(σ̂_i^+ σ̂_j^-n̂_k + n̂_kσ̂_i^-σ̂_j^+) + ∑_i<j U_ijn̂_i n̂_j,
where Q_ijk = (G_ijk+G_jik)/2 is the density-dependent hopping strength with G_ijk = Ω^2 V_ij /4(Δ-V_ik)(Δ - V_ik - V_ij), and U_ij=V_ij-4J_ij+ ∑_l≠ i,j(G_lij-J_li) denotes the density interaction between magnons. Note that the density interaction U_ij∼ V_ij is mainly from the zeroth-order Hamiltonian Ĥ_0, while the exchange interaction Q_ijk is induced by the second-order perturbation. This leads to an important characteristic that |U_ij/Q_ijk|∼ (2Δ/Ω)^2≫1, which makes Eq. (<ref>) a long-ranged, highly anisotropic Heisenberg model.
One direct consequence of this large anisotropy is the emergence of a family of magnon bound states. In an infinite spin chain, the two-magnon eigenstate |ψ_K⟩=∑_i≠ jψ_K(i,j)σ̂_i^+σ̂_j^+|↓↓⋯↓⟩ can be labeled by the center-of-mass momentum K, where the wavefunction can be factorized as ψ_K(i,j) = e^iKRϕ_K(r) by introducing the center-of-mass position R = (i+j)/2 and the relative distance r= i-j <cit.>. The bound state has a localized relative wavefunction with ϕ_K(r→∞)→0, whose energy is isolated from the scattering continuum. Therefore, systems initially in the bound state remain localized in the relative coordinate, in stark contrast to the scattering state, where individual excitations propagate freely. Figure <ref>(a) shows the energy spectrum and the bound-state wavefunction for typical parameters Δ/Ω=-3 and V_i,i+1/Δ=-8. The extremely large nearest-neighbor (NN) anisotropy ξ_1=U_i,i+1/Q_i-1,i,i+1≈ 684 in this case gives rise to a high-energy bound state (red curve), where magnons are tightly bound at a relative distance r=1 (nearest neighbors) for all momenta. The strong density interaction also has a significant long-range effect absent in a short-range interacting system <cit.>: the next-nearest-neighbor (NNN) anisotropy ξ_2=U_i,i+2/Q_i-1,i,i+2≈ 4 is also quite large, and can thus support a low-energy loosely bound state (blue curve), whose wavefunction ϕ_K(r) has a larger bond length r>1. We will focus on these two types of bound pairs in the experiment, and expect that the same system gives rise to further varieties of bound states at larger anisotropy or in different lattice configurations.
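To make the quoted anisotropies concrete, the short sketch below (our addition) evaluates ξ_1 and ξ_2 directly from the perturbative couplings, truncating the long-range correction sum in U_ij after a few spectator sites; for Δ/Ω=-3 and V_i,i+1/Δ=-8 it returns ξ_1 ≈ 683 and ξ_2 ≈ 3.9, in agreement with the values quoted above:

# Benchmark point: Delta/Omega = -3, V_nn/Delta = -8, vdW tails V(d) = V_nn/d^6.
Omega, Delta = 1.0, -3.0
V = lambda d: (-8.0 * Delta) / d**6          # V(1) = 24 in units of Omega

J = lambda d: Omega**2 * V(d) / (4 * Delta * (Delta - V(d)))
G = lambda dij, dik: Omega**2 * V(dij) / (4 * (Delta - V(dik)) * (Delta - V(dik) - V(dij)))

def Q(i, j, k):
    # Density-dependent hopping i <-> j conditioned on an excitation at site k.
    return 0.5 * (G(abs(i - j), abs(i - k)) + G(abs(i - j), abs(j - k)))

def U(d, cutoff=8):
    # Magnon-magnon interaction U_{0,d}; spectator sum truncated at |l| <= cutoff.
    corr = sum(G(abs(l), abs(l - d)) - J(abs(l))
               for l in range(-cutoff, cutoff + 1) if l not in (0, d))
    return V(d) - 4 * J(d) + corr

print(f"xi_1 = U(1)/Q(-1,0,1) = {U(1) / Q(-1, 0, 1):.0f}")
print(f"xi_2 = U(2)/Q(-1,0,2) = {U(2) / Q(-1, 0, 2):.1f}")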
To probe the correlated dynamics of the tightly bound Rydberg pair, we prepare an initial state |↓↓↑↑↓↓⟩ in a 6-atom chain via an adiabatic anti-blockade excitation scheme, where the detuning for the center two atoms is swept across the resonant point Δ=V_i,i+1/2. We then quench the system to a fixed detuning and measure the evolution of the two-site correlator Γ_ij=⟨σ̂_i^+ σ̂_j^+ σ̂_i^- σ̂_j^-⟩. For a positive detuning Δ=2π× 12 MHz, the observed correlation function propagates almost perfectly along the directions j=i±1 [see the upper panels of Fig. <ref>(c)], demonstrating that two Rydberg excitations move in a correlated manner as expected [see Fig. <ref>(b)]. In fact, the large NN anisotropy ξ_1≈ -35 in our experiment makes the total NN-Rydberg bonds 𝒩̂_RR=∑_i n̂_in̂_i+1 another conserved charge. The tightly bound Rydberg pairs constitute the symmetry sector (𝒩̂_R=2, 𝒩̂_RR=1), whose dynamics are governed by an NNN hopping term Q∑_i (σ̂_i^+ σ̂_i+2^-n̂_i+1 +H.c.). Here, the strength Q=Q_i,i+2,i+1 corresponds to the exchange process illustrated in Fig. <ref>(c), and determines the propagation speed of the tightly bound pair. To further confirm this analysis, we turn the detuning to a negative value Δ=2π×-3.3 MHz, with which the single-magnon hopping strength J=J_i,i+1 remains unchanged, but the density-dependent hopping is significantly reduced (Q=0.13 MHz→ 0.01 MHz). Consistent with the theoretical prediction, the dynamics of the system becomes almost frozen within the time scale T∼ 2π/J [see the lower panels of Fig. <ref>(c)], at which a single Rydberg excitation should already have spread over the lattice. Note that the slight spreading of the correlator at late time is mainly caused by the imperfect state initialization rather than by excitation hopping. The frozen dynamics observed here is a clear signature of the Hilbert space fragmentation: while all tightly bound states |⋯↑_i↑_i+1⋯⟩ share the local symmetry (𝒩̂_R and 𝒩̂_RR), they form dynamically disconnected Krylov subspaces of dimension 1 (frozen states). In fact, taking only NN vdW interactions into consideration (in accordance with a vanishing NNN hopping strength Q), the effective Hamiltonian can be mapped to a folded XXZ model <cit.>, where spin exchanges are constrained by the conservation of 𝒩̂_RR, leading to a strongly fragmented Hilbert space in the thermodynamic limit.
Unlike the tightly bound state, which has a nearly flat band in most parameter regimes (corresponding to the frozen dynamics), the loosely bound pair displays a finite bandwidth and is therefore more mobile [Fig. <ref>(a)]. To observe the propagation of this longer-range bound state, we prepare a 7-site chain and excite the third and the fifth atom to the Rydberg level. We first choose a small lattice spacing of 4.95 μ m to achieve large anisotropies ξ_1=539 and ξ_2≈ 1.24, for which the produced initial state |↓↓↑↓↑↓↓⟩ has a considerable overlap (≈ 0.24) with the loosely bound state. The upper panels of Fig. <ref>(e) depict the evolution of the experimentally extracted correlation function Γ_ij. In contrast to the tightly bound pair, whose transport is determined by an NNN hopping term, the correlated motion of the loosely bound pair is mediated by two successive NN hopping processes [Fig. <ref>(d)], as evident from the predominant spreading of Γ_ij along the directions i=j±2. As a comparison, we then increase the interatomic distance to 8.5 μ m, at which the NNN anisotropy ξ_2≈ -0.52 is too small to support the long-range bound state for most values of the momenta. In this regime, the observed correlator Γ_ij rapidly spreads over the entire zone with no preferred propagation direction [see the lower panels of Fig. <ref>(e)], which suggests that the two Rydberg excitations are not bound to each other but propagate freely <cit.>.
To further confirm the existence of the bound states, we extract their participation ratios (BR) from the measured correlation map, where the ratios for the tightly bound state and the long-range bound state are defined as BR_1 = ∑_iΓ_i,i+1/Γ_tot and BR_2 = ∑_iΓ_i,i+2/Γ_tot, respectively, with Γ_tot = ∑_i<jΓ_ij. For the system size realized in our experiment, the reflection from the boundary can lead to a finite BR_1 and BR_2 even in the absence of magnon interactions. To estimate this finite-size effect and get a lower reference value for the participation ratio, we assume a uniform thermal distribution of the magnons with Γ_ij=1/Γ_tot. As confirmed by Fig. <ref>, the measured ratio is much larger than this lower bound (dashed curves) during the free-magnon relaxation time ∼ 1/J. Here, the damping of the bound pair at late time is mainly caused by the local dephasing. It is worth pointing out here that atomic positional disorder may slow down the propagation of bound magnons more easily than single magnons, because it contributes a large disordered binding interaction U_ij (especially for the tightly bound pair). To account for the decoherence, the positional disorder, as well as other imperfections, we carry out full numerical simulations based on realistic experimental conditions and the original Rydberg Ising model (see Methods). This full simulation agrees very well with the experimental data (see Fig. <ref>) and suggests improving the coherence of the correlated spin-exchange dynamics in future studies.
Conclusions and outlook
In conclusion, we have demonstrated a new approach to constructing the Heisenberg-type spin model in a Rydberg atom array. Different from previous schemes realized by dipolar exchange interaction and Floquet engineering <cit.>, our approach is based on Rydberg dressing of an Ising Hamiltonian, which can offer a large and widely tunable anisotropy. In the current experiment, we focused on the single-magnon and the two-magnon sector. By creating more excitations in a large-scale array, the system may allow exploration of emergent Hilbert space fragmentation <cit.> and the Krylov-restricted thermalization of multiple magnons <cit.>. The scheme also allows dynamical engineering of spin transport, topological pumping protocols and programmable entanglement distributions <cit.>. Generalizations to higher dimensions could lead to richer physics. In particular, in a 2D lattice, the inclusion of a multicolor dressing field could enable application of a synthetic gauge flux <cit.>, which can give rise to topologically protected chiral motion of the magnon bound state and holds promise for observation of a chiral spin liquid <cit.>.
This research was supported by Samsung Science and Technology Foundation (SSTF-BA1301-52) and National Research Foundation of Korea (2017R1E1A1A01074307). F. Yang and K. Mølmer acknowledge the support from Carlsberg Foundation through the “Semper Ardens” Research Project QCooL and from the Danish National Research Foundation (DNRF) through the Center of Excellence “CCQ” (Grant No. DNRF156). We thank L. You, T. Pohl, A. E. B. Nielsen, H. Yarloo, H. Zhang, A. Cooper, and X. Wu for valuable discussions.
§ METHODS
§.§ Effective Hamiltonian of the system
The effective U(1) symmetric model can be constructed from the Schrieffer-Wolff (SW) transformation <cit.>. Up to the second-order perturbation, the effective Hamiltonian is given by Ĥ_eff = Ĥ_0 + Ĥ_eff^(2) with
Ĥ_eff^(2)=𝒫̂(1/2[𝒮̂,Ω̂_D])𝒫̂,
where 𝒮̂ is a generator satisfying [𝒮̂,Ĥ_0]+Ω̂_D=0, and 𝒫̂ projects out terms that do not conserve 𝒩̂_R. Formally, the generator can be expressed as
𝒮̂=iΩ/2∑_iσ̂_i^y/Δ - ∑_j≠ iV_ijn̂_j.
It is difficult to get an explicit effective Hamiltonian using the above expression. Therefore, we expand 𝒮̂ in orders of the Rydberg excitation number that can influence the spin flip of a single atom at the i-th site, i.e.,
𝒮̂ = (2i/Ω) δ∑_i σ̂_i^y + (2i/Ω) ∑_i≠ jJ_ijσ̂_i^yn̂_j
+ (i/Ω)∑_i≠ j≠ k(G_ijk-J_ij)σ̂_i^yn̂_jn̂_k +⋯ ,
where δ = Ω^2/4Δ,
J_ij = Ω^2V_ij/4Δ(Δ-V_ij), G_ijk = Ω^2V_ij/4(Δ-V_ik)(Δ-V_ik-V_ij).
The above expansion then leads to an effective Hamiltonian Ĥ_eff^(2) = ℋ̂_1-body+ℋ̂_2-body+ℋ̂_3-body+⋯, where
ℋ̂_1-body = δ∑_iσ̂_i^z,
ℋ̂_2-body = ∑_i≠ jJ_ij/2(σ̂_i^+σ̂_j^- + σ̂_i^-σ̂_j^+ -2σ̂_i^zn̂_j)
ℋ̂_3-body = ∑_i≠ j≠ kG_ijk-J_ij/2(σ̂_i^+ σ̂_j^- + σ̂_i^-σ̂_j^+ -σ̂_i^zn̂_j)n̂_k,
are the one-body self-energy shift, the two-body XXZ-type Hamiltonian, and the three-body XXZ term, respectively. The Hamiltonian can be further simplified by the substitution σ̂_i^z = 2n̂_i -1 in a given state sector. For the single-magnon sector (𝒩̂_R=1), the quadratic term n̂_in̂_j can be neglected, which leads to the XY model given in the main text. For the two-magnon sector (𝒩̂_R=2), the cubic term n̂_in̂_jn̂_k can be discarded, and the resulting Hamiltonian can be mapped to Eq. (<ref>). For a general multi-magnon case, the dynamics is governed by a folded XXZ model exhibiting Hilbert space fragmentation (HSF) <cit.>.
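As a consistency check of the second-order result, one can integrate the exact two-atom Hamiltonian Ĥ_Ryd and compare the population transfer |↓↑⟩→|↑↓⟩ with the perturbative rate J = Ω^2 V/[4Δ(Δ-V)]. A minimal sketch follows (our addition; parameters in units of 2π×MHz, with V ≈ 69.5 our estimate of the vdW shift at r = 4.95 μ m from the C_6 coefficient below):

import numpy as np
from scipy.linalg import expm

Om, De, V = 1.52, 5.0, 69.5       # Rabi frequency, detuning, vdW shift (2π×MHz)

sx = np.array([[0., 1.], [1., 0.]])
n = np.array([[0., 0.], [0., 1.]])
I2 = np.eye(2)
H = 0.5 * Om * (np.kron(sx, I2) + np.kron(I2, sx)) \
    - De * (np.kron(n, I2) + np.kron(I2, n)) + V * np.kron(n, n)

psi0 = np.zeros(4, dtype=complex)
psi0[1] = 1.0                     # |down,up> in the basis |dd>, |du>, |ud>, |uu>
ts = np.linspace(0.0, 3.0, 301)   # µs
p_ud = [abs((expm(-2j * np.pi * H * t) @ psi0)[2]) ** 2 for t in ts]

J = Om**2 * V / (4 * De * (De - V))
print(f"perturbative J = {J:+.4f} (2π×MHz); predicted first transfer maximum at")
print(f"t = 1/(4|J|) = {1 / (4 * abs(J)):.2f} µs, numerical maximum near "
      f"t = {ts[int(np.argmax(p_ud))]:.2f} µs")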
§.§ Experimental setup and procedure
The experimental setup of our system is a Rydberg quantum simulator using a neutral atom array of ^87 Rb atoms, similar to our previous experiments <cit.>. The atomic ensembles are cooled and gathered inside a magneto-optical trap (MOT), while the single atoms are trapped inside an 820-nm optical tweezer array of 1 mK depth and sub-Doppler cooled to ∼ 35 μ K with polarization gradient cooling. Atoms are then optically pumped to |↓⟩ = |5S_1/2,F=2,m_F=2⟩. After the ground state preparation, the traps are turned off and the atoms are driven to the Rydberg state |↑⟩ = |71S_1/2,m_J=1/2⟩ with the two Rydberg beams of 780 nm (homemade ECDL) and 480 nm (TA-SHG Pro of Toptica) via a two-photon transition with an intermediate detuning of Δ_I = 2π× 660 MHz from the intermediate state |m⟩ =|5P_3/2,F=3,m_F=3⟩. Quantum operation is performed by a series of Rydberg and addressing laser pulses. After the quantum operation, atoms are trapped again by turning on the optical tweezers, and atoms in the Rydberg states are anti-trapped from the tweezer. The remaining atoms are imaged with an electron-multiplied charge-coupled device (EMCCD, iXon Ultra 888 of Andor) by illuminating the imaging beam. By distinguishing the fluorescence of background and trapped atoms, we could determine the internal state of each individual atom.
The optical tweezer trap and the addressing beam for the state initialization use the same 820-nm laser derived from a Ti:Sapphire oscillator (TiC of Avesta) pumped by a 532-nm laser (Verdi G18 of Coherent). The laser beam passes an acousto-optic modulator (AOM) and is split into zeroth- and first-order beams. The first-order beam is sent to the spatial light modulator (SLM, ODPDM512 of Meadowlark optics), and the optical tweezer array of target and reservoir traps is formed and rearranged with a real-time Gerchberg-Saxton weighted (GSW) algorithm calculated on a GPU (Titan-X Pascal of NVIDIA). The phase for the atom arrays is calculated with a 4 times larger array zero-padded to the initial phase to achieve a resolution smaller than the trap size <cit.>. The zeroth-order beam propagates along a different path, passing an additional AOM followed by an acousto-optic deflector (AOD, DTSXY-400-820 of AA Opto-Electronic) which is used to address the target atom. This 820-nm addressing beam is off-resonant to the 5S→ 5P transition, inducing an a.c.-Stark shift on the target-atom Rydberg transition.
The quantum operation is programmed using a delay generator (DG645 of Stanford Research Systems) and an arbitrary waveform generator (AWG, XRF Agile RF Synthesizer of Moglabs), controlling AOMs of both the addressing beams and the Rydberg beams. The sequence is depicted in Fig. <ref>(d) of the main text, and a more detailed one is given in Extended Data Fig. <ref>. The sequence is divided into two parts: an initialization process driving the target atoms to Rydberg states, and the spin-exchange process inducing the many-body quench dynamics. For the two-atom experiment, the initial state is prepared by addressing one of the atoms to make it off-resonant to the Rydberg beams and applying a resonant π pulse to the other atom [see Extended Data Fig. <ref>(a) and Fig. <ref>(c)]. For all other experiments, the target atoms are addressed, and the Rabi frequency Ω and the detuning Δ of the global Rydberg beams are adiabatically swept according to the following sequence: (1) 0 μ s→ 0.1 μ s, (0,Δ_i)→ (Ω_ exp,Δ_i) (2) 0.1 μ s→ 0.9 μ s, (Ω_ exp,Δ_i)→ (Ω_ exp,Δ_f), and (3) 0.9 μ s→ 1 μ s, (Ω_ exp,Δ_f)→ (0,Δ_f) as depicted in Extended Data Fig. <ref>(b), where Ω_ exp is the Rabi frequency used in the spin-exchange step. The values of these parameters are summarized in Extended Data Table <ref>. With the above initialization, the addressed target atom is adiabatically excited to the Rydberg state [see Extended Data Fig. <ref>(d)].
§.§ Experimental parameters and measured values
The experimental parameters are given in the following tables. Extended Data Table <ref> shows the parameters and measured values for the two-atom spin-exchange dynamics, where Δ is the detuning for the spin exchange, r is the distance between the two atoms, Ω is the Rabi frequency, and J is the spin-exchange frequency fitted from each experiment, e.g., from the data in Fig. <ref>(e) of the main text. The vdW interaction strength V = C_6/r^6 is determined by the distance r with C_6 = 2π× 1023 GHz·μ m^6 corresponding to the Rydberg state |71S_1/2, m_J = 1/2⟩ used in the experiment <cit.>. The values of Ω and J are fitted to the expression P=a+bcos(2π× c× t)×exp(-t/d) with unknowns a, b, c, d and probability P of the initial state, where Ω/2π and J/4π correspond to c. The errors in r, which are plotted in Fig. <ref>(f) of the main text, have the same value 0.3 μ m for all distances, which is limited by the resolution of the image plane, where the beam waist is ∼ 1.2 μ m and the resolution is ∼ 0.3 μ m =1.2/4 μ m because of the zero-padding. Extended Data Table <ref> shows the experimental parameters for the rest of the experiments. Here, Ω_ exp is the Rabi frequency for the spin-exchange dynamics and the maximum Rabi frequency for the quantum annealing in the initial state preparation, Δ_A is the detuning applied on the target atom by the addressing beam (two values, respectively, for the left and the right atom in the two-magnon experiments), Δ_i and Δ_f are the initial and final detunings, respectively, for the detuning sweep of the state initialization, and Δ_ exp is the detuning for the spin-exchange quench dynamics.
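The damped-cosine fit can be reproduced with a few lines of scipy; the snippet below (our addition, with synthetic data standing in for the measured initial-state populations) extracts c and its standard error from the covariance matrix:

import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c, d):
    # P = a + b cos(2π c t) exp(-t/d); c encodes the oscillation frequency.
    return a + b * np.cos(2 * np.pi * c * t) * np.exp(-t / d)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 3.0, 60)                  # µs
data = model(t, 0.5, 0.45, 0.40, 2.0) + 0.03 * rng.normal(size=t.size)

popt, pcov = curve_fit(model, t, data, p0=[0.5, 0.4, 0.3, 1.0])
c_fit, c_err = popt[2], np.sqrt(pcov[2, 2])
print(f"fitted c = {c_fit:.3f} ± {c_err:.3f} MHz (c plays the role of J/4π here)")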
§.§ Experimental imperfections and numerical simulations
Full numerical simulations in Fig. <ref> of the main text take the experimental errors into consideration. Extended Data Table <ref> shows the types of experimental imperfections and their treatment in the numerical simulations. The dominant errors in the dressing scheme are the uncorrelated individual dephasing, mainly due to the spontaneous decay from the intermediate state, the vdW interaction fluctuation due to the finite temperature of the atoms, as well as the state-measurement error. The collective dephasing, mainly induced by the laser phase noise, does not play a significant role in the dynamics because of the decoherence-free feature of the effective model <cit.>. Both individual and collective dephasings are treated with the Lindblad master equation dρ/dt = -i [H, ρ ] + ℒ_ ind(ρ) + ℒ_ col(ρ) <cit.>, where the superoperators ℒ_ ind, ℒ_ col denote the individual (on-site) and the collective phase noise, respectively. The individual dephasing rate γ_ ind≈ 2π× 0.2 MHz was fitted from the three-level model of |g⟩, |r⟩ and the intermediate state |m⟩. The collective phase noise was fitted from the single-atom Rabi oscillation by fixing γ_ ind, and its value is γ_ col≈ 2π× 0.4 MHz. The temperature of the atomic thermal motion T_ atom = 34.27(5) μ K was measured using the release-and-recapture method. With the temperature, we could calculate the motional variation of the atoms with a standard deviation σ_i = √(k_BT/(mω_i^2)) of the position for the trap frequency ω_i. In the simulation, the average effect of such an atomic positional disorder was evaluated with the Monte-Carlo method. The radial and longitudinal position standard deviations are σ_r≈ 0.1 μ m and σ_a≈ 0.3 μ m, respectively. The detection error was treated similarly to ref. <cit.>, where the dominant portion of the conditional error probability P(g|r) is due to the Rydberg decay and the dominant portion of P(r|g) is due to the finite temperature of the atoms. The former is calculated with P(g|r) = 1-exp(-t_ trap/t_1), where t_ trap is the time when the trap is turned off, and the Rydberg lifetime t_1=43(15) μ s was measured with an additional Ramsey experiment <cit.>. The latter probability P(r|g)=P_ recap(t_ trap) is obtained from the release-and-recapture probability curve.
|
http://arxiv.org/abs/2307.04737v1 | 20230710175053 | Constraining Electromagnetic Signals from Black Holes with Hair | [
"Nicole R. Crumpler"
] | astro-ph.HE | [
"astro-ph.HE",
"hep-ph"
] |
Particle production from non–minimal coupling in a symmetry breaking potential transporting vacuum energy
Orlando Luongo
August 12, 2023
==========================================================================================================
§ INTRODUCTION
There is an important distinction between astrophysical and mathematical black holes (BHs). Astrophysically, BHs are observed as compact regions of spacetime in which gravity is so strong that even light cannot escape. These objects have been detected merging with each other <cit.>, emitting electromagnetically-bright jets <cit.>, consuming stars <cit.>, and more. Mathematically, BHs are vacuum solutions of the Einstein equations of general relativity describing spacetime external to a compact mass distribution within an event horizon. The theory underlying mathematical BHs has been used to characterize astrophysical BHs, although there remain clear disconnects between these theoretical models and physical reality.
One such disconnect stems from the so-called “no-hair" theorem. Canonically, the “no-hair" theorem proposes that mathematical BHs are completely characterized by the BH's mass, charge, and spin as seen by an external observer. This theorem has been tested astrophysically using a variety of probes such as radio observations of the shadow of a BH event horizon <cit.>, gravitational wave signals of binary BH (BBH) mergers <cit.>, and stellar orbits around the galactic center <cit.>. No evidence for its violation has yet been discovered in astrophysical BHs. However, the “no-hair" theorem leads directly to the BH information paradox <cit.>. In this paradox, different configurations of matter, radiation, etc. that have fallen into a BH can be described by the same mathematical BH solution, losing information about the initial quantum state of the system. This violates a core tenet of quantum mechanics in a regime in which both the theories of general relativity and quantum mechanics are valid. There have been many attempts to resolve this paradox; one can refer to Ref. <cit.> for a recent review. Despite these attempts, no consensus has yet been reached.
The BH information paradox supports a possibility of richer physics underlying BHs. “Hairy" BHs are novel solutions to the Einstein field equations which are characterized by more than the three parameters of a canonical BH. Some of these BH models have been proposed as explicit solutions to the BH information paradox. One interesting possibility, proposed by Ref. <cit.>, is the firewall BH model. This mathematical BH has a singular shell (otherwise known as a firewall) just outside the horizon, causing general relativity to break down outside the BH horizon. Since general relativity no longer holds in this regime, the BH information problem no longer applies. Such an exotic object appears as a Schwarzschild BH to a distant observer, raising the question of how a firewall BH (and, more generally, other “hairy" BHs) might be distinguished from a canonical BH in astrophysical observations.
Electromagnetic (EM) radiation from astrophysical BHs in baryon-poor environments would be a beacon of new fundamental physics and support the existence of non-canonical BHs. Canonical BHs do not directly source EM radiation, except the weak emission of thermal Hawking radiation <cit.>, which is not observable for BHs of astrophysically relevant masses. However, some “hairy" BH models could radiate an appreciable proportion of their mass as EM radiation. We do not propose a specific model for this mechanism, but as a motivating example consider the firewall BH discussed earlier. Ref. <cit.> suggests several ways in which this model could produce EM radiation including the explosion of an unstable firewall, a BH phase transition from a canonical to a firewall BH, and BBH mergers involving a firewall BH. Overall, our understanding of BHs is inconsistent, motivating searches for generic signals of deviations from canonical BH models.
Given that only non-canonical BHs can radiate appreciably, there are two important considerations in order to distinguish this radiation from typical astrophysical sources. Firstly, we must be confident that there is a BH in the region sourcing the radiation. Secondly, the BH must be in a sufficiently baryon-poor environment and the emitted radiation must be sufficiently energetic that we can be confident the radiation is not produced by standard processes such as relativistic jets. The best available observable satisfying these considerations is the concurrent observation of BBH mergers with gravitational wave detectors and EM radiation instruments. Gravitational wave detectors such as the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO, <cit.>) and Virgo <cit.> regularly observe the mergers of stellar-mass BHs. These events have only been observed extragalactically, with local BBH merger rates measured to be ∼ 10 Gpc^-3 yr^-1 <cit.>. Thus, any observable EM radiation from such events must be extremely energetic, on the order of supernova energies.
There is typically insufficient baryonic matter surrounding BBHs to produce EM radiation observable at these extragalactic distances and energy scales. Some models have been proposed to produce a gamma-ray burst (GRB) during a BBH merger. Most of these models require rapid accretion during the merger <cit.>, necessitating a baryon-rich environment, and all of these models have been contested <cit.>. Another class of model involves charged BHs <cit.>, but the required charge is unreasonably large <cit.>. Given the shortcomings of these models, no extragalactically-observable EM signal is expected from a stellar-mass BBH merger. Thus, in this paper, we investigate the observational signatures of a stellar-mass BH directly releasing some of its mass as EM radiation during a BBH merger as a novel indicator of the existence of “hairy" BHs.
Since BBH mergers are catastrophic events characterized by short timescales, a large amount of energy could be emitted by the BH in a burst of EM radiation. We parameterize the amount of energy released by ϵ such that ϵ M is the energy emitted directly by the BH in EM radiation into the ambient environment. The characteristic frequency of the EM signal is independent of the “hairy" BH model sourcing the radiation because any such emission directly from the BH must tunnel out of the gravitational well in the same manner as Hawking radiation <cit.>. If the Schwarzschild radius of a BH is r_s, then the characteristic frequency of the radiation emitted by the BH is
f=1/(2 r_s)=3.3× 10^-17(M/M_⊙)^-1 MeV.
Throughout this paper, we work in natural units where ħ=c=k_B=ϵ_0=1 unless otherwise stated. The frequencies emitted by stellar mass BHs are very low-frequency radio waves. At these frequencies, all of the radiation is absorbed in the interstellar medium, predominantly via free-free absorption by the warm ionized medium <cit.>. Thus, this radiation is not directly observable, but is absorbed and re-emitted as a secondary signal. We calculate the range of M and ϵ for which this secondary signal could be detected. In future work, model-dependent effects will need to be included to augment this generic parameterization. Although there are “hairy" BH models capable of producing EM radiation, no complete model able to make quantitative predictions for this effect exists currently. We hope this paper will motivate others in the field to work through the details of such models.
In this paper, we constrain a broad class of “hairy" BH models using a generic and model-independent EM signal that is characterized by the BH mass (M) and the fraction of that mass that is lost to EM radiation (ϵ). In Section <ref>, we discuss the two phenomenologically distinct cases in which radiation is emitted and derive the critical value of ϵ that separates them. This division is set by the Schwinger limit, above which the BH radiation triggers pair production resulting in a GRB and below which the EM field accelerates ambient charged particles to create an overdensity of cosmic rays. In Section <ref> we characterize the extragalactic observability of a GRB created by the BH radiation to constrain ϵ given the non-detection of GRBs from BBH mergers. In Section <ref> we describe the electron and proton cosmic ray energy spectrum created below the Schwinger limit and discuss the difficulties of observationally constraining ϵ in this less-energetic regime. In Section <ref> we summarize our results.
§ THE SCHWINGER LIMIT
The Schwinger limit dictates the critical value of ϵ separating the two phenomenologically distinct cases in which radiation is emitted for a particular BH mass. This limit, derived from quantum electrodynamics, sets the field strength at which an electric field becomes nonlinear due to the spontaneous production of electron-positron pairs <cit.>. Quantitatively, the Schwinger limit occurs at an electric field strength of ℰ_C=m_e^2/e=0.86 MeV^2. This corresponds to a field energy density of u_C=ℰ_C^2=0.74 MeV^4.
This field energy density can be related to ϵ as follows. We assume the radiation from the BH is spread over a volume one wavelength (λ) in thickness outside of the BH Schwarzschild radius. Then, the energy density of the BH radiation is
u=ϵ M/(4/3 π ((r_s + λ)^3 - r_s^3)) = 3.1×10^9 ϵ (M/M_⊙)^-2 MeV^4.
Setting u=u_C and solving for the critical value of ϵ gives
ϵ_C=2.4×10^-10(M/M_⊙)^2.
Therefore, ϵ_C∼ 10^-10-10^-6 for BH masses ranging from 1 to 50 M_⊙. For ϵ>ϵ_C, pair-production dominates and results in a GRB as discussed in Section <ref>. For ϵ<ϵ_C, the EM field accelerates ambient charged particles, creating cosmic ray electrons and protons as discussed in Section <ref>.
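As a numerical cross-check (our sketch, not part of the original analysis), the characteristic frequency and critical ϵ can be evaluated in Python; the mass and gravitational-constant conversions are standard natural-unit values, and all function names are our own:

import numpy as np

M_SUN_MEV = 1.116e60                 # one solar mass expressed in MeV
G_MEV = 6.709e-45                    # Newton's constant in MeV^-2
U_CRIT = 0.86**2                     # Schwinger energy density, MeV^4

def schwarzschild_radius(M_msun):
    # r_s = 2 G M, returned in MeV^-1 (natural units)
    return 2.0 * G_MEV * M_msun * M_SUN_MEV

def char_frequency(M_msun):
    # f = 1/(2 r_s): characteristic frequency of the BH radiation, MeV
    return 1.0 / (2.0 * schwarzschild_radius(M_msun))

def epsilon_crit(M_msun):
    # Critical eps from u = eps M / (4/3 pi ((r_s+lam)^3 - r_s^3)) = u_C
    rs = schwarzschild_radius(M_msun)
    lam = 1.0 / char_frequency(M_msun)          # one wavelength, MeV^-1
    shell = 4.0 / 3.0 * np.pi * ((rs + lam)**3 - rs**3)
    return U_CRIT * shell / (M_msun * M_SUN_MEV)

# Reproduces f = 3.3e-17 MeV and eps_C = 2.4e-10 for M = 1 M_sun
print(char_frequency(1.0), epsilon_crit(1.0))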
§ GAMMA-RAY EMISSION ABOVE THE SCHWINGER LIMIT
Above the Schwinger limit, the energy density of the field is large enough to result in electron-positron pair production <cit.>. Qualitatively, the electron-positron-photon gas thermalizes due to Thomson scattering and expands relativistically as an ideal fluid, creating an object known as a fireball in GRB literature <cit.>. We denote the lab frame of an Earth observer as S and the comoving frame of the fluid as S'.
When the EM radiation is first emitted by the BH, the lab and fluid frames coincide. The initial temperature of the fireball in both S and S' is given by
T_0=(E/(V_0 g_0 a))^1/4
where a=π^2/15, E is the energy dumped into a region of volume V_0, and g_0 = 2.75=11/4 is half of the effective degrees of freedom for a plasma consisting of photons, electrons, and positrons in thermal equilibrium <cit.>. Again assuming that the radiation from the BH is spread over a volume one wavelength in thickness outside of the BH Schwarzschild radius, the initial temperature is
T_0=(ϵ M/(4/3 π((r_s+λ)^3-r_s^3) (11/4) a))^1/4 = 200 ϵ^1/4(M/M_⊙)^-1/2 MeV.
Assuming that any remnants of stellar ejecta and envelopes have long dispersed, we neglect any external baryon contributions to the dynamics of the fireball. Thus, the fireball is a relativistic radiation-dominated fluid, which rapidly accelerates to γ≫1 under its own super-Eddington radiation pressure. Because the fireball is created outside the Schwarzschild radius of the BH, the system originates in a region of small curvature. Consequently, general relativistic and gravitational redshift effects can be neglected. Employing the usual relativistic conservation equations of baryon number and energy-momentum from Ref. <cit.> in the limit where γ≫1 yields the following scaling relations for each fluid shell <cit.>
γ(r) ∼(r/R_0)
T'(r) ∼ T_0(r/R_0)^-1∼T_0/γ(r)
where r is the distance from the origin in the lab frame and R_0 is the initial width of the fireball. These relations apply so long as the fireball is ultra-relativistic, radiation-dominated, and opaque due to Thomson scattering. So, as the fireball expands from the origin, the bulk Lorentz factor continues to increase due to the acceleration from the radiation pressure of the fluid. To first order in γ, the width of the fireball in the lab frame is constant R(r)= R_0 <cit.>. This requires the width in the comoving frame to increase as R'(r)= γ(r)R_0, illustrating why the fireball cools in its comoving frame. In the lab frame, the temperature is blue-shifted by
T(r)=γ(r)T'(r)= T_0
since the fluid is moving relativistically towards the observer. Thus, so long as the scaling relations apply, a lab observer sees each shell of the fireball at the same constant temperature, T_0.
Within each fluid shell, the number density of electron-positron pairs in the comoving frame decreases as the fireball cools. Eventually, the process of pair creation and annihilation freezes out when the time for a positron to annihilate with an electron is of the same order as the dynamical time. This occurs at a comoving temperature of T'∼ 20 keV <cit.>. At this temperature, the proportion of the initial energy from the BH contained in the remaining electron-positron pairs is negligible <cit.>. So, nearly all of the initial ϵ M is contained in photons that had been trapped in the fluid by Thomson scattering. When pair production freezes out, the Thomson opacity decreases dramatically and these photons escape. Since the comoving temperature depends only on r, each shell experiences this freeze out as it moves through the same radius in the lab frame. Thus, the characteristic time delay between when photons free-stream from the inner and outermost edges of the fireball is given by <cit.>
δ t∼R_0/c=2.0×10^-5(M/M_⊙) s
since the initial radius of the fireball is set by the wavelength of the BH radiation, λ. These timescales are well below O(1 second) and are therefore consistent with a short GRB.
When the photons are able to free stream from the fireball, an observer on Earth sees a nearly thermal black body spectrum at temperature T_0 radiating from the BH <cit.>. The black body spectrum in photon number as a function of photon frequency and the fireball temperature is given by
B_f(T_0)=2f^2/(e^(2π f/T_0)-1) MeV^2 s^-1 Hz^-1 sr^-1.
The peak photon energy for this spectrum is
E_peak=1.6 T_0=320ϵ^1/4(M/M_⊙)^-1/2 MeV.
These peak energies are plotted in Figure <ref> as a function of BH mass and ϵ. These are the most likely photon energies to be emitted from the fireball, and all peak energies lie in the gamma-ray band (energy >1 MeV).
Since nearly all the energy from the fireball is converted into photons at the peak energy, the number of photons emitted is
N_γ∼ϵ M/E_peak∼ 3.4×10^57ϵ^3/4(M/M_⊙)^3/2.
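Continuing the sketch above (and reusing schwarzschild_radius and M_SUN_MEV from it), the fireball temperature, peak photon energy, and photon count follow directly; again, this is our illustration rather than part of the original analysis:

import numpy as np

A_RAD = np.pi**2 / 15.0     # radiation constant a in natural units
G0 = 11.0 / 4.0             # half the effective d.o.f. of the e+e-/photon gas

def fireball_T0(M_msun, eps):
    # Initial fireball temperature T_0 in MeV
    rs = schwarzschild_radius(M_msun)
    lam = 2.0 * rs                               # lambda = 1/f = 2 r_s
    shell = 4.0 / 3.0 * np.pi * ((rs + lam)**3 - rs**3)
    return (eps * M_msun * M_SUN_MEV / (shell * G0 * A_RAD))**0.25

def peak_energy(M_msun, eps):
    # E_peak = 1.6 T_0 for the black body photon-number spectrum, MeV
    return 1.6 * fireball_T0(M_msun, eps)

def n_photons(M_msun, eps):
    # N_gamma ~ eps M / E_peak if nearly all energy emerges near the peak
    return eps * M_msun * M_SUN_MEV / peak_energy(M_msun, eps)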
Over the full range of photon energies shown in Figure <ref>, the most sensitive telescope is the Fermi Gamma-ray Space Telescope. The two instruments onboard Fermi are the Large Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). The LAT observes photon energies in the range 20 MeV-300 GeV with a sensitivity of 10^-4 erg cm^-2 <cit.>. The GBM observes photon energies in the range 8 keV-40 MeV with a sensitivity of 0.5 ph cm^-2 s^-1 <cit.>. For the gamma-ray energies emitted by 1-50 M_⊙ BHs over a timescale of ≲ 1 second, Fermi's sensitivity limits require a flux of F≳ 1 ph cm^-2. This minimum flux can be used to calculate the maximum distance, d, to which EM emission by a BH above the Schwinger limit is observable.
d=√(N_γ/(4π F)) = 5360 ϵ^3/8(M/M_⊙)^3/4 Mpc.
These distances are plotted in Figure <ref>. For a given distance, we can solve for the minimum ϵ for which Fermi could observe such an event extragalactically.
ϵ_min=2×10^-5(d/100 Mpc)^8/3(M/M_⊙)^-2
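Inverting the photon count against a flux threshold reproduces the detection horizon and ϵ_min quoted above; a short continuation of the same sketch (F_MIN is the 1 ph cm^-2 threshold adopted in the text):

MPC_CM = 3.086e24      # cm per Mpc
F_MIN = 1.0            # adopted Fermi flux threshold, ph cm^-2

def horizon_mpc(M_msun, eps):
    # Maximum distance at which the burst meets the flux threshold
    d_cm = (n_photons(M_msun, eps) / (4.0 * np.pi * F_MIN))**0.5
    return d_cm / MPC_CM

def eps_min(M_msun, d_mpc):
    # Invert horizon_mpc using N_gamma proportional to eps^(3/4)
    n_req = 4.0 * np.pi * F_MIN * (d_mpc * MPC_CM)**2
    return (n_req / n_photons(M_msun, 1.0))**(4.0 / 3.0)

# Reproduces eps_min ~ 2e-5 at d = 100 Mpc for M = 1 M_sun
print(eps_min(1.0, 100.0))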
The Gravitational-wave (GW) Transient Catalog lists events detected by LIGO <cit.>, Virgo <cit.>, and the Kamioka Gravitational Wave Detector (KAGRA, <cit.>). Most BBH merger candidates listed in the catalog are posted on NASA's General Coordinates Network, which contains both concurrent and follow-up EM observations of GW triggers by the Fermi GBM. The Fermi GBM has an 8 steradian field-of-view <cit.>, which is large enough to cover the full LIGO/Virgo 90% confidence GW localization region if the telescope is well-aligned at the time of the trigger. We cross-reference all BBH mergers in the GW Transient Catalog with Fermi observations from the General Coordinates Network. For BH masses ranging from 10 to 50 M_⊙ in intervals of 10 M_⊙, we identify the nearest BBH merger for which Fermi observed at least 90% of the localization region and recorded no GRB event. The GW events constraining each mass interval are listed in Table <ref>. These non-detections are used to constrain ϵ for each BH mass. To do this, we assume the furthest distance (luminosity distance + error bar) measured by LIGO/Virgo and calculate the minimum value of ϵ needed such that the event would be observable with the Fermi GBM for the observed BH mass. All constrained values of ϵ are above the Schwinger limit for the given BH mass and all result in ∼ MeV photons which are in the energy range observable by the GBM. These constraints are listed in Table <ref>. Assuming all BHs are “hairy" BHs capable of producing this signal, the current upper bounds on ϵ are ϵ<10^-5 for 10, 30, 40 M_⊙ BHs and ϵ<10^-4 for 20, 50 M_⊙ BHs since no high energy EM signal was observed from these BBH mergers. These constraints will improve as more GW events with concurrent Fermi observations are detected.
There has been one observation of a GRB and BBH merger occurring concurrently. An offline search of Fermi GBM data following the detection of GW150914 <cit.> revealed a 1 second short GRB occurring 0.4 seconds after the LIGO trigger with a localization consistent with that of the GW signal <cit.>. The GRB transient was near the detection threshold for the GBM (2.9σ detection), was not detected by any other instrument, and the localization was poorly constrained <cit.>. Thus, the GRB cannot be confidently associated with GW150914. Assuming that the GRB did indeed originate from the BBH merger, we can identify the range of ϵ that is consistent with the measured properties of the merger and GRB. From the GW Transient Catalog, the constituent BH masses range from ∼30-40 M_⊙ and the range of luminosity distances is 270-590 Mpc. For these masses and distances, the minimum value of ϵ such that the event is observable with the Fermi GBM ranges from ϵ_min∼10^-7-10^-6, resulting in peak photon energies of ∼ 1-3 MeV. This range of peak photon energies is consistent with the properties of the observed GRB, which peaked near an MeV <cit.>. Since this event was barely above the GBM's detection threshold, we anticipate ϵ∼ϵ_min. Therefore, the observed GRB is consistent with a GRB produced via rapid EM emission directly from a 30-40 M_⊙ “hairy" BH for ϵ∼10^-7-10^-6.
§ GALACTIC COSMIC RAY SIGNAL BELOW THE SCHWINGER LIMIT
Below the Schwinger limit the electric field emitted by the BH propagates outwards, accelerating ambient charged particles. Mutual attraction between the protons and electrons inhibits charge separation, requiring that the particles have the same Lorentz factor on average. Since the protons are a factor of ∼10^3 heavier than the electrons and the particle energy scales with mass, most of the BH field energy is absorbed by protons. Thus, the dynamics of the system are set by the protons.
A rapid burst of isotropic EM radiation can be generically described by a coherent single-wavelength pulse. The exact duration and coherence of emission is model-dependent, but so long as the emission occurs on the short timescales characteristic of BBH mergers, the signal is sufficiently similar to the single-wavelength pulse approximation. First, we consider the acceleration of a single charged particle due to this strong EM pulse. In the (+ - - -) metric, the relativistic Lorentz force law is
du^α/dτ=e/mF^αβu_β
where e is the elementary charge, m is the proton or electron mass, u^α is the particle's 4-velocity, and τ is the proper time in the instantaneous rest frame of the particle. For a sinusoidal EM field, the characteristic timescale of field variation is ℰ/(dℰ/dt) ∼ 1/f, where ℰ is the electric field magnitude and f is the frequency of the BH radiation. Since f∼10^-17 MeV is very small, this characteristic timescale is very long. So, the field magnitude is nearly constant in time and the system can be approximated by a constant crossed field. For the ranges of ϵ considered here, the absorption length for the EM field is long enough that the average charged particle experiences a field weak enough to neglect special relativistic transformations of the field, yet strong enough that the protons are quickly accelerated to v∼ 1 and radiate negligibly. We choose a coordinate system such that the electric field is directed along the x-axis and the magnetic field along the y-axis; the Faraday tensor is then
F^αβ=[ 0 -ℰ 0 0; ℰ 0 0 ℰ; 0 0 0 0; 0 -ℰ 0 0 ]
where ℰ is the magnitude of the electric field in MeV^2. Since e/mF^αβ is constant, the Lorentz force equation has a matrix exponential solution
u^α(τ)=exp(e/mτ F^α_β)u^β(0).
The particles accelerate from the thermal speed of the plasma (v∼ 0), fixing u^β(0).
We compute u^α(τ) for the given Faraday tensor and use this to extract the Lorentz factor, γ, and the components of the 3-velocity in the lab frame, v.
γ(τ) =1+e^2ℰ^2τ^2/2m^2
v_x(τ) =2eℰmτ/2m^2+e^2ℰ^2τ^2
v_y(τ) =0
v_z(τ) =1-2m^2/2m^2+e^2ℰ^2τ^2
The final kinetic energy of the particle depends on the time at which the particle exits the field in its instantaneous rest frame, τ_f. By definition, γ(τ)=dt/dτ. Thus,
∫_0^τ_fγ(τ)dτ=τ_f+e^2ℰ^2τ_f^3/6m^2=∫_0^t_fdt=1/f
since in the lab frame the particle is in the field for t_f=1/f. The magnitude of the EM field is sufficiently large that we can neglect the term linear in τ_f, giving
τ_f=(6m^2/e^2ℰ^2f)^1/3
= 1.2×10^81/ℰ^2/3(M/M_⊙)^1/3 MeV^-1,
where here, and in all following numerical expressions, we set m=m_p.
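The matrix-exponential solution and the cubic for τ_f can be verified numerically. In the sketch below (ours), the mixed tensor F^α_β = F^αμ η_μβ is nilpotent for this crossed field, so the exponential series terminates and reproduces the closed-form γ(τ):

import numpy as np
from scipy.linalg import expm

def four_velocity(tau, omega):
    # u(tau) = expm((e/m) tau F^a_b) u(0) for E = E x_hat, B = E y_hat,
    # with omega = e E / m; F^a_b is nilpotent, so expm is a polynomial.
    Fmix = omega * np.array([[0.0, 1.0, 0.0, 0.0],
                             [1.0, 0.0, 0.0, -1.0],
                             [0.0, 0.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0, 0.0]])
    return expm(tau * Fmix) @ np.array([1.0, 0.0, 0.0, 0.0])

# Agrees with gamma(tau) = 1 + omega^2 tau^2 / 2 and v_y = 0
omega, tau = 2.0, 3.0
u = four_velocity(tau, omega)
assert np.isclose(u[0], 1.0 + 0.5 * omega**2 * tau**2)
assert np.isclose(u[2], 0.0)

def tau_exit(omega, f):
    # Positive real root of tau + omega^2 tau^3 / 6 = 1/f
    roots = np.roots([omega**2 / 6.0, 0.0, 1.0, -1.0 / f])
    return float(roots[np.argmin(np.abs(roots.imag))].real)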
Now we characterize the absorption of energy from the BH EM field by ambient protons. We partition the volume around the BH into spherical shells one wavelength in thickness, such that the distance from the BH is parameterized by the dimensionless number j, the number of wavelengths from the Schwarzschild radius of the BH. Let E be the total EM energy incident onto a shell j wavelengths from the BH. Assuming j is large, the initial energy density of the shell is
u = E/(4/3 π[(r_s+jλ)^3-(r_s+(j-1)λ)^3]) ∼ E/(4πλ^3 j^2) = 3.0×10^-51 (E/j^2)(M/M_⊙)^-3 MeV^4.
Then the magnitude of the electric field is
ℰ=√(u)
=5.4×10^-26(E/j^2)^1/2(M/M_⊙)^-3/2 MeV^2.
The kinetic energy of each proton after interacting with the EM wave is
K∼γ(τ_f)m
= 1.0× 10^-5(E/j^2)^1/3(M/M_⊙)^-1/3 MeV.
This gives the kinetic energy of one proton in the jth shell. To derive the total kinetic energy imparted to protons in the jth shell, we assume a homogeneously distributed number density of charged particles with n=n_p=n_e∼ 1 cm^-3∼ 10^-32 MeV^3, consistent with the Milky Way interstellar medium <cit.>. The total number of protons in the jth shell is
N∼ 4 π n λ^3 j^2=2.6×10^18j^2 (M/M_⊙)^3.
So the total kinetic energy absorbed in the jth shell is
K_tot=N K=2.6×10^13j^4/3E^1/3(M/M_⊙)^8/3 MeV.
This gives a differential equation for the energy remaining in the field,
d E/d j=-K_tot
subject to the initial condition E(0)=ϵ M. This equation can be integrated to yield
E(j)= (1.1×10^40ϵ^2/3(M/M_⊙)^2/3-7.5×10^12(M/M_⊙)^8/3j^7/3)^3/2 MeV.
Solving for the value of j at which E(j)=0 gives the dimensionless absorption length, j_abs,
j_abs= 4.4×10^11ϵ^2/7(M/M_⊙)^-6/7.
The absorption length, j_absλ, determines the number of protons accelerated by the field, as well as the average kinetic energy and Lorentz factor of each proton:
N_abs =4/3π n (j_absλ)^3=7.1×10^52ϵ^6/7(M/M_⊙)^3/7
K_avg =ϵ M/N_abs=1.6×10^7ϵ^1/7(M/M_⊙)^4/7 MeV
γ_avg ∼K_avg/m=1.7×10^4ϵ^1/7(M/M_⊙)^4/7
The number of electrons accelerated by the field is also N_abs and the average Lorentz factor of the electrons must be the same as that for the protons.
We perform this calculation for values of ϵ ranging from ϵ=10^-20 to the Schwinger limit for 1-50 M_⊙ BHs. The resulting average proton kinetic energies as a function of ϵ are plotted in Figure <ref>. Overall, the average kinetic energy per proton ranges from 20 GeV for a 1 M_⊙ BH with ϵ=10^-20 to 20 TeV for a 50 M_⊙ BH at the Schwinger limit (ϵ∼10^-6). The average proton kinetic energies can be used to calculate the corresponding average electron kinetic energies. Overall, the average kinetic energy per electron ranges from 0.01 GeV for a 1 M_⊙ BH with ϵ=10^-20 to 10 GeV for a 50 M_⊙ BH at the Schwinger limit. These relativistic protons and electrons are cosmic rays.
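A compact version of this calculation (our sketch; N_ISM is n ∼ 1 cm^-3 converted to natural units, and M_SUN_MEV, G_MEV are as in the earlier sketch) uses the closed-form absorption length obtained by integrating dE/dj:

import numpy as np

C_K = 2.6e13            # prefactor of K_tot, MeV
N_ISM = 7.7e-33         # n ~ 1 cm^-3 expressed in MeV^3
M_P = 938.3             # proton mass, MeV

def j_abs(M_msun, eps):
    # Root of E(j)^(2/3) = (eps M)^(2/3) - (2/7) C_K M^(8/3) j^(7/3)
    E0 = eps * M_msun * M_SUN_MEV
    return (7.0 * E0**(2.0 / 3.0)
            / (2.0 * C_K * M_msun**(8.0 / 3.0)))**(3.0 / 7.0)

def cosmic_ray_averages(M_msun, eps):
    # N_abs, K_avg, gamma_avg for protons swept up within j_abs wavelengths
    lam = 4.0 * G_MEV * M_msun * M_SUN_MEV        # lambda = 2 r_s, MeV^-1
    N_abs = 4.0 / 3.0 * np.pi * N_ISM * (j_abs(M_msun, eps) * lam)**3
    K_avg = eps * M_msun * M_SUN_MEV / N_abs
    return N_abs, K_avg, K_avg / M_P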
Cosmic rays of these energies are difficult to identify with a point source on the sky. Ambient magnetic fields confine these cosmic rays to their host galaxies and cause them to quickly diffuse (on timescales ≲ 0.01 years) into the galactic background of cosmic rays. Thus, these cosmic rays are indistinguishable from cosmic rays produced by supernova remnants <cit.>, stellar winds or flares <cit.>, and other processes. Although these particles create secondary signals as they lose energy to bremsstrahlung, ionization, synchrotron radiation and inverse Compton scattering for electrons and to inelastic collisions for protons <cit.>, the timescales for these processes are much longer than the diffusion time (≳ 10^6 years). Therefore, these secondary signals are also indistinguishable from those produced by other cosmic ray processes.
Because these cosmic rays are mixed with and are indistinguishable from cosmic rays produced via other mechanisms, it is not possible to confidently identify cosmic rays due to EM radiation from BHs either in external galaxies or in the Milky Way. Therefore, it becomes difficult to place strong constraints on ϵ below the Schwinger limit. Although this avenue cannot constrain ϵ, the effect is still potentially observable. Should the BBH merger occur on a sufficiently strong magnetic field background (B>10 MeV^2=0.05 T), the ultrarelativistic electrons would produce synchrotron radiation in the X-ray band, motivating X-ray observations of BBH mergers.
§ DISCUSSION AND CONCLUSIONS
The theoretical understanding of BHs is inconsistent, motivating searches for generic signals of deviations from canonical BH models. In this paper, we have constrained a broad class of “hairy" BH models capable of emitting a fraction of their mass as EM radiation. Since this radiation is sourced directly from the BH, it must tunnel out of the BH's gravitational well in the same manner as Hawking radiation. Thus, the characteristic frequency of the radiation depends only on the mass of the BH, resulting in a signal that is generic and model-independent. We derive the critical value of ϵ, the fraction of the BH mass released as radiation, above which the field strength triggers a GRB and below which ambient particles are accelerated to cosmic ray energies. Because no extragalactically-observable EM signal is expected from a stellar-mass BBH merger, we find that concurrent observations of BBH mergers with GW detectors and EM radiation instruments offer the best data to detect such a signal.
In the GRB regime, the BH mass and ϵ fix the initial volume and temperature of the electron-positron fireball. The fireball expands relativistically, maintaining constant temperature in the frame of an Earth observer and cooling in its comoving frame. Once the fireball is sufficiently cool in its frame, pair-production freezes out and the photons free stream. In the frame of an Earth observer, these photons have energies described by a black body spectrum at the initial temperature of the fireball. Thus, the energy deposited by the BH is re-emitted as gamma-rays over a short timescale. By cross-referencing GW events with concurrent Fermi GBM observations of the localization region, we place upper bounds on ϵ. These bounds are ϵ<10^-5-10^-4 for 10-50 M_⊙ BHs depending on the BH mass since no high energy EM signal was observed from these BBH mergers. These constraints will improve as more GW events with concurrent Fermi observations are detected. We also discuss the weak detection of a GRB following GW150914, and find that this event is consistent with a GRB produced via rapid EM emission directly from a “hairy" BH for ϵ∼ 10^-7-10^-6.
Below the Schwinger limit, the EM radiation can be described by a constant crossed field. The dynamics of the system are fixed by the ambient protons, which are rapidly accelerated to v∼ 1 by the field, absorbing energy as the radiation propagates away from the BH. We solve the differential equation describing the energy lost by the field to calculate the absorption length as a function of BH mass and ϵ. This absorption length fixes the average energy of the ambient protons and electrons that interacted with the BH radiation. For 1-50 M_⊙ BHs and ϵ ranging from 10^-20 to the Schwinger limit, the average kinetic energy per proton ranges from 20 GeV-20 TeV and the energy per electron ranges from 0.01-10 GeV. At these energies, cosmic rays have a short diffusion length due to the galactic magnetic field and are mixed in with other astrophysical cosmic rays. Additionally, the secondary signals from cosmic rays of these energies are produced on too long a timescale to be attributed to EM radiation directly from a BH. Overall, constraining ϵ in this less energetic regime is difficult. Future work could investigate BBH mergers in strong background magnetic fields. In this case, the ultrarelativistic electrons emit X-rays via synchrotron radiation that may be observable.
Although this work benefits from employing a model-independent approach to generically characterize radiation emitted directly from “hairy" BHs, model-dependent effects will need to be included to augment this general parameterization. Some “hairy" BH models, such as the firewall BH <cit.>, are capable of producing EM radiation. But currently, no complete model able to quantitatively characterize this effect exists. We hope this paper will motivate others in the field to work through the details of such models. The firewall BH metric of Ref. <cit.> is particularly well-suited for this. Given this metric and a parameterized charge distribution adhered to the firewall, one could use numerical relativity to simulate the emission of gravitational and EM radiation during a BBH merger involving a firewall BH. This approach offers independent constraints on ϵ as a function of the charge distribution parameter and may be able to constrain ϵ in cases below the Schwinger limit.
Strengthening constraints on ϵ will be an important pursuit for future work given the relatively loose bounds found in this paper. In general, “hairy" BHs are well-motivated by the BH information paradox. However, such BHs must appear nearly canonical to a distant observer due to observational evidence in favor of the “no hair" theorem. Therefore, novel approaches to constraining “hairy" models will continue to be vital in the search for new fundamental physics.
We thank Surjeet Rajendran and David Kaplan for useful discussions. We also thank Nadia Zakamska and Erwin Tanin for their edits to this manuscript.
This work is supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE2139757. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation.
10
Abbott_2019
B.P. Abbott, R. Abbott, T.D. Abbott, S. Abraham, F. Acernese,
K. Ackley et al., GWTC-1: A Gravitational-Wave Transient Catalog of
Compact Binary Mergers Observed by LIGO and Virgo during the First and Second
Observing Runs,
https://doi.org/10.1103/PhysRevX.9.031040Physical Review X
9 (2019) 031040.
Vayner_2021
A. Vayner, N. Zakamska, S.A. Wright, L. Armus, N. Murray and
G. Walth, Multiphase Outflows in High-redshift Quasar Host
Galaxies, https://doi.org/10.3847/1538-4357/ac2b9eThe Astrophysical Journal
923 (2021) 59.
Komossa_2015
S. Komossa, Tidal disruption of stars by supermassive black holes:
Status of observations,
https://doi.org/10.1016/j.jheap.2015.04.006Journal of High
Energy Astrophysics 7 (2015) 148.
Broderick_2014
A.E. Broderick, T. Johannsen, A. Loeb and D. Psaltis, Testing
the No-hair Theorem with Event Horizon Telescope Observations of Sagittarius
A*, https://doi.org/10.1088/0004-637X/784/1/7The Astrophysical Journal 784 (2014) 7 [https://arxiv.org/abs/1311.55641311.5564].
Psaltis_2016
D. Psaltis, N. Wex and M. Kramer, A Quantitative Test of the
No-hair Theorem with Sgr A* Using Stars, Pulsars, and the Event Horizon
Telescope, https://doi.org/10.3847/0004-637X/818/2/121The Astrophysical Journal
818 (2016) 121
[https://arxiv.org/abs/1510.003941510.00394].
Wang_2022
D. Wang, Shaving the Hair of Black Hole with Sagittarius A^* from
Event Horizon Telescope,
https://doi.org/10.48550/arXiv.2205.08026arXiv e-prints (2022)
arXiv:2205.08026 [https://arxiv.org/abs/2205.080262205.08026].
Isi_2019
M. Isi, M. Giesler, W.M. Farr, M.A. Scheel and S.A. Teukolsky,
Testing the No-Hair Theorem with GW150914,
https://doi.org/10.1103/PhysRevLett.123.111102Phys. Rev. Lett. 123 (2019) 111102 [https://arxiv.org/abs/1905.008691905.00869].
Wang_2022_gw
K. Wang, Retesting the no-hair theorem with GW150914,
https://doi.org/10.1140/epjc/s10052-022-10049-xEuropean
Physical Journal C 82 (2022) 125
[https://arxiv.org/abs/2111.009532111.00953].
Sadeghian_2011
L. Sadeghian and C.M. Will, Testing the black hole no-hair theorem
at the galactic center: perturbing effects of stars in the surrounding
cluster,
https://doi.org/10.1088/0264-9381/28/22/225029Classical and
Quantum Gravity 28 (2011) 225029
[https://arxiv.org/abs/1106.50561106.5056].
Qi_2021
H. Qi, R. O'Shaughnessy and P. Brady, Testing the black hole
no-hair theorem with Galactic Center stellar orbits,
https://doi.org/10.1103/PhysRevD.103.084006Phys. Rev. D 103 (2021) 084006 [https://arxiv.org/abs/2011.022672011.02267].
Hawking_1976
S.W. Hawking, Breakdown of predictability in gravitational collapse,
https://doi.org/10.1103/PhysRevD.14.2460Phys. Rev. D
14 (1976) 2460.
Raju_2022
S. Raju, Lessons from the information paradox,
https://doi.org/10.1016/j.physrep.2021.10.001Physics Reports
943 (2022) 1.
Kaplan_2019
D.E. Kaplan and S. Rajendran, Firewalls in general relativity,
https://doi.org/10.1103/PhysRevD.99.044033Phys. Rev. D 99
(2019) 044033 [https://arxiv.org/abs/1812.005361812.00536].
Hawking_1975
S.W. Hawking, Particle creation by black holes,
https://doi.org/10.1007/BF02345020Communications in
Mathematical Physics 43 (1975) 199.
LIGO_2015
J. Aasi, J. Abadie, B.P. Abbott, R. Abbott, T. Abbott,
M.R. Abernathy et al., Characterization of the LIGO detectors during
their sixth science run,
https://doi.org/10.1088/0264-9381/32/11/115012Classical and
Quantum Gravity 32 (2015) 115012
[https://arxiv.org/abs/1410.77641410.7764].
Acernese_2015
F. Acernese, M. Agathos, K. Agatsuma, D. Aisa, N. Allemandou,
A. Allocca et al., Advanced Virgo: a second-generation
interferometric gravitational wave detector,
https://doi.org/10.1088/0264-9381/32/2/024001Classical and
Quantum Gravity 32 (2015) 024001
[https://arxiv.org/abs/1408.39781408.3978].
Mandel_2016
I. Mandel and S. Mink, Merging binary black holes formed through
chemically homogeneous evolution in short-period stellar binaries,
https://doi.org/10.1093/mnras/stw379Monthly Notices of the
Royal Astronomical Society 458 (2015) .
Loeb_2016
A. Loeb, Electromagnetic Counterparts to Black Hole Mergers Detected
by LIGO, https://doi.org/10.3847/2041-8205/819/2/L21The Astrophysical Journal
819 (2016) L21
[https://arxiv.org/abs/1602.047351602.04735].
Perna_2016
R. Perna, D. Lazzati and B. Giacomazzo, Short Gamma-Ray Bursts
from the Merger of Two Black Holes,
https://doi.org/10.3847/2041-8205/821/1/L18The Astrophysical Journal 821 (2016) L18 [https://arxiv.org/abs/1602.051401602.05140].
Woosley_2016
S.E. Woosley, The Progenitor of GW150914,
https://doi.org/10.3847/2041-8205/824/1/L10The Astrophysical Journal 824 (2016) L10 [https://arxiv.org/abs/1603.005111603.00511].
Dai_2017
L. Dai, J.C. McKinney and M.C. Miller, Energetic constraints on
electromagnetic signals from double black hole mergers,
https://doi.org/10.1093/mnrasl/slx086Monthly Notices of the Royal Astronomical Society 470
(2017) L92 [https://arxiv.org/abs/1611.007641611.00764].
Fedrow_2017
J.M. Fedrow, C.D. Ott, U. Sperhake, J. Blackman, R. Haas,
C. Reisswig et al., Gravitational Waves from Binary Black Hole
Mergers inside Stars,
https://doi.org/10.1103/PhysRevLett.119.171103Phys. Rev. Lett. 119 (2017) 171103 [https://arxiv.org/abs/1704.073831704.07383].
Kimura_2017
S.S. Kimura, S.Z. Takahashi and K. Toma, Evolution of an accretion
disc in binary black hole systems,
https://doi.org/10.1093/mnras/stw3036Monthly Notices of the Royal Astronomical Society 465
(2017) 4406 [https://arxiv.org/abs/1607.019641607.01964].
Zhang_2016
B. Zhang, Mergers of Charged Black Holes: Gravitational-wave Events,
Short Gamma-Ray Bursts, and Fast Radio Bursts,
https://doi.org/10.3847/2041-8205/827/2/L31The Astrophysical Journal 827 (2016) L31 [https://arxiv.org/abs/1602.045421602.04542].
Lyutikov_2016
M. Lyutikov, Fermi GBM signal contemporaneous with GW150914 - an
unlikely association,
https://doi.org/10.48550/arXiv.1602.07352arXiv e-prints (2016)
arXiv:1602.07352 [https://arxiv.org/abs/1602.073521602.07352].
Parikh_2000
M.K. Parikh and F. Wilczek, Hawking Radiation As Tunneling,
https://doi.org/10.1103/PhysRevLett.85.5042Phys. Rev. Lett. 85
(2000) 5042 [https://arxiv.org/abs/hep-th/9907001hep-th/9907001].
Reynolds_1990
R.J. Reynolds, The Low Density Ionized Component of the Interstellar
Medium and Free-Free Absorption at High Galactic Latitudes, in Low
Frequency Astrophysics from Space, N.E. Kassim and K.W. Weiler, eds.,
vol. 362, p. 121 (1990), https://doi.org/10.1007/3-540-52891-1DOI.
Heisenberg_1936
W. Heisenberg and H. Euler, Consequences of Dirac's theory of
positrons, https://doi.org/10.1007/BF01343663Z. Phys.
98 (1936) 714
[https://arxiv.org/abs/physics/0605038physics/0605038].
Schwinger_1951
J. Schwinger, On gauge invariance and vacuum polarization,
https://doi.org/10.1103/PhysRev.82.664Phys. Rev. 82 (1951) 664.
Lieu_1998
R. Lieu, Y. Takahashi and T.W.B. Kibble, Gamma Ray Burst as Vacuum
Discharge of Super-Schwinger Electric Fields,
https://doi.org/10.48550/arXiv.astro-ph/9803072arXiv e-prints
(1998) astro [https://arxiv.org/abs/astro-ph/9803072astro-ph/9803072].
Goodman_1986
J. Goodman, Are gamma-ray bursts optically thick?,
https://doi.org/10.1086/184741The Astrophysical Journal 308 (1986)
L47.
Paczynski_1986
B. Paczynski, Gamma-ray bursters at cosmological distances,
https://doi.org/10.1086/184740The Astrophysical Journal 308 (1986)
L43.
Weinberg_1972
S. Weinberg, Gravitation and Cosmology: Principles and Applications of
the General Theory of Relativity (1972).
Piran_1993
T. Piran, A. Shemi and R. Narayan, Hydrodynamics of Relativistic
Fireballs, https://doi.org/10.1093/mnras/263.4.861Monthly Notices of the Royal Astronomical Society
263 (1993) 861.
Kumar_2015
P. Kumar and B. Zhang, The physics of gamma-ray bursts &
relativistic jets,
https://doi.org/10.1016/j.physrep.2014.09.008Physics Reports
561 (2015) 1 [https://arxiv.org/abs/1410.06791410.0679].
Meszaros_2006
P. Meszaros, Gamma-Ray Bursts,
https://doi.org/10.48550/arXiv.astro-ph/0605208arXiv e-prints
(2006) astro [https://arxiv.org/abs/astro-ph/0605208astro-ph/0605208].
Piran_1999
T. Piran, Gamma-ray bursts and the fireball model,
https://doi.org/10.1016/S0370-1573(98)00127-6Physics Reports
314 (1999) 575
[https://arxiv.org/abs/astro-ph/9810256astro-ph/9810256].
Dermer_2013
C.D. Dermer, Sources of GeV Photons and the Fermi Results,
https://doi.org/10.1007/978-3-642-36134-0_3Saas-Fee Advanced
Course 40 (2013) 225
[https://arxiv.org/abs/1202.28141202.2814].
FermiGBM
The Fermi GBM Collaboration, Overview of the Fermi GBM, https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/Cicerone/Cicerone_Introduction/GBM_overview.html2020.
Aso_2013
Y. Aso, Y. Michimura, K. Somiya, M. Ando, O. Miyakawa, T. Sekiguchi
et al., Interferometer design of the KAGRA gravitational wave
detector, https://doi.org/10.1103/PhysRevD.88.043007Phys. Rev. D
88 (2013) 043007
[https://arxiv.org/abs/1306.67471306.6747].
GCN_26454
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S191216ap.gcn3GCN 26454, .
GCN_25752
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190915ak.gcn3GCN 25752, .
GCN_24098
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190412m.gcn3GCN 24098, .
GCN_24629
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190521r.gcn3GCN 24629, .
GCN_24948
LIGO and Virgo Collaborations, https://gcn.gsfc.nasa.gov/other/S190701ah.gcn3GCN 24948, .
GWTC3
R. Abbott, The LIGO Scientific Collaboration, the Virgo Collaboration,
the KAGRA Collaboration, T.D. Abbott, F. Acernese et al.,
GWTC-3: Compact Binary Coalescences Observed by LIGO and Virgo During
the Second Part of the Third Observing Run,
https://doi.org/10.48550/arXiv.2111.03606arXiv e-prints (2021)
arXiv:2111.03606 [https://arxiv.org/abs/2111.036062111.03606].
GWTC2
R. Abbott, Abbott, The LIGO Scientific Collaboration, T.D. the Virgo
Collaboration, F. Acernese, K. Ackley et al., GWTC-2.1: Deep
Extended Catalog of Compact Binary Coalescences Observed by LIGO and Virgo
During the First Half of the Third Observing Run,
https://doi.org/10.48550/arXiv.2108.01045arXiv e-prints (2021)
arXiv:2108.01045 [https://arxiv.org/abs/2108.010452108.01045].
Abbott_2016
B.P. Abbott, R. Abbott, T.D. Abbott, M.R. Abernathy, F. Acernese,
K. Ackley et al., Localization and Broadband Follow-up of the
Gravitational-wave Transient GW150914,
https://doi.org/10.3847/2041-8205/826/1/L13The Astrophysical Journal 826 (2016) L13
[https://arxiv.org/abs/1602.084921602.08492].
Connaughton_2016
V. Connaughton, E. Burns, A. Goldstein, L. Blackburn, M.S. Briggs,
B.B. Zhang et al., Fermi GBM Observations of LIGO Gravitational-wave
Event GW150914,
https://doi.org/10.3847/2041-8205/826/1/L6The Astrophysical Journal 826 (2016) L6 [https://arxiv.org/abs/1602.039201602.03920].
Canto_1977
J. Canto, On the density and energy of supernova remnants.,
https://ui.adsabs.harvard.edu/abs/1977A A....61..641C
Astronomy and Astrophysics 61 (1977) 641.
Reynolds_1992
R.J. Reynolds, The warm ionized medium,
https://doi.org/10.1063/1.44005AIP Conference Proceedings
278 (1992) 156
[https://aip.scitation.org/doi/pdf/10.1063/1.44005].
Miroshnichenko_2001
L.I. Miroshnichenko, Solar Cosmic Rays, vol. 260 (2001),
https://doi.org/10.1007/978-94-015-9646-610.1007/978-94-015-9646-6.
Longair_1992
M.S. Longair, High energy astrophysics. Vol.1: Particles, photons and
their detection (1992).
Longair_1994
M.S. Longair, High energy astrophysics. Vol.2: Stars, the galaxy and
the interstellar medium, vol. 2 (1994).
|
http://arxiv.org/abs/2307.05434v2 | 20230711165529 | Embedded symmetric positive semi-definite machine-learned elements for reduced-order modeling in finite-element simulations with application to threaded fasteners | ["Eric Parish", "Payton Lindsay", "Timothy Shelton", "John Mersch"] | math.NA | ["math.NA", "cs.NA"] |
We present a machine-learning strategy for finite element analysis of solid mechanics wherein we replace complex portions of a computational domain with a data-driven surrogate. In the proposed strategy, we decompose a computational domain into an “outer" coarse-scale domain that we resolve using a finite element method (FEM) and an “inner" fine-scale domain. We then develop a machine-learned (ML) model for the impact of the inner domain on the outer domain. In essence, for solid mechanics, our machine-learned surrogate performs static condensation of the inner domain degrees of freedom. This is achieved by learning the map from (virtual) displacements on the inner-outer domain interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary. We consider two such mappings, one that directly maps from displacements to forces without constraints, and one that maps from displacements to forces by virtue of learning a symmetric positive semi-definite (SPSD) stiffness matrix. We demonstrate, in a simplified setting, that learning an SPSD stiffness matrix results in a coarse-scale problem that is well-posed with a unique solution. We present numerical experiments on several exemplars, ranging from finite deformations of a cube to finite deformations with contact of a fastener-bushing geometry. We demonstrate that enforcing an SPSD stiffness matrix is critical for accurate FEM–ML coupled simulations, and that the resulting methods can accurately characterize out-of-sample loading configurations with significant speedups over the standard FEM simulations.
Embedded symmetric positive semi-definite machine-learned elements for reduced-order modeling in finite-element simulations with application to threaded fasteners
Eric Parish, Payton Lindsay, Timothy Shelton, John Mersch
August 12, 2023
============================================================
§ INTRODUCTION
Finite element analysis of component and assembly-level systems remains computationally intensive despite the tremendous advancements in algorithmic research and computing power that have occurred over the past several decades. This problem is exacerbated when the underlying problem is multiscale in nature, wherein the physical discretization of the governing equations requires resolving a wide range of length and/or time scales. One such example that motivates the present work is that of systems-level models that involve threaded fasteners. In such systems, fasteners can be an integral connector of many sub-components, and accurately modeling the behavior of the fastener is critical for many analyses. Unfortunately, directly resolving the full fastener within a finite element model is difficult: the geometries are challenging to mesh, the resulting meshes can require hundreds of thousands of degrees of freedom, and the required mechanics that need to be simulated are complex as they often involve contact, friction, etc. As an example, Figure <ref> depicts a schematic of a ratcheting mechanism; fasteners are an essential part of this complex mechanism and must capture relevant mechanics to properly assess quantities of interest. In general, directly resolving each fastener is often not feasible in systems-level models, and approaches capable of reducing this computational burden are needed.
Various approaches have been developed to reduce computational complexities in finite element methods. These methods include the variational multiscale method <cit.>, Guyan reduction <cit.>, Craig–Bampton reduction <cit.>, the generalized finite element method <cit.>, multiscale methods <cit.>, and reduced basis/proper-orthogonal decomposition reduced-order models <cit.>. Machine learning-based techniques have garnered significant attention recently and are the focus of the present literature review. “Smart" finite elements were introduced in <cit.>, wherein a machine-learned regression model was used to learn a direct relationship between the internal state of an element and its forces. The approach enforces frame indifference and conservation of linear and angular momentum and was shown to reduce the computational cost as opposed to a traditional finite element model. Refs. <cit.> proposed additional techniques for computing the tangent stiffness matrix of smart elements based on finite differences, automatic differentiation, and neural networks. In addition, <cit.> proposes a composite loss function with terms for both the element forces as well as the tangent stiffness. Similarly, <cit.> proposes nonlinear meta elements (i.e., element patches) using deep learning. Unlike <cit.>, this approach learns generalizable meta elements (where each meta element comprises multiple finite elements). The primary input to the regression model defining the meta element is the displacement field on the element boundary, while the outputs are the displacements, stresses, and forces within the meta element. Lastly, <cit.> explores quantifying uncertainties in machine-learned elements that map from displacements to forces by virtue of an extension to deep ensembles <cit.>.
[Figure: Schematic of a ratcheting mechanism. Fasteners are an integral part of this complex mechanism and must capture relevant mechanics to properly assess quantities of interest.]
The aforementioned approaches seek to learn a generalizable (meta-)element that can be applied repeatedly throughout the domain. Another similar body of work focuses on domain-decomposition in where a computational domain is divided into subdomains. Once partitioned, each subdomain can be solved independently, and a global solution is obtained by coupling the various domains, e.g., with the Schwarz method <cit.>.
Data-driven techniques have received significant attention in this field as well, where the general idea is to use a cheaper-to-evaluate surrogate to model a complex subdomain. Refs. <cit.>, for instance, proposed D3M and DeepDDM, both of which employ physics-informed neural networks (PINNs) to learn subdomain solutions conditioned on boundary data, and demonstrated their approaches on Poisson's equation and Schrodinger's equation. We note that D3M and DeepDDM employ PINNs for all subdomains, as opposed to a FEM for some of the subdomains. Refs. <cit.> propose domain decomposition strategies leveraging reduced basis and POD (Petrov)-Galerkin reduced-order models. These methods operate by projecting the finite element model onto low-dimensional subspaces computed, e.g., from training data. The different subdomains are then coupled via, e.g., Lagrange multipliers <cit.> or the Schwarz method <cit.>. Lastly, one final approach of relevance was proposed in <cit.> for multiscale problems, where the authors aim to resolve only the domain of interest by learning an interface condition leveraging long short-term memory (LSTM) networks and ideas from upwinding. The setup was demonstrated on the transient Burgers' equation and Euler equations in one dimension.
In the present work, we develop a machine learning strategy for replacing portions of a computational domain, such as threaded fasteners, with a data-driven surrogate. We develop the proposed strategy within the context of hypoelastic material models for static solid mechanics (SM) FEM simulations of bodies under finite deformations exposed to contact and friction; we leave the extension of the approach to nonlinear material models and dynamic simulations as future work. In the proposed strategy we decompose a computational domain into an “outer" domain that we resolve with an FEM, and an “inner" domain. We then develop a machine-learned model for the impact of the inner domain on the outer domain. In essence, for SM our machine-learned surrogate learns the map from displacements on the interface boundary to forces contributed by the inner domain to the outer domain on the same interface boundary (we refer to these forces as internal forces).
We propose two displacement-to-force mappings in the present work. First, we propose a direct displacement-to-force mapping with dimension reduction. In this approach, we identify low-dimensional subspaces for compact approximations to the interface displacements and internal forces via proper orthogonal decomposition (POD) <cit.>. A mapping between reduced coordinates for these fields is then learned via regression; i.e., for interface displacements 𝐮 and internal forces 𝐟, we learn a mapping 𝐟 = 𝐠(𝐮) where 𝐠 is the learned surrogate. The second approach we propose is a structure-preserving displacement-to-stiffness mapping with dimension reduction. In this approach, we again identify low-dimensional structure for the displacements and internal forces on the interface, but instead learn a displacement-to-force mapping in the form of a stiffness matrix, i.e., for interface displacements 𝐮 and internal forces 𝐟, we learn a mapping 𝐟 = 𝐊(𝐮) 𝐮 where 𝐊(𝐮) is a learned stiffness which can be a nonlinear function of the displacements. We further enforce the learned stiffness matrix to be symmetric positive semi-definite (SPSD)[We note that our ML model is only learning a boundary coupling term and enforcing SPSD structure is sufficient to guarantee that the full assembled stiffness matrix in the coarse-scale problem is symmetric positive definite.]. We show, in a simplified setting, that the addition of this SPSD model results in a coarse-scale problem that is symmetric positive definite (SPD) and well-posed. Further, enforcing SPSD structure allows our ML model to be interfaced with common SM solvers such as conjugate gradient. A thematically similar idea was pursued in Ref. <cit.> within the context of learning constitutive relations. As will be shown, we find this latter approach is critical for successful integration of the method into our finite element solver.
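As a minimal sketch of how SPSD structure can be embedded by construction (our illustration; the specific regression architecture used in this work may differ), a model can emit the entries of a lower-triangular factor L and assemble K = L L^T, which is SPSD for any real-valued output:

import numpy as np

def assemble_spsd(chol_entries, n):
    # Build K = L L^T from n(n+1)/2 predicted lower-triangular entries;
    # K is symmetric positive semi-definite for any real-valued input.
    L = np.zeros((n, n))
    L[np.tril_indices(n)] = chol_entries
    return L @ L.T

# e.g., if g is a regression model mapping reduced interface displacements
# u_hat (dimension n) to n(n+1)/2 factor entries, the predicted internal
# force is f_hat = assemble_spsd(g(u_hat), n) @ u_hat
rng = np.random.default_rng(0)
n = 4
K = assemble_spsd(rng.standard_normal(n * (n + 1) // 2), n)
assert np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() > -1e-12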
The current work has commonalities with several existing efforts. For example, the present approach can be viewed as a smart element or a “meta-element" strategy in which we are learning a meta-element for our inner domain(s). This synergy is particularly relevant if the inner domain is a component which is repeated throughout the computational domain, like a threaded fastener. We propose a more limited approach than <cit.>, however: we aim to remove certain hard-to-resolve portions of a domain rather than develop a general smart element. In this sense, our proposed methodology can be interpreted as a type of Guyan reduction that is applicable to the nonlinear regime. Second, our approach can be viewed as a domain-decomposition method, where we employ an ML model to simulate the impact of fine-scale domain on the coarse-scale domain. Comparing the present work to the literature, we highlight several novelties:
* We propose regression models equipped with a dimension reduction strategy to reduce both the training cost of the model as well as the number of datum required to learn an accurate and generalizable model. This strategy enables our approach to be applied to systems where the ML model must predict thousands of degrees of freedom.
* We propose two types of displacement-to-force mappings: one that directly maps from displacement to forces, and one that maps from displacements to forces by virtue of learning an SPSD stiffness matrix. This second approach is novel. Refs. <cit.>, for example, only consider direct displacement-to-force mappings, and the stiffness matrix associated with these mappings is extracted via, e.g., automatic differentiation. By directly learning a stiffness matrix we can enforce structure in our learned models. Our numerical experiments demonstrate that this embedded structure results in increased robustness. Additionally, the proposed displacement-to-stiffness mapping addresses the issue of stiffness computation studied in <cit.>. In our setting the stiffness is a direct output of the neural network and does not need to be probed with finite differences, computed via back propagation, or estimated with an auxiliary network.
This manuscript proceeds as follows. In Section <ref> we provide the general formulation for the governing equations of linear elasticity, while in Section <ref> we present an equivalent multiscale formulation. Section <ref> outlines our machine-learning framework, and Section <ref> provides a brief commentary on the properties of the model when applied to the equations of linear elasticity in one dimension. Section <ref> provides numerical experiments, while Section <ref> provides conclusions and perspectives.
§ GENERAL FORMULATION FOR LINEAR ELASTICITY
To illustrate our approach, we outline our formulation for the equations of linear elasticity. While we consider bodies undergoing finite deformation with contact in our numerical experiments, we omit these details here for simplicity as extension of our approach to this setting is straightforward. We emphasize that we restrict our attention to systems that are static and path-independent.
The governing equations of linear elasticity can be written on the domain Ω as
-∇·σ(u) = f in Ω
g(u) = 0 on ∂Ω
where u: Ω→ℝ^3 is the displacement field, σ ≡ σ(u) ∈ ℝ^{3×3} is the stress tensor, f: Ω→ℝ^3 is a body forcing term, and we use g to denote a set of boundary conditions on the domain boundary ∂Ω such that (<ref>) is well-posed. We consider the standard weighted residual form of Eq. (<ref>), which reads as follows: find u ∈ V such that
∫_Ω σ(u) : ∇w dx - ∫_∂Ω (σ(u)·n)·w ds = ∫_Ω f·w dx, ∀ w ∈ V,
where V denotes a suitable vector-valued function space.
The Galerkin finite element method comprises a standard approach to solve Eq. (<ref>). To this end, let V^h ⊂ V denote a conforming trial space obtained via a finite element discretization of the domain into non-overlapping elements Ω_k, k=1,…,N_e, with the node set 𝒩, |𝒩| = N_n. The spatially discrete counterpart to Eq. (<ref>) reads: find u^h ∈ V^h such that
∫_Ω σ(u^h) : ∇w^h dx - ∫_∂Ω (σ(u^h)·n)·w^h ds = ∫_Ω f·w^h dx, ∀ w^h ∈ V^h.
Equation (<ref>) comprises a monolithic set of governing equations. It is often the case, however, that the body is made up of multiple subdomains, where each subdomain may require, e.g., different resolution requirements. One such example that motivates the present work is systems-level FEM models comprising threaded fasteners. In this case, the threaded fastener model can require a much finer discretization than the remainder of the domain and can be a computational bottleneck. To mitigate this issue we pursue a data-driven modeling approach.
§ DOMAIN DECOMPOSITION FORMULATION
We consider a decomposition of the domain into two non-overlapping[Formally, both domains are closed.] components such that Ω = Ω_C ∪ Ω_F with the interface Γ = Ω_C ∩ Ω_F as in Figure <ref>. We take Ω_C to be an “outer", coarse-scale domain which we aim to solve directly with a finite element method, while we take Ω_F to be an “inner", fine-scale domain. We aim to not resolve Ω_F, but rather model its impact on Ω_C. To this end, we introduce a decomposition of the displacement field
u = u_C + u_F
such that u = u_C on Ω_C, u = u_F on Ω_F, and u_C = u_F on Γ.
For simplicity, we will use the notation σ_C ≡ σ(u_C) and σ_F ≡ σ(u_F).
With this decomposition, a coarse-scale problem on Ω_C can be written as: find u_C ∈ V_C such that ∀ w ∈ V_C
∫_Ω_C σ_C : ∇w dx - ∫_∂Ω_C∖Γ (σ_C·n)·w ds = ∫_Ω_C f·w dx + ∫_Γ (σ_F·n)·w ds,
where V_C is a suitable vector-valued function space on Ω_C.
Our goal is to solve the coarse-scale equation. To this end, let V_C^h ⊂ V_C denote a conforming trial space obtained via a finite element discretization of the domain Ω_C into non-overlapping elements Ω_k, k=1,…,N_e, with the node set 𝒩, |𝒩| = N_n. For notational purposes, we additionally denote the interface (inner boundary) node set as 𝒩_Γ ⊂ 𝒩 with |𝒩_Γ| = N_Γ.
The spatially discrete counterpart to Eq. (<ref>) reads: find u_C^h ∈ V_C^h such that ∀ w^h ∈ V_C^h
∫_Ω_C σ_C : ∇w^h dx - ∫_∂Ω_C∖Γ (σ_C·n)·w^h ds = ∫_Ω_C f·w^h dx + ∫_Γ (σ_F·n)·w^h ds,
where, for notational simplicity, we use σ_C ≡ σ(u_C^h).
Denoting n_d as the number of degrees of freedom per node, the above can be written as a set of N = n_d N_n algebraic equations by introducing basis functions Ψ_i, i=1,…,N, such that span{Ψ_i}_{i=1}^N = V_C^h. The state can then be described as u_C^h = ∑_{i=1}^N x_i Ψ_i.
The system (<ref>) simplifies to a force balance at each node in the case of Lagrange finite elements.
To describe this we denote our set of basis functions as Φ_i(x), i=1,…,N_n, such that span{Φ_i}_{i=1}^{N_n} = V_C^h. Our approximation to the displacement field can be expressed as u_C^h(x) = ∑_{i=1}^{N_n} Φ_i(x) x_i, where x_i is the value of the displacement field at the ith node. Next, we write the coarse node set as the union of the outer boundary nodes, interior nodes, and inner boundary (interface) nodes, i.e., 𝒩 = 𝒩_∂ ∪ 𝒩_I ∪ 𝒩_Γ. With these basis functions we can express the displacement field as a sum of nodal values on each of these sets, i.e.,
u_C^h(x) = u^∂(x) + u^I(x) + u^Γ(x) ≡ ∑_{i∈𝒩_∂} Φ_i(x) x_i + ∑_{i∈𝒩_I} Φ_i(x) x_i + ∑_{i∈𝒩_Γ} Φ_i(x) x_i.
The governing equations can be simplified to
∫_Ω_C σ(u^∂ + u^I) : ∇Φ_i dx = ∫_∂Ω_C∖Γ (σ_C·n)·Φ_i ds + ∫_Ω_C f·Φ_i dx, i ∈ 𝒩_∂,
∫_Ω_C σ(u_C^h) : ∇Φ_i dx = ∫_Ω_C f·Φ_i dx, i ∈ 𝒩_I,
∫_Ω_C σ(u^I + u^Γ) : ∇Φ_i dx = ∫_Ω_C f·Φ_i dx + ∫_Γ (σ_F·n)·Φ_i ds, i ∈ 𝒩_Γ,
where we have used the fact that, for Lagrange bases on a mesh in which the outer boundary and the interface do not share elements, u^Γ has no support in the first equation and u^∂ has no support in the third.
The system (<ref>)-(<ref>) comprises a force balance at each node.
We observe that the system (<ref>) is unclosed as it requires specification of the boundary term (σ_F·n)·Φ_i, i.e., we need to specify the force exerted by the inner domain on the outer domain. We employ a machine-learning approach for this purpose.
§ MACHINE-LEARNING FRAMEWORK
Developing a method that only requires solving the coarse-scale system requires a model for the impact of the fine scales on the coarse scales. To this end, we propose a coupled FEM-ML formulation wherein the ML model is used to describe the impact of the fine scales on the coarse scales at the interface boundary. This section outlines our coupled FEM-ML formulation, describes the offline-online approach employed to construct the ML model, and lastly outlines several candidate machine-learning models.
§.§ FEM-ML coupled formulation
Closing the coarse-scale equation requires specification of the traction applied by the fine scales on the domain boundary, i.e., we require a model for the term ∫_( ·) · ds. We propose a machine learning approach for this purpose, i.e., we propose to develop a machine learning model (·) = [ _1(·)^T ,…,_(·)^T ]^T ∈ such that
_i( ) ≈∫_( ·) ·_i ds ∈,
for i=1,…,. In the above, _i() denotes the ML model output for the ith basis function for the DOFs (e.g., internal force DOFs in the x_1,x_2, and x_3 directions) and are the to-be-defined features fed into the ML model.
The coarse-scale equation then becomes: find ∈ such that for i=1,…,,
∫_ : ∇_i dx - ∫_ (·) ·_i ds = ∫_·_i dx + _i().
§.§ Offline-online workflow
To develop the data-driven model, we employ an offline–online workflow. In the offline phase, we perform a computationally intensive process that involves solving the full finite element model (<ref>) for different configurations (e.g., different boundary conditions) to obtain displacement fields {^i}_i=1^, where are the number of training samples.
These sample displacement fields are then processed to compute the target training matrix ∈× comprising the terms ∫_(_j ·) ·_i for i=1,…,, j=1,…,, where we have used _j to denote the stress tensor evaluated about the fine-scale displacement field for the jth sample.
Analogously, we collect the feature training matrix ∈×, where are the number of features. In this work we consider the case where the features are the nodal values of the displacement fields, i.e.,
= ∈.
§.§.§ Simplification for Lagrange finite elements
In the present work we exclusively consider application to Lagrange finite elements. In this case the term ∫_(·) ·_i dS is exactly zero for basis functions not on the interface , and we need only collect the target training matrix for the basis functions on the interface. This results in a target training matrix of size ∈×. Analogously, for features we employ only the nodal values of the displacement fields on the interface, i.e., = ^ such that ∈×.
§.§ Machine-learning model structure
Having defined the feature matrix and the response matrix we now outline two candidate approaches for the ML model .
Approach 1: Direct mapping with proper orthogonal decomposition.
The first approach we consider comprises learning a direct feature-to-response mapping with the addition of POD to reduce the dimensionality of the input feature space and the response space. This approach can be described as
: ^↦( ^T ^ ; ) + ,
where ∈ is a constant vector (accounting, e.g., for body forces, preload), ∈× and ∈× are basis matrices for the internal forces and displacements, respectively, obtained through POD, and are the reduced dimension for the displacements and forces, both of which are model hyperparameters, : ×→ comprises a machine-learned model that maps from reduced displacement states to reduced force states, and ∈ are model parameters (obtained from training). The basis matrices for forces and displacements are obtained by solving the optimization problems
= ^* ∈{× | [^*]^T ^* = 𝐈}arg min^* [^*]^T - _2^2,
= ^* ∈{× | [^*]^T ^* = 𝐈}arg min^* [^*]^T - _2^2,
where = - [ ⋯ ].
These optimization problems can be solved efficiently with a (thin) singular value decomposition (SVD) <cit.>. The algorithm is described in <ref>. We emphasize that by employing POD, the training of the model scales with the reduced dimensions and rather than the number of interface nodes.
A variety of regression models can be used to learn the reduced model . Here we investigate two approaches spanning different regimes of model complexity: linear regression and neural networks. We investigate linear regression due to its simplicity and interpretability, while we investigate neural networks due to their high capacity to learn complex functions. These two strategies are now described.
* Linear least squares (LLS). In a linear regression approach the reduced ML model is defined by a matrix ∈× such that the full model can be written as
: ^↦^T ^.
The reduced system matrix is defined by the solution to the linear least-squares problem
= ^* ∈×arg min^T - ^* ^T _2^2.
The least-squares problem can be solved efficiently, as it decomposes into an independent least-squares problem for each row of the system matrix; a minimal numerical sketch follows this list.
* Neural network (NN)
This method learns a feed forward neural network for the mapping from the (reduced) displacement field to the (reduced) force field. In this case the ML model is given by
: ^↦( ^T ^ ; )
where : ×→ comprises a neural network model that maps from the reduced displacement states to the reduced force states and ∈ are the network weights. These network weights are obtained from minimizing the loss function,
∈minimize∑_i=1^( ^T _i ; ) - ^T _i _2^2.
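As a concrete illustration of the LLS fit above, the following minimal sketch solves the reduced least-squares problem in a single call; the random snapshot matrices and variable names are placeholders of our own, standing in for the POD-projected training data.
[language=Python,caption=Sketch of the reduced LLS fit with placeholder data.]
import numpy as np

# Placeholder reduced snapshots: rows are training samples, columns are
# POD coefficients of the interface displacements/forces.
Ns, K_u, K_f = 100, 8, 8
U_red = np.random.randn(Ns, K_u)
F_red = np.random.randn(Ns, K_f)

# min_A || F_red - U_red A ||_F^2 decouples over the columns of A and is
# solved in one shot by a dense least-squares routine.
A, *_ = np.linalg.lstsq(U_red, F_red, rcond=None)
F_pred = U_red @ A  # reduced force prediction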
Approach 2: Stiffness matrix mapping with proper orthogonal decomposition.
The approach outlined above is the simplest learning framework one can employ. This approach, however, does not directly enforce any physical or mathematical properties of the underlying FEM. As will be shown later in this manuscript, we found that embedding the above models into a FEM code often led to unstable solutions. To address this challenge, we propose a second approach that embeds structure into the model by learning an SPSD stiffness matrix. We again use POD to reduce the dimensionality of the input feature and response spaces. This approach can be described as
: ^↦[ ( ^T ^ ; ) ] [ ( ^T ^ ; ) ]^T ^T ^ +
where : ×→× comprises a machine-learned model that maps from the reduced displacement states to the lower triangular part of a reduced stiffness matrix, and ∈×( + ) = orthogonalize( [ , ] ) is a basis matrix combining the (orthogonalized) union of the force basis and displacement basis. We note the orthogonalization can be performed efficiently with a QR decomposition.
Critically, Eq. (<ref>) can be written in matrix form as
( ^) = _ML( ^) ^ + ,
where _ML ( ^ ) = [_ML (^)]^T ∈× is an SPSD stiffness matrix with 𝗋𝖺𝗇𝗄( _ML ( ^ ) ) = +. This structure is directly enforced in the model by learning the lower triangular matrix . Enforcing our model to be SPSD makes the coupled FEM-ML system more amenable for conjugate gradient solvers, which are commonly employed in solid mechanics. Lastly, we note the use of the combined basis for both the force and displacement field, opposed to the individual bases employed in Approach 1. The reason for employing the combined basis is that both the force field and displacement field must employ the same basis for the model (<ref>) to be SPSD.
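To make this construction concrete, the sketch below assembles the combined basis with a QR decomposition and checks that the resulting stiffness model is SPSD by construction; the random bases and lower-triangular factor are placeholders for the POD bases and the trained model output.
[language=Python,caption=Sketch of the SPSD stiffness assembly with placeholder data.]
import numpy as np

N, K, Ks = 200, 8, 8
Phi_f, _ = np.linalg.qr(np.random.randn(N, K))   # placeholder force basis
Phi_u, _ = np.linalg.qr(np.random.randn(N, Ks))  # placeholder displacement basis

# Orthogonalized union of the two bases via a (thin) QR decomposition.
Psi, _ = np.linalg.qr(np.hstack([Phi_f, Phi_u]))

L = np.tril(np.random.randn(K + Ks, K + Ks))  # stands in for the learned factor
K_ml = Psi @ (L @ L.T) @ Psi.T                # rank-(K+Ks), SPSD by construction

assert np.allclose(K_ml, K_ml.T)
assert np.all(np.linalg.eigvalsh(K_ml) > -1e-10)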
A variety of regression models can again be used to learn the reduced stiffness matrix . We again consider a linear regression approach and a neural-network-based approach.
* Symmetric positive semi-definite linear least-squares (SPSD-LLS). This approach learns a linear model for the stiffness matrix. This method is described by
: ^↦^T ^T ^ +
where ∈× is defined by the optimization problem
= ^* ∈×arg min^T - ^* [ ^*]^T ^T _2^2.
By definition, SPSD-LLS learns a system matrix that is SPSD. To the best of our knowledge, no analytic solution exists to the optimization problem (<ref>), but it can be solved in a straightforward manner by casting it as a constrained optimization problem. In the present work we solve the optimization problem (<ref>) with the optimization package <cit.>; a hedged sketch of one such convex formulation follows this list.
* Symmetric positive semi-definite neural network regression (SPSD-NN)
This approach learns a feed forward neural network for the mapping from the (reduced) displacement field to the (reduced) lower triangular stiffness matrix. In this case the ML model is given by
: ^↦[ ( ^T ^ ; ) ] [ ( ^T ^ ; ) ]^T ^T ^ +
where : ×→× is the neural network model for the reduced lower stiffness matrix.
The network weights are obtained from minimizing the loss function,
∈minimize∑_i=1^[ ( ^T _i ; ) ] [ ( ^T _i ; ) ]^T ^T _i - ^T _i _2^2.
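As noted above, one convex way to pose the SPSD-LLS fit is to optimize directly over the SPSD product S = L L^T rather than over its lower-triangular factor. The sketch below illustrates this formulation; the use of cvxpy with the SCS solver and the random reduced snapshots are our assumptions for illustration, not necessarily the exact setup employed in this work.
[language=Python,caption=Sketch of a convex formulation of the SPSD-LLS fit.]
import numpy as np
import cvxpy as cp

K, Ns = 6, 100
U_red = np.random.randn(K, Ns)  # placeholder reduced displacements
F_red = np.random.randn(K, Ns)  # placeholder reduced forces

S = cp.Variable((K, K), PSD=True)  # optimize over the SPSD matrix directly
prob = cp.Problem(cp.Minimize(cp.sum_squares(F_red - S @ U_red)))
prob.solve(solver=cp.SCS)

# Recover a lower-triangular factor; the small shift guards against round-off.
L = np.linalg.cholesky(S.value + 1e-12 * np.eye(K))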
In summary, we considered four model forms: a linear model (LLS) and nonlinear (NN) model that directly map from the interface displacements to the interface forces, as well as a linear model (SPSD-LLS) and nonlinear model (SPSD-NN) that directly map from interface displacements to an SPSD stiffness matrix.
We emphasize that the above approaches learn the mapping between displacements and forces with the purpose of removing a subdomain from the computational domain. A similar approach could be to learn, e.g., the strain–stress mapping on the domain boundary. This type of approach was employed in Refs. <cit.>, which leverage neural networks to learn the constitutive stress–strain relationship at every point in a computational domain. In the present work we directly pursue a displacement–force mapping for ease of implementation into our finite element solver. Specifically, the contribution of our ML model simply appears as an additional (state-dependent) source term in the residual of the governing equations (as opposed to, e.g., a stress field that needs to be integrated).
The above formulations are designed for path-independent static problems and will be incomplete for dynamic path-dependent problems (e.g., plasticity) where history effects can be important.
§ POSITIVE DEFINITENESS OF COARSE-SCALE MODEL IN A SIMPLIFIED SETTING
In this section we demonstrate that, in the simplified setting of linear elasticity in one-dimension with homogeneous boundary conditions, the SPSD stiffness-based models result in a global coarse scale system that is SPD and, as a result, have invertible system matrices with unique solutions. We further show that, for the direct displacement-to-force model, we cannot guarantee an invertible system with a unique solution. The results here can be extended to the multidimensional case without much difficulty.
The governing equations of linear elasticity in one dimension are
A E ∂^2 /∂ x^2 =
for x ∈ [0,L] where : [0,L] → is the displacement with boundary conditions (0) = (L) = 0. In the above, A ∈ ℝ^+ is the cross-sectional area, E ∈ ℝ^+ is the modulus of elasticity, and : [0,L] → is a body forcing term. Discretization of (<ref>) into N degrees of freedom with standard Lagrange finite elements results in the system
𝐊 = 𝐪
where
= AE
[ 2 -1 ; -1 2 -1 ; ⋱ ⋱ ⋱ ; -1 2 -1; -1 2; ],
𝐪 =
[ q_1; ⋮; q_N ].
We note that the system is SPD, i.e., ^T > 0, and as such is invertible such that the system (<ref>) has a unique solution.
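This property is straightforward to confirm numerically; the short sketch below assembles the tridiagonal stiffness matrix with the placeholder values A = E = 1 and checks its eigenvalues.
[language=Python,caption=Numerical check that the 1D stiffness matrix is SPD.]
import numpy as np

N = 8  # illustrative problem size
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)  # A = E = 1

assert np.allclose(K, K.T)                  # symmetric
assert np.all(np.linalg.eigvalsh(K) > 0.0)  # positive definite
u = np.linalg.solve(K, np.ones(N))          # hence a unique solution exists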
We now consider the domain decomposition formulation such that the first two and last two nodes are the “coarse-scale" nodes and the remaining inner nodes are the “fine-scale" nodes, i.e., = [_1,_2,_N-1,_N]^T ∈ ℝ^4 and = [_3,⋯,_N-2]^T ∈ ℝ^N-4 with boundary degrees of freedom = [ _2,_N-1]^T ∈ ℝ^2. The resulting coarse-scale problem is
= 𝐪 +
where
=
[ 2 -1 0 0; -1 1 0 0; 0 0 1 -1; 0 0 -1 2; ],
𝐪 =
[ q_1; q_2; q_N-1; q_N ],
are the coarse-scale stiffness matrices and forcing vectors and
=
AE
[ 0 0 0 0; 1 -1 0 0; 0 0 -1 1; 0 0 0 0; ][ _2; _1; _N-4; _3 ]
is the contribution of the interior degrees of freedom on the exterior degrees of freedom at the interface. We emphasize that this term depends on the (unknown) interior degrees of freedom. We further emphasize that the coarse-scale stiffness matrix is SPD such that ^T > 0.
Replacing from Eq. (<ref>) with one of the machine-learned models outlined in Section <ref> results in the coarse-scale problem
+ () = 𝐪.
We now provide a brief analysis of the models outlined in Section <ref>.
* Linear least-squares For this model the reduced system becomes
[ + _𝖬𝖫] = 𝐪 -
where
_𝖬𝖫 =
[ 0 0 0 0; 0 [ ^T ]_11 [ ^T ]_12 0; 0 [ ^T ]_21 [ ^T ]_22 0; 0 0 0 0; ].
As we place no constraints on the construction of , we cannot guarantee that _𝖬𝖫 is positive (semi-)definite, nor can we guarantee a unique solution to the system (<ref>) without knowledge of the operator ^T, which in general is not full rank. As a canonical example, we can consider the case where
^T =
[ -0.5 0; 0 -0.5 ],
which is an admissible solution to the optimization problem (<ref>). In this case the coarse-scale equation stiffness matrix becomes
=
[ 2 -1 0 0; -1 0.5 0 0; 0 0 0.5 -1; 0 0 -1 2; ]
which is singular; this failure is verified numerically in the sketch following this list.
We further observe that, in general, the ML contribution will not be symmetric. Thus, even if invertible, the system (<ref>) cannot in general be solved with the popular conjugate gradient method.
* Neural network. The neural network model results in the coarse-scale equation
+ ( ^T ^ ; ) = 𝐪 - .
Similar to the linear least-squares model, we cannot guarantee that the Newton iteration associated with (<ref>) will involve full-rank matrices. In general the matrices will not be SPD and solution via the conjugate gradient method is inappropriate.
* Symmetric positive semi-definite linear-least-squares The SPSD-LLS model results in the reduced system
[ + _𝖬𝖫] = 𝐪 -
where
_𝖬𝖫 =
[ 0 0 0 0; 0 [ ^T ^T ]_11 [ ^T ^T ]_12 0; 0 [ ^T ^T ]_21 [ ^T ^T ]_22 0; 0 0 0 0; ].
By construction, ^T _𝖬𝖫≥ 0 and, as a result, ^T[ + _𝖬𝖫] > 0. Thus, the coarse-scale system is SPD. By definition of SPD matrices, the system is invertible with a unique solution and may be solved with the conjugate gradient method.
* Symmetric positive semi-definite neural network. The SPSD-NN model results in the reduced system
[ + _𝖬𝖫() ] = 𝐪 -
where
_𝖬𝖫 =
[ 0 0 0 0; 0 [ ^T ^T ]_11 [ ^T ^T ]_12 0; 0 [ ^T ^T ]_21 [ ^T ^T ]_22 0; 0 0 0 0; ].
Again, by construction, ^T _𝖬𝖫≥ 0 and, as a result, the coarse-scale system is symmetric positive definite for any . The system is invertible with a unique solution and may be solved with the conjugate gradient method.
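As promised above, the failure mode of the direct linear least-squares model is easy to reproduce numerically; the sketch below evaluates the determinant of the coarse-scale stiffness matrix from the canonical counterexample (with AE = 1 for simplicity).
[language=Python,caption=Numerical check of the singular coarse-scale stiffness matrix.]
import numpy as np

K_coarse = np.array([[ 2.0, -1.0,  0.0,  0.0],
                     [-1.0,  0.5,  0.0,  0.0],
                     [ 0.0,  0.0,  0.5, -1.0],
                     [ 0.0,  0.0, -1.0,  2.0]])

print(np.linalg.det(K_coarse))  # ~0: singular, so no unique coarse-scale solution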
In summary, this section demonstrated that, in the simplified setting of linear elasticity in one-dimension, the SPSD stiffness-based models result in coarse-scale systems that are SPD. These systems have invertible system matrices with unique solutions. We further demonstrated that we cannot guarantee the systems emerging from the direct displacement-to-force mappings have unique solutions.
§ NUMERICAL EXAMPLES
We now investigate the performance of the proposed models on several exemplars, starting from a canonical cube undergoing finite deformations to a more complex fastener-bushing geometry undergoing finite deformations with contact.
§.§ Implementation in Sierra Solid Mechanics
The machine-learned models outlined in Section <ref> are implemented in the Sierra/SM code base, which is part of the Sierra simulation code suite developed at Sandia National Laboratories.
The ML implementation is built on top of the <cit.>,
which provides a C++ interface for and models.
For the direct displacement-to-force models,
the stiffness matrix associated with the ML element is computed via finite differences,
while for the displacement-to-stiffness models the stiffness matrix is directly output from the neural network.
For all cases, the nonlinear system of equations resulting from the FEM and FEM–ML models is solved via the nonlinear conjugate gradient method with a full tangent preconditioner. We employ hyperelastic material models for all examples and solve the governing equations with finite deformations under a quasi-static approximation. Contact is handled using an augmented Lagrangian algorithm.
We refer the interested reader to the theory manual for more details <cit.>.
We note that, while it would be interesting to assess the performance of more generic solvers for the non-SPSD models (e.g., generalized minimum residual),
these are not available in Sierra/SM given the superior performance of conjugate gradient for SM problems.
§.§ Training of ML models
All machine-learned models are trained in . For LLS, we directly solve the least-squares problem using . Given the analytic solution to the optimization problem we do not hold out any data in training. For SPSD-LLS, we solve the optimization problem with the splitting conic solver as implemented in the software <cit.>. We again do not hold out any data for training SPSD-LLS.
Both the NN and SPSD-NN models are implemented and trained in PyTorch. For training the networks we employ random shuffling and a standard 80/20 training/validation split of the data. The neural networks are trained with the ADAM optimizer for 35000 epochs. We employ an early stopping criterion if the 200-epoch running mean of the validation loss has not decreased. Unless otherwise noted, we employ a batch size of 500 with a learning rate schedule of = {1 × 10^-3, 2× 10^-4, 1 × 10^-4 , 5 × 10^-5, 2 × 10^-5}, where the learning rate is lowered after 500, 1000, 2000, 5000, and 15000 epochs. For network architecture, we employ fully connected neural networks with three hidden layers. The number of nodes in each layer is set to be equivalent to the size of the network input vector (e.g., the reduced basis dimension, K). Lastly, we employ ReLU for our nonlinear activation function. PyTorch code for both the standard neural network architecture and the SPSD neural network architecture is provided in <ref>.
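A minimal sketch of this training setup is given below; the two-layer model and random tensors are placeholders, and in practice the model would be the NN or SPD_NN module of the appendix acting on the reduced interface snapshots.
[language=Python,caption=Sketch of the training loop with early stopping.]
import torch
import torch.nn as nn

K = 10  # placeholder reduced dimension
model = nn.Sequential(nn.Linear(K, K), nn.ReLU(), nn.Linear(K, K))
x, y = torch.randn(500, K), torch.randn(500, K)              # placeholder data
x_tr, x_va, y_tr, y_va = x[:400], x[400:], y[:400], y[400:]  # 80/20 split

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
val_hist = []
for epoch in range(35000):
    opt.zero_grad()
    loss = torch.mean((model(x_tr) - y_tr) ** 2)
    loss.backward()
    opt.step()
    with torch.no_grad():
        val_hist.append(torch.mean((model(x_va) - y_va) ** 2).item())
    # early stopping: 200-epoch running mean of the validation loss not improving
    if epoch >= 400 and sum(val_hist[-200:]) >= sum(val_hist[-400:-200]):
        break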
The training stage for our examples involves solving a full FEM that directly resolves the fastener. The coarse and fine scales are then taken to be a subset of this full FEM discretization and the training data are extracted accordingly. In theory, one could employ non-conforming decompositions (e.g., perform the training on one discretization and learn an ML model that is applicable for other meshes via, e.g., interpolation). For simplicity, we do not consider this here. In practice, our training process consists of two phases. First, we solve the full FEM that directly resolves the coarse and fine scales. After this, we extract the fine-scale domain from the model and execute a post-processing step to compute the internal forces.
§.§ Multi-element deformed cube
We begin by considering quasi-static deformation of a cube on the domain ≡ [-1.5 𝗂𝗇,1.5 𝗂𝗇]^3. The “inner" fine-scale domain is defined as = (-0.5 𝗂𝗇 ,0.5 𝗂𝗇)^3 while the outer domain is defined as = -. For discretization, the domain is discretized into 15 × 15 × 15 uniformly sized cube elements, resulting in = 152 interface nodes. We employ standard trilinear Lagrange finite elements. Figure <ref> depicts the setup of the problem.
We employ homogeneous Dirichlet boundary conditions on the bottom boundary and Dirichlet boundary conditions on the top boundary, i.e.,
= 0 on Γ_B ,
= [ u_1, u_2,u_3 ] on Γ_U,
where Γ_B = [-1.5 𝗂𝗇,1.5 𝗂𝗇] × [-1.5 𝗂𝗇,1.5 𝗂𝗇] × {-1.5 𝗂𝗇} is the lower boundary and Γ_U = [-1.5 𝗂𝗇,1.5 𝗂𝗇]× [-1.5 𝗂𝗇,1.5 𝗂𝗇 ] × {1.5 𝗂𝗇} is the upper boundary. We employ a hypoelastic material model across the entire domain. For ∈ the material model is characterized by a Young's modulus of 28.5 × 10^6 with a Poisson's ratio of ν = 0.3. For ∈ the material model is characterized by a Young's modulus of 29 × 10^6 with a Poisson's ratio of ν=0.3. As a quantity of interest (QoI), we consider integrated reaction forces on the bottom of the cube, Γ_B.
For training we solve the cube problem for 16 different non-proportional quasi-static loading trajectories for t ∈ [0𝗌,1𝗌], where t is pseudo-time and a single loading trajectory comprises 100 time steps with the top boundary displaced by a fixed profile. This loading profile is prescribed (in inches) for ∈Γ_U as
_1(t,) = 0 0 ≤ t ≤
+ T < t < T
_3(t,) = 0 0 ≤ t ≤
+ T < t < T
where = = 0.5𝗌. Figure <ref> shows an example loading profile for = 0.3, =-0.1, =0.3, =0.1.
As a training set we examine a hyper-cube of the parameters = (-0.3,0.3), = (-0.1,0.1), = (-0.3,0.3), and = (-0.1,0.1). In all, this results in 1600 total quasi-static solves (16 parameter configurations with 100 time steps per configuration). For testing we examine four loading configurations as described in Table <ref>. We note that this setup comprises training runs that deform the body by up to 10% and testing configurations that deform the body up to 16%, both of which are beyond the small-strain region and result in non-linearities.
§.§.§ Offline training results
We first assess the reduced basis approximations as described in the optimization problems (<ref>) and (<ref>). As there is no body forcing term, we take = 0. Figure <ref> depicts the residual statistical energy (i.e., relative energy contained in the truncated singular values) associated with the reduced basis approximation[Note that the residual statistical energy bounds the error of the reduced basis approximation.]. We observe that both the interface force field and interface displacement field are well-represented with relatively few basis vectors. With just 10 basis vectors the residual statistical energy of both the interface displacement and force fields is less than 10^-9. This quick decay demonstrates that the interface can be characterized with relatively few degrees of freedom, and justifies the ansatz of performing the learning process in the reduced space.
Next, Figure <ref> shows the training error for the various ML models considered as a function of the reduced basis dimension, K, and the number of model parameters, N_w. We observe that LLS yields significantly more accurate results than SPSD-LLS. SPSD-LLS saturates in accuracy around = 10 while LLS continues to converge, reaching a training error of 10^-3. Surprisingly, we next observe that LLS results in more accurate models than NN. In theory, NN should be at least as accurate as LLS. We believe that, for this example, LLS performs better than NN because the data are almost linear (recall we employ a hypoelastic material model). Since LLS has an analytic solution, we are more easily able to optimize the model. We additionally note that we observe a “kink" in the convergence of NN at K=7. We believe that this kink is a result of the stochastic training of the NNs, and we could likely eliminate it by training an ensemble of models and employing cross-validation to select the best model (presently we train one model with early stopping, as described in Section <ref>). Lastly, we observe that SPSD-NN is the best performing method. SPSD-NN results in more accurate models than LLS for a given reduced basis dimension despite having far more parameters to train. We expect that SPSD-NN greatly outperforms NN due to the structure embedded in the model form. In particular, we are learning an SPSD stiffness matrix which is similar to the structure one would expect from theory.
§.§.§ Coupled FEM-ML results
We now examine a posteriori results where the ML model is coupled to the FEM solver. Figure <ref> shows QoI errors for the various models as a function of the reduced dimension. LLS results in accurate models for small reduced basis dimensions but quickly goes unstable as the basis dimension grows past K = 7. We emphasize that this instability is not observed in the offline training results and is likely a result of the interplay between the LLS-POD model and the solver. Despite being the least accurate model in training, SPSD-LLS results in stable and accurate models with relative errors for both QoIs of less than 1% for all reduced basis dimensions. NN results in inaccurate models that quickly go unstable for reduced basis dimensions K > 3. SPSD-NN is the most accurate and robust model. It is stable for all reduced basis dimensions and results in relative errors of less than 0.1% for K^* ≥ 6.
Next, Figure <ref> shows QoI results for all testing configurations. We plot results for the optimal configurations (i.e., the configuration leading to the lowest error) for each model. The optimal configurations are K=7, K^*=14, K=3, and K^* =10 for LLS, SPSD-LLS, NN, and SPSD-NN, respectively. We observe that all predictions lie on top of the truth data with the exception of the NN model, which deviates slightly from the truth near t=1.0 for testing configuration number 3. Overall, we observe that all models are accurate when they converge. Lastly, Figure <ref> shows the von Mises stress predicted by the SPSD-NN model (left) and full model (right) for training configuration #4. With the contours shown in the figure, we observe no noticeable difference between the FEM–ML and FEM-only solutions.
The results of this elementary example demonstrate the capability of the ML models to replace a subdomain, even when the behavior of the material is in a nonlinear large strain regime. We additionally observed that, while LLS and LLS-POD models were accurate in the offline phase, they rarely led to converged solutions when coupled with the FEM solver. The SPSD-LLS and SPSD-NN models, both of which enforced an SPSD stiffness matrix, were more robust when coupled to the FEM model.
§.§ Fastener undergoing loading with contact
We now consider a more complex example comprising a fastener-bushing geometry undergoing quasi-static radial loading.
This geometry is derived from a test setup which can be found in <cit.>. The NAS1351-3-20P fastener is modeled as a “plug" (fastener head and shank with no modeling of threads) with a 0.187 inch diameter and sits in the middle of the top bushing through hole, initially not in contact. The fastener is connected to the bottom bushing with a contiguous mesh. As the joint is loaded laterally, the 0.003 inch gap between the top bushing and fastener will close and the two volumes will be in contact. The problem is symmetric about the x_3-axis, and symmetry is enforced with a symmetric boundary condition. There is no preload in this case; preload will be considered in the next section. Figure <ref> depicts the problem. The full FEM mesh has N=115,292 degrees of freedom and only comprises half the domain. We remove both the fastener and the immediate domain around the fastener, which results in an interface with = 3348 nodes. We emphasize that contact occurs exclusively within the removed domain, and as a result our ML model must be able to accurately characterize contact. We employ homogeneous Dirichlet conditions on the exterior of the bottom bushing and Dirichlet boundary conditions on the top boundary. The quasi-static loading profile is given in inches for on Γ_U, t ∈ [0,1] by _1(t,) = β t, _3(t,) = 1/400 t; this loading configuration corresponds to radial loading (i.e., a pull at a constant angle) whose angle is a function of the parameter β.
We again employ a hypoelastic material model across the entire domain. For ∈ the material model is characterized by a Young's modulus of 28.5 × 10^6 with a Poisson's ratio of ν = 0.3. For ∈ the material model is characterized by a Young's modulus of 29 × 10^6 with a Poisson's ratio of ν=0.3. As QoIs, we consider the integrated reaction forces on the exterior of the bottom bushing in the x_1 and x_3 directions.
We consider 21 training configurations for β = {-0.005 + 0.0005 i}_i=0^20 and t ∈{0.01i}_i=1^101. This dataset comprises quasi-static trajectories with 101 time steps per trajectory for pulls ranging from approximately -65^∘ to 65^∘, which results in 2121 total quasi-static solves.
§.§.§ Offline training results
Figure <ref> demonstrates that the interface degrees of freedom are amenable to dimension reduction by displaying the residual statistical energy associated with the reduced basis approximation. As there is again no body forcing term, we take = 0.
As compared to the previous exemplar, the residual statistical energy in the force approximation decays slower. Overall, however, we again observe that both the interface force field and interface displacement field are well-represented with relatively few basis vectors. With just 10 basis vectors the residual statistical energy of the interface displacement field is less than 10^-10 and the interface force field is less than 10^-6. This quick decay again demonstrates that the interface can be characterized with relatively few dimensions, and justifies the ansatz of performing the learning process in the reduced space.
Figure <ref> shows the training error for the various ML models considered as a function of the reduced basis dimension, K, and the number of model parameters, N_w. We observe that SPSD-LLS results in the highest training errors and that increasing the basis dimension no longer results in lower errors after around = 8. In training, the standard LLS model is much more accurate than SPSD-LLS and results in accurate models that achieve a relative error around 10^-3. We observe that NN is more accurate than LLS for low reduced basis dimensions, but is surprisingly less accurate for larger basis dimensions. Given that the data are inherently nonlinear due to contact, we theorize that this result is due to LLS over-fitting to the data when there are many model parameters; note that LLS is analytically solved and is not subject to any optimization process like NN. For small reduced basis dimensions, where the models have fewer parameters, NN presumably outperforms LLS due to its ability to model nonlinear responses. Lastly, we observe that SPSD-NN is significantly more accurate than SPSD-LLS and is competitive with both LLS and NN. This result demonstrates that we are able to generate a structure-preserving SPSD stiffness matrix model with a comparable accuracy to LLS and NN. It is interesting to observe that the improvement between SPSD-NN and SPSD-LLS is far greater than between NN and LLS.
§.§ Coupled FEM-ML results
We now examine results for when the ML model is coupled to the FEM solver. We consider 9 runs for β = {-0.005, -0.00475,-0.0045,-0.00425,-0.0025,-0.00125,-0.001,-0.00025,0.000}. This set of runs contains configurations that were in the training set as well as novel testing configurations. Figure <ref> shows the convergence of the relative error (summed over all cases) for the predicted x_1-reaction and x_3-reaction forces for the various models as a function of the reduced basis dimension. We immediately observe that SPSD-NN is by far the best performing method. We observe that it remains stable and becomes more accurate as the model complexity grows; the most complex model tested led to relative errors of around 0.2%. We next observe that LLS and NN quickly go unstable as the model complexity increases. Further, even when stable, both methods associate with high relative errors around 50%. Lastly, we observe that SPSD-LLS remains stable as the reduced basis dimension grows, but similarly to Figure <ref>, the model is unable to improve as we allow it to have more parameters. This lack of improvement can be attributed to the fact that the data are nonlinear and the SPSD-LLS model cannot be overfit to the data in the same way that LLS can due to the structure we impose on the model. The SPSD-NN model does yield improved results as the reduced basis dimension grows.
Figure <ref> shows the relative error of the best performing model of each method as a function of loading angle for the various cases; the best performing models correspond to K=3, K^*=6, K=3, and K^*=18 for the LLS, SPSD-LLS, NN, and SPSD-NN models, respectively. We note that contact occurs in the FEM for all cases when, approximately, |α| > 40°. As expected, SPSD-NN is by far the best performing method and leads to relative errors of under 0.2% in all cases. SPSD-LLS is the next best performing method, but in general SPSD-LLS, LLS, and NN all associate with relatively high errors. This is particularly true when there is contact, in which case all of these methods associate with >10% error when it comes to predicting the x_1-reaction force.
Figure <ref> shows the reaction forces predicted by the coupled FEM-ML models for all configurations.
We observe that SPSD-NN is, by far, the best performing method and leads to predictions that lie on top of the truth values. SPSD-NN is able to correctly capture contact and correctly predict the change in the stiffness. For larger reaction forces, which correspond to larger (absolute) loading angles, SPSD-NN is able to capture the ramp-like behavior of the reaction force. The other nonlinear method, NN, is unable to characterize this behavior despite it performing similarly to SPSD-NN in training. For lower absolute loading angles, in which case contact does not occur and the response is mostly linear, SPSD-NN is still able to predict the correct stiffness without being “contaminated" by its ability to capture the increased stiffness once contact occurs. All other models are unable to capture this behavior. Both LLS and SPSD-LLS, which are linear models, are unable to model the contact non-linearity and simply bisect its behavior. This leads to an under-prediction of the stiffness in the post-contact regime and an over-prediction of the stiffness in the pre-contact regime. The standard NN model attempts to capture the contact but is clearly subject to instabilities throughout the solve.
Figure <ref> shows contour plots of the max von Mises stress throughout the outer bushing as predicted by the coupled FEM-ML model with the best performing model (left) and FEM-only model (right). For this range of von Mises stress contours, the coupled FEM-ML model leads to results that are visually identical to the FEM-only model. Lastly, Figure <ref> reports the relative CPU times of the FEM–ML reduced-order models as compared to the FEM–only full-order model. We note that we only show CPU times for the SPSD-LLS and SPSD-NN models given that LLS and NN are not stable for most configurations. We observe that the SPSD-LLS model results in speedups between 3-20x, depending on the model configuration, while SPSD-NN results in speedups between 5-25x, again depending on the model configuration. Interestingly, we observe that the SPSD-NN model is, on average, faster than the SPSD-LLS model. An investigation into this reveals that, in the conjugate gradient solver, converging the FEM–ML model with the SPSD-NN model requires, on average, significantly fewer iterations than the SPSD-LLS model. This result suggests that the SPSD-NN model is producing better conditioned stiffness matrices, and highlights adding penalties on the conditioning of the stiffness matrix as an interesting path for future work. Lastly, we highlight that we observe a slight increase in CPU time for SPSD-LLS as the reduced basis dimension is increased, suggesting that conditioning of the resulting stiffness matrix degrades as the basis dimension is increased. No such pattern is apparent for the SPSD-NN model.
§.§ Fastener undergoing three-dimensional radial loading with contact and preload
The final problem we consider is again a fastener-bushing geometry, but this time we consider a three-dimensional loading profile with an initial preloaded state. The problem configuration is shown in Figure <ref>. The full FEM mesh comprises N = 234,750 degrees of freedom. We remove the immediate domain around the fastener, which results in an interface with = 6840 nodes. We again emphasize that contact occurs exclusively within the removed domain, and as a result our ML model must be able to accurately characterize contact.
We again employ homogeneous Dirichlet conditions on the bottom boundary. The quasi-static loading profile for on Γ_U is given, in inches, by
_1(t,) =
β t
_2(t,) =
α t
_3(t,) =
1/400 t.
We consider 121 training configurations for β = {-0.005 + 0.001 i}_i=0^10, α = {-0.005 + 0.001 i}_i=0^10, and t ∈{0.01i}_i=1^101 𝗌. This dataset comprises quasi-static trajectories with 101 quasi-static time steps per trajectory for pulls ranging from approximately -65^∘ to 65^∘ in both directions. In addition, for this example we consider a preloaded state such that, at time t=0, the initial force on the interface is 𝐟_𝗂𝗇𝗍𝖾𝗋𝖿𝖺𝖼𝖾 = 𝐟_𝗉𝗋𝖾𝗅𝗈𝖺𝖽 0. This preload is applied through the use of an artificial strain in the axial direction of the fastener.
We employ the same material configuration as the previous two exemplars. For ∈ the material model is characterized by a Young's modulus of 28.5 × 10^6 with a Poisson's ratio of ν = 0.3. For ∈ the material model is characterized by a Young's modulus of 29 × 10^6 with a Poisson's ratio of ν=0.3. As a QoI, we again consider the integrated reaction forces on the exterior of the bottom bushing, but this time in the x_1, x_2, and x_3 directions.
For this complex exemplar we only investigate the performance of the SPSD-NN model given its superior performance over the other formulations. We additionally note that we employ a slightly different training configuration for this example: the batch size is 60 and our initial learning rate is 𝗅𝗋 = 2.5 × 10^-4. The remaining settings are as described in Section <ref>.
§.§ Offline training results
We again first assess the reduced basis approximations. To capture preload, the force offset vector is set to be the force state resulting from preload, i.e., = 𝐟_𝗉𝗋𝖾𝗅𝗈𝖺𝖽[The preload force vector is non-uniform and is extracted from the FEM simulation in the training phase. In future work, we plan to parameterize the preload force to handle different types of preload.]. Figure <ref> depicts the residual statistical energy associated with the reduced basis approximation. As compared to the previous examples, we observe a much slower decay in the residual statistical energy of the interface force field: 40 basis vectors are required to reach a residual statistical energy of 10^-6, while in the previous two exemplars this tolerance was achieved with only 10 basis vectors. This result demonstrates the more sophisticated physics present in this example. Despite the slower decay in singular values associated with the interface forces, we are still able to identify a subspace of a much reduced dimension capable of representing the majority of the statistical energy of the force and displacement interface fields.
§.§ Coupled FEM-ML results
We move directly to the coupled FEM–ML results. We consider a set of out-of-sample testing runs for β = {0.0005 + 0.001 i}_i=0^4, α = {0.0005 + 0.001 i}_i=0^4. Figure <ref> shows QoI predictions of K^*=50 across all testing samples. Focusing on the reaction force in the x_3 direction, we observe that the ML model is able to replicate the initial preloaded state by virtue of the constant offset vector. We further observe that the ML model is able to simulate the change in stiffness that occurs due to the initial preload around t ≈ 0.05. Next, examining the reaction forces in the x_1 and x_2 directions, we observe the ML model is able to accurately characterize the change in stiffness due to contact. The model correctly predicts both when contact occurs, as well as the magnitude of the resulting reaction force. We emphasize that all results are for out-of-sample configurations. Next, Figure <ref> shows the convergence of the reaction QoI errors as a function of the reduced basis dimension. We observe monotonic convergence in both the x_1 and x_2 reaction errors, and almost monotonic convergence in the x_3 reaction error.
Figure <ref> presents the relative wall-times of the FEM–ML coupled models as compared to the FEM-only model. We observe that, for all basis dimensions, the FEM–ML coupled systems result in over 100x speedups. We again do not observe a noticeable change in cost with increasing the reduced basis dimension. Lastly, Figure <ref> shows contour plots for the max von Mises stress as predicted by the FEM–ML, with the SPSD-NN model at a reduced basis dimension of K^* = 40, and FEM-only models. No differences in the results are distinguishable.
§ CONCLUSION
This work introduced a machine-learning strategy for finite element analysis of solid mechanics wherein we replaced hard-to-resolve portions of a computational domain with a data-driven surrogate. We proposed two types of data-driven surrogates: one that maps directly from the interface displacements to the interface forces, and one that maps from the interface displacements to the interface forces by virtue of a stiffness matrix that is enforced to be SPSD. We demonstrated, in a simplified setting, that this latter formulation results in a global coarse-scale problem that is symmetric positive-definite (SPD) which guarantees a unique solution and makes the form amenable to conjugate gradient solvers.
We presented numerical experiments across three exemplars spanning a range of physics. These exemplars demonstrated that, in general, direct displacement-to-force models are not robust when combined with a conjugate gradient solver. At this time it is not clear if this lack of robustness is due to the interplay between the model form and the conjugate gradient solver, or if rather it is due to the resulting problem being ill-posed. Even if the lack of robustness can be attributed to the conjugate gradient solver, in which case improved performance could be obtained, e.g., by switching to a generalized minimal residual solver, this is still a significant issue given the wide use and effectiveness of conjugate gradient-based solvers in SM.
Our numerical experiments demonstrated that the displacement-to-force approach via an SPSD stiffness matrix resulted in robust and efficient models. In all of our testing cases, the FEM-ML model with the SPSD-NN approach was able to predict QoIs with a relative error of less than 0.5%. On the final exemplar, which was the most complex test case, this formulation led to sub 0.25% errors with speed-ups of over 100x as compared to the standard FEM simulation.
The results of this work demonstrate the promise of using machine-learning for alleviating the computational burden associated with hard-to-resolve portions of a computational domain, like threaded fasteners, for traditional finite element methods. Follow on work will focus on aspects including (1) extension to more sophisticated material models, e.g., elastic-plastic, where history behavior may become important, (2) extension to component-level exemplars with fasteners, where the same fastener model is repeated throughout the domain, (3) adding additional physical constraints to our models, e.g., rigid body rotation, and (4) prediction of QoIs within the substituted subdomain, such as fastener stress and fastener failure.
§ ACKNOWLEDGEMENTS
This paper describes objective technical results and analysis.
The authors acknowledge support from Sandia Advanced Simulation and Computing projects 65755 and 103723.
Any subjective views or opinions that might be expressed in the paper do
not necessarily represent the views of the U.S. Department of Energy
or the United States Government. Sandia National Laboratories is a
multimission laboratory managed and operated by National Technology &
Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of
Honeywell International Inc., for the U.S. Department of Energy’s
National Nuclear Security Administration under contract DE-NA0003525.
siam
§ PYTORCH CODE
This appendix provides the PyTorch code used for defining the neural network and symmetric positive definite neural network models. Listing <ref> provides the PyTorch code for the vanilla neural network architecture, while Listing <ref> provides code for the SPSD neural network architecture.
[language=Python,caption=Code for neural network architecture used for the Neural Network surrogate model.,label=listing_one]
import numpy as np
import torch
import torch.nn.functional as F
import torch.nn as nn

torch.set_default_dtype(torch.float64)

class NN(nn.Module):
    def __init__(self, nHiddenLayers, nNeuronsPerLayer, nFeatures, nOutputs):
        super(NN, self).__init__()
        forward_list = []
        self.numHiddenLayers = nHiddenLayers
        self.numLayers = self.numHiddenLayers + 1
        dim = np.zeros(nHiddenLayers + 2, dtype='int')
        dim[0] = nFeatures
        for i in range(1, nHiddenLayers + 1):
            dim[i] = nNeuronsPerLayer
        dim[-1] = nOutputs
        self.dim = dim
        input_dim = dim[0:-1]
        output_dim = dim[1::]
        for i in range(0, nHiddenLayers + 1):
            forward_list.append(nn.Linear(input_dim[i], output_dim[i]))
        self.forward_list = nn.ModuleList(forward_list)
        self.activation = F.relu

    def forward(self, x):
        for i in range(0, self.numLayers - 1):
            x = self.activation(self.forward_list[i](x))
        x = self.forward_list[-1](x)
        return x
[language=Python,caption=PyTorch code for neural network architecture used for the SPSD Neural Network surrogate model.,label=listing_two]
import numpy as np
import torch
import torch.nn.functional as F
import torch.nn as nn

torch.set_default_dtype(torch.float64)

class SPD_NN(nn.Module):
    def __init__(self, nHiddenLayers, nNeuronsPerLayer, nFeatures, nOutputs,
                 BoxInit=False):
        super(SPD_NN, self).__init__()
        forward_list = []
        assert(nFeatures == nOutputs)
        idx = np.tril_indices(nOutputs)
        self.idx = idx
        self.nOutputs = nOutputs
        networkOutputSize = idx[0].size
        self.numHiddenLayers = nHiddenLayers
        self.numLayers = self.numHiddenLayers + 1
        dim = np.zeros(nHiddenLayers + 2, dtype='int')
        dim[0] = nFeatures
        for i in range(1, nHiddenLayers + 1):
            dim[i] = nNeuronsPerLayer
        dim[-1] = networkOutputSize
        self.dim = dim
        input_dim = dim[0:-1]
        output_dim = dim[1::]
        for i in range(0, nHiddenLayers + 1):
            forward_list.append(nn.Linear(input_dim[i], output_dim[i]))
        self.forward_list = nn.ModuleList(forward_list)
        self.activation = F.relu

    def forward(self, x):
        y = x * 1.
        for i in range(0, self.numLayers - 1):
            y = self.activation(self.forward_list[i](y))
        y = self.forward_list[-1](y)
        K = torch.zeros(y.shape[0], self.nOutputs, self.nOutputs)
        K[:, self.idx[0], self.idx[1]] = y[:]
        KT = torch.transpose(K, 2, 1)
        K = torch.matmul(K, KT)
        result = torch.einsum('ijk,ik->ij', K, x)
        return result[:, :]
§ PROPER ORTHOGONAL DECOMPOSITION
Algorithm <ref> provides the algorithm for computing the POD basis
used in this work. We note that the basis dimension K can be determined
from the decay of the singular values; for simplicity, we treat it as an
algorithm input.
The accuracy of the POD approximation can be bounded by the energy contained in the truncated singular values, ∑_k = K^N_sσ_k^2.
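A compact realization of the POD computation is sketched below; the function is our paraphrase of the algorithm, with the residual statistical energy returned alongside the basis.
[language=Python,caption=Sketch of the POD basis computation via a thin SVD.]
import numpy as np

def pod_basis(S, K):
    """Return the K-dimensional POD basis of the snapshot matrix S (N x Ns)
    and the residual statistical energy of the rank-K truncation."""
    U, s, _ = np.linalg.svd(S, full_matrices=False)  # thin SVD
    residual_energy = 1.0 - np.cumsum(s**2) / np.sum(s**2)
    return U[:, :K], residual_energy[K - 1]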
|
http://arxiv.org/abs/2307.04681v1 | 20230710163455 | The matrix permanent and determinant from a spin system | [
"Abhijeet Alase",
"Owen Doty",
"David L. Feder"
] | quant-ph | [
"quant-ph"
] |
Quantum Science Group, The University of Sydney, NSW 2006,
Australia
Institute for Quantum Science and Technology and Department of
Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada
Institute for Quantum Science and Technology and Department of
Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada
In contrast to the determinant, no algorithm is known for the
exact determination of the permanent of a square matrix that runs in time
polynomial in its dimension. Consequently, non-interacting fermions are
classically efficiently simulatable while non-interacting bosons are not,
underpinning quantum supremacy arguments for sampling the output distribution
of photon interferometer arrays. This work introduces a graph-theoretic
framework that bridges both the determinant and permanent. The only non-zero
eigenvalues of a sparse non-Hermitian operator M̆ for n spin-1/2
particles are the nth roots of the permanent or determinant of an n× n
matrix M, interpreting basis states as bosonic or fermionic occupation
states, respectively. This operator can be used to design a simple and
straightforward method for the classical determination of the permanent that
matches the efficiency of the best-known algorithm. Gauss-Jordan elimination
for the determinant of M is then equivalent to the successive removal of the
generalized zero eigenspace of the fermionic M̆, equivalent to the
deletion of some nodes and reweighting of the remaining edges in the graph such
that only n nodes survive after the last step. In the bosonic case, the
successive removal of generalized zero eigenspaces for M̆ is also
equivalent to node deletion, but new edges are added during this process, which
gives rise to the higher complexity of computing the permanent. Our analysis
may point the way to new strategies for classical and quantum evaluation of the
permanent.
The matrix permanent and determinant from a spin system
David L. Feder
August 12, 2023
=======================================================
§ INTRODUCTION
The permanent of a square matrix M of dimension n is the symmetric analogue
of
the usual determinant, but where the signatures of the permutations (the signs
appearing in the expansion of the function) are ignored. This quantity appears
in a wide variety of applications in pure mathematics and in physics, among
other disciplines. For example, the permanent enumerates the number of
perfect matchings of a bipartite graph, which has applications in
combinatorics <cit.>, chemistry <cit.>, and
physics <cit.>. The permanent arises in the identification of
multiple targets <cit.>, with applications to defense. In the
context of quantum computation and
information, the permanent is central to calculating matrix elements in linear
optics for many-photon systems <cit.>,
and for determining the entanglement of various permutation-invariant quantum
states <cit.>.
Despite the fact that both the permanent and the determinant yield the same
exponential number of terms, n!∼√(2π n)(n/e)^n for large n, the
determinant is efficiently
computable classically, i.e. scales as a polynomial in n. The well-known
Gaussian elimination approach scales as O(n^3), and the fastest current
algorithm scales as O(n^2.373) <cit.>.
In contrast, determining the permanent of a general matrix is
#P-hard, and that of a (0,1) matrix is
#P-complete <cit.>.
The discovery of a classically efficient algorithm for the permanent would have
profound consequences for the theory of computation, including
FP = #P <cit.>, an even stronger statement than the
famous P = NP conjecture. The runtime of the fastest known algorithm,
namely Ryser's algorithm, scales as O(n2^n) <cit.>. That said, the
permanent P_n of matrices with non-negative entries or with vanishing mean
can be approximated in time poly(n, 1/ϵ) using
randomized algorithms <cit.>, up to additive error
ϵ P_n, for arbitrary ϵ>0; likewise for positive semidefinite
matrices <cit.>.
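For reference, a direct transcription of Ryser's inclusion-exclusion formula is sketched below (our illustration); this naive version costs O(n^2 2^n) arithmetic operations, and updating the row sums with a Gray-code ordering of the subsets recovers the quoted O(n 2^n) scaling.
[language=Python,caption=Sketch of Ryser's formula for the permanent.]
import numpy as np
from itertools import combinations

def ryser_permanent(M):
    """perm(M) = (-1)^n sum_{S nonempty} (-1)^|S| prod_i sum_{j in S} M[i,j]."""
    n = M.shape[0]
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            total += (-1) ** k * np.prod(M[:, cols].sum(axis=1))
    return (-1) ** n * total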
The #P-hardness of computing the permanent was recast in the framework of
linear optics <cit.>, which motivated the realization that quantum
devices will always outperform classical algorithms in sampling the output
distribution of photons emerging from an optical interferometer apparatus, the
so-called Boson Sampling problem <cit.>. Numerous Boson Sampling
experiments have been conducted since then;
Refs. Arute2019,Zhong2020,Madsen2022 provide some recent examples.
In contrast, the ease of calculating the determinant implies that
non-interacting fermions are efficiently simulatable on a classical
computer <cit.>.
It was recently shown that the permanent of the matrix M can be computed as
the determinant of a family of matrices M̆ of minimum dimension
2^n-1 <cit.>. These matrices define
the adjacency of a directed n-dimensional hypercube graph, whose edge weights
correspond to elements of the matrix of interest, and with the first and last
vertices sharing the same label to form a cycle. It was subsequently noted that
these graphs encode an algebraic branching program <cit.>: the
product of edge weights on each of the n! possible branches corresponds to a
term in the expansion of the permanent.
The present work builds on the above construction by identifying a key feature:
the structure of the matrix M̆ coincides with the dynamics of n
spin-1/2 particles governed by a non-Hermitian operator. If the permanent
of M is non-zero, then the only non-zero eigenvalues of M̆ are the
nth roots of the permanent; alternatively, M̆^n diagonalizes into
n blocks labeled by the total spin, each of which has the permanent as the
only non-zero eigenvalue. Thus, the n-fold product of M̆ on a
fiducial state such as |0^⊗ n⟩ immediately yields
P_n|0^⊗ n⟩. The
n-sparsity of M̆ ensures that this can be effected on a classical
computer with n2^n arithmetic operations, matching the performance of Ryser's algorithm.
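To make the counting explicit, the following sketch (our illustration, not the
authors' code) applies the raising part of M̆ n times to |0^⊗ n⟩;
since the lowering term cannot act before the fully raised state is reached,
the amplitude accumulated on |1^⊗ n⟩ equals P_n.
[language=Python,caption=Sketch of the permanent from n sweeps of the spin operator.]
import numpy as np

def permanent_from_spin_walk(M):
    """The amplitude of |1...1> after n raising sweeps from |0...0> is perm(M);
    each sweep updates at most 2^n amplitudes n times, i.e. O(n 2^n) operations."""
    n = M.shape[0]
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0                              # fiducial state |0...0>
    for _ in range(n):
        new = np.zeros_like(psi)
        for i in np.nonzero(psi)[0]:
            h = bin(i).count("1")             # Hamming weight = total spin
            for j in range(n):
                if not (i >> j) & 1:          # raise spin j only if it is down
                    new[i | (1 << j)] += M[h, j] * psi[i]
        psi = new
    return psi[-1]                            # coefficient of |1...1>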
Interpreting the basis states as bosonic occupation states
yields the standard expression for the permanent in terms of products of
(hard-core) bosonic operators. Interpreting these instead as fermionic
occupation states immediately yields the determinant, with signed edge weights
in the graph. If M is a full-rank matrix, then Gaussian elimination for the
calculation of the determinant corresponds to successively projecting out the
generalized zero eigenvectors of M̆, so that after n iterations the
initial rank-deficient matrix of dimension 2^n-1 is reduced to an
n-dimensional full-rank matrix. From the perspective of the algebraic
branching program, each iteration deletes vertices and the edges incident to
them, and reweights the remaining edges, until only one path remains in the
cycle. This approach uncovers another close connection between fermions and the
determinant on the one hand, and between bosons and the permanent on the other.
This paper is organized as follows. The permanent and determinant are reviewed
in Sec. <ref>, and an example is provided for the representation of
the permanent as an algebraic branching program. Sec. <ref>
introduces the spin model that maps the problem of computing the permanent of
an n× n matrix M to the problem of computing the eigenvalues of a
2^n× 2^n matrix M̆, and provides a classical algorithm for
computing the permanent that matches the best current methods. The spin model
is expressed in terms of non-interacting fermions and hard-core bosons in
Sec. <ref>. In Sec. <ref>, we discuss the connection
between Gaussian elimination, generalized zero eigenspaces of M̆ and
its visualization on the associated graph. The prospects for the development of
a quantum algorithm for computing the permanent based on our approach are
discussed in Sec. <ref>.
§ REVIEW
§.§ Permanent and determinant
Consider the n× n matrix M, defined as
M = [ w_0,0    w_0,1    ⋯  w_0,n-1 ;
      w_1,0    w_1,1    ⋯  w_1,n-1 ;
      ⋮        ⋮        ⋱  ⋮ ;
      w_n-1,0  w_n-1,1  ⋯  w_n-1,n-1 ] .
The determinant and permanent of M are respectively defined as
D_n = |M| = det(M) ≡ ∑_σ∈ S_n sgn(σ) ∏_i=0^n-1 w_i,σ_i ;
P_n = |M|_P = perm(M) ≡ ∑_σ∈ S_n ∏_i=0^n-1 w_i,σ_i ,
where S_n is the symmetric group on the list {0,1,2,…,n-1}, σ
is a function that reorders this list (effects a permutation of the elements),
σ_i is the ith entry of the list after permutation, and
sgn(σ) = (-1)^N(σ) is the signature of the permutation,
where N(σ) is the number of inversions needed. While the expansion of
the determinant and permanent includes the same n! terms, the signs appearing
in the determinant allow for its efficient evaluation.
While exceedingly simple, the n=3 case is illustrative and will be revisited
throughout this work. The determinant is explicitly written
|M| = w_0,0(w_1,1w_2,2-w_1,2w_2,1)
- w_0,1(w_1,0w_2,2-w_1,2w_2,0)
+ w_0,2(w_1,0w_2,1-w_1,1w_2,0).
The Gaussian elimination algorithm uses pivoting to reduce the matrix to
row echelon form (i.e. an upper triangular matrix), so that the determinant is
the product of the diagonal elements. For reasons that will become clear in
Sec. <ref>, consider instead a reduction to a lower triangular
matrix. The first reduction yields
|M|=|
w_0,0' w_0,1' 0
w_1,0' w_1,1' 0
w_2,0 w_2,1 w_2,2|,
where
w_0,0' = w_0,0-w_0,2w_1,0/w_1,2;
w_0,1'=w_0,1-w_0,2w_1,1/w_1,2;
w_1,0' = w_1,0-w_1,2w_2,0/w_2,2;
w_1,1'=w_1,1-w_1,2w_2,1/w_2,2.
The second and last reduction yields
|M|=|
w_0,0” 0 0
w_1,0' w_1,1' 0
w_2,0 w_2,1 w_2,2|,
where
w_0,0”=w_0,0'-w_0,1'w_1,0'/w_1,1'.
The determinant is then
|M| = w_0,0”w_1,1'w_2,2
=(w_0,0'w_1,1'-w_0,1'w_1,0')w_2,2
= [(w_0,0-w_0,2w_1,0/w_1,2)
(w_1,1-w_1,2w_2,1/w_2,2)
- (w_0,1-w_0,2w_1,1/w_1,2)
(w_1,0-w_1,2w_2,0/w_2,2)]w_2,2.
While there are eight terms in the expansion, the two cross terms
(-w_0,2w_1,0/w_1,2)(w_1,1)w_2,2
and -(-w_0,2w_1,1/w_1,2)(w_1,0)w_2,2
cancel, leaving six unique terms in the expansion.
The sign structure of the determinant guarantees that these cancellations occur
for all values of n, which ensures that Gaussian elimination is
classically efficient. For the evaluation of the permanent, one cannot follow
the same procedure as above by simply eliminating all signs, because the cross
terms arising from expanding the final product [for example in
Eq. (<ref>)] will now add instead of cancelling. Our analysis
in Sec. <ref> provides insight into why this is the case.
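The reduction above generalizes readily; the following sketch (a hypothetical helper written for this discussion, assuming the pivots encountered do not vanish so that no row interchanges are needed) reduces each row with the row directly below it, exactly as in the definitions of the primed weights:

```python
# Lower-triangular Gaussian elimination; the determinant is the product
# of the diagonal of the resulting lower-triangular matrix.
import numpy as np

def det_lower_triangular(M):
    A = M.astype(float).copy()
    n = A.shape[0]
    for s in range(1, n):            # stage s clears column c = n - s
        c = n - s
        for i in range(c):           # row i is reduced using row i + 1
            A[i, :] -= (A[i, c] / A[i + 1, c]) * A[i + 1, :]
    return np.prod(np.diag(A))

M = np.random.rand(3, 3)
assert np.isclose(det_lower_triangular(M), np.linalg.det(M))
```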
§.§ Permanent as an algebraic branching program
Building on the work of Grenet and
others <cit.>, Hüttenhain and
Ikenmeyer <cit.> noted that the matrix permanent for n=3 can
be expressed as a binary algebraic branching program. The n! terms correspond
to branches, or routes, traversing between antipodes of the n-dimensional
hypercube, such that the product of edge weights for each branch corresponds to
a term in the expansion of the permanent. Fig. <ref> illustrates the
idea for the n=3 case, where the three main branches from the top to bottom
vertices (labeled in red) are explicitly shown. The edge weights are chosen so
that their products for each branch correspond to a term in the permanent;
cf. Eq. (<ref>) with signs removed. The branching program is the
analog, for the permanent, of the expansion of a determinant by matrix minors.
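A direct transcription of the branching program is the following sketch (helper name is ours), which enumerates the n! monotone bit-flip paths of the hypercube and multiplies the edge weights w_h,j along each one:

```python
# Each path flips one bit per step; the edge flipping bit j at Hamming
# weight h carries weight M[h, j]; the sum over paths is the permanent.
import numpy as np
from itertools import permutations

def permanent_by_paths(M):
    n = M.shape[0]
    total = 0.0
    for order in permutations(range(n)):   # order in which bits are flipped
        weight = 1.0
        for h, j in enumerate(order):      # at step h, flip bit j
            weight *= M[h, j]
        total += weight
    return total
```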
§ SPIN MODEL
§.§ Definition and structure
The binary algebraic branching program for the 3× 3
permanent <cit.> suggests a general construction for arbitrary
n. Suppose one has a system of spin-1/2 particles, located on sites
j=0,1,…,n-1. Each particle can access states |0⟩ and
|1⟩, corresponding to spin down and spin up respectively. The spin
model that is the central focus of the current work is defined by the operator
M̃=∑_ i∑_j=0^n-1w_h( i),jσ^+_j
| i⟩⟨ i|+∏_jσ^-_j,
where σ^+_i=|1⟩⟨ 0|_i and σ^-_i=|0⟩⟨ 1|_i
are site-dependent raising and lowering operators. The first sum is over all
n-bit strings i so that a complete and orthonormal basis of n-spin
states with dimension 2^n is represented by the unit vectors | i⟩
=|{0,1}⟩^⊗ n. The Hamming weight of the bitstring is denoted
by h( i), coinciding with the total n-particle spin. Evidently the
last term in Eq. (<ref>) is equivalent to
| 0⟩⟨ 1|.
The operator M̃ defined by Eq. (<ref>) corresponds to
the adjacency matrix for a weighted directed graph that effects transitions
from the | 0⟩ state to the | 1⟩ state via all
possible
single-spin raising operations, and then back to | 0⟩ again to
complete one cycle. The transition amplitudes are indexed by two integers: the
total Hamming weight of the initial state and the target site. With
σ^+|1⟩=0, the second index can never be repeated as the value of the
first index increases; thus, the first term in M̃ encodes all possible
transitions from | 0⟩ to | 1⟩ without repetitions.
Fig. <ref>(a) depicts M̃ for n=3, and includes the
vertex / state labelings for clarity. The orientation is chosen so that each
horizontal layer of the hypercube contains vertices labeled by bitstrings with
the same Hamming weight h.
As discussed in detail in what follows, it is convenient to define an
alternate encoding of the cyclic behavior of M̃ by eliminating
the transition | 0⟩⟨ 1|, and instead directly
transition from states with Hamming weight n-1 to the state
| 0⟩. The associated operator is
M̆=∑_ i'∑_j=0^n-1w_h( i),jσ^+_j
| i⟩⟨ i|
+∑_j=0^n-1w_n-1,j| 0⟩⟨ 1|σ_j^+,
where the prime on the first term denotes that the sum is over all bitstrings
but not including those with Hamming weight h( i)=n-1. In this case, the
basis state | 1⟩ is never occupied, and the Hilbert space
dimension is reduced to 2^n-1. This alternate operator is depicted in
Fig. <ref>(b).
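As an illustration, the operator M̆ can be assembled directly from Eq. (<ref>) as a sparse matrix; the sketch below (assuming NumPy/SciPy, with our own helper name and an integer bit-encoding of basis states, bit j of the integer being 1 if spin j is up) makes the n-sparsity explicit:

```python
# Build M-breve of dimension 2^n - 1; the all-ones state is never used.
import numpy as np
from scipy.sparse import lil_matrix

def build_M_breve(M):
    n = M.shape[0]
    dim = 2**n - 1
    A = lil_matrix((dim, dim), dtype=complex)
    for state in range(dim):
        h = bin(state).count("1")          # Hamming weight of the bitstring
        for j in range(n):
            if state & (1 << j):
                continue                   # sigma^+_j annihilates an up spin
            target = state | (1 << j)
            if target == 2**n - 1:
                target = 0                 # wrap h = n-1 states back to |0...0>
            A[target, state] = M[h, j]
    return A.tocsr()
```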
Consider next the (n+1)th (nth) power of M̃ (M̆), which
will be of central importance in what follows. The derivation is provided in
Appendix <ref>, and the result for M̃^n+1 is given in
Eq. (<ref>):
M̃^n+1 = ∑_j_0,…,j_n-1[
(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)| 0⟩⟨ 1|
+ (w_0,j_0σ^+_j_0)⋯(w_n-2,j_n-2σ^+_j_n-2)| 0⟩⟨ 1|
×(w_n-1,j_n-1σ^+_j_n-1)
+ …+| 0⟩⟨ 1|(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)].
The expression for (M̆)^n is identical save for the leading term in
Eq. (<ref>).
The above expression can be seen to be of block diagonal form as follows. Each
term in the expansion above is defined by the operators
∏_r=0^m-1σ^+_j_r| 0⟩⟨ 1|
∏_s=m^n-1σ^+_j_s, and is labeled by the index m=0,1,…,n.
For m=0, the operator is only | 0⟩⟨ 0|, i.e. a block
of dimension 1 defined by a single basis state with zero Hamming weight. The
m=1 case includes all operators of the form
σ^+_j_0| 0⟩⟨ 1|∏_s=1^n-1σ^+_j_s
=σ^+_j_0| 0⟩⟨ 1|∏_s=0^n-1σ^+_j_sσ^-_j_0
=σ^+_j_0| 0⟩⟨ 0|σ^-_j_0,
which corresponds to a block spanned by the n basis vectors defined by
σ^+_j| 0⟩, which are labeled by all bitstrings with unit
Hamming weight. Evidently, each block is indexed by the Hamming weight (or
total spin) m, and has dimension given by the binomial factor
(n m). It is convenient to express M̃^n+1 as the
direct sum
M̃^n+1=M̃_0⊕M̃_1⊕⋯⊕M̃_n
=⊕_m=0^nM̃_m,
where M̃_m corresponds to the block matrix labeled by m and is
defined as
M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1,
which has the form
M̃_m = ∑_j_0,…,j_n-1(w_0,j_0σ^+_j_0)
(w_1,j_1σ_j_1^+)⋯
× (w_m-1,j_m-1σ_j_m-1^+)| 0⟩⟨ 1|(w_n-1,j_n-1σ^+_j_n-1)
× (w_n-2,j_n-2σ^+_j_n-2)⋯(w_m,j_mσ^+_j_m),
as proven in Eq. (<ref>) in Appendix <ref>. Likewise,
M̆_m=M̆^m| 0⟩⟨ 0|M̆^n-m.
§.§ Eigensystem
Now turn to the eigenvalues and eigenvectors of the spin model, defined by
Eq. (<ref>) or its alternative expression Eq. (<ref>). A key
observation is that | 0⟩ is an eigenvector of M̃^n+1
or M̆^n. Consider the action of M̃^n+1 on
the state | 0⟩, which only involves the m=0 block:
M̃^n+1| 0⟩ = ∑_j_0,…,j_n-1
| 0⟩⟨ 1|(w_0,j_0w_1,j_1⋯ w_n-1,j_n-1)
×σ^+_j_0σ^+_j_1⋯σ^+_j_n-1
| 0⟩.
The action of σ^+_j_0σ^+_j_1⋯σ^+_j_n-1 defines
all possible n spin-flip paths from the | 0⟩ state to
| 1⟩, and each is weighted by the factor
w_0,j_0w_1,j_1⋯ w_n-1,j_n-1. This is precisely the algebraic
branching program discussed in Sec. <ref>; thus
M̃^n+1| 0⟩=M̃_0| 0⟩=P_n| 0⟩;
the eigenvalue is the permanent of M. Likewise,
M̆^n| 0⟩=P_n| 0⟩.
The permanent is also an eigenvalue of every other block of M̃^n+1.
Defining the block-m state
|ψ_m⟩=M̃^m| 0⟩,
one obtains
M̃_m|ψ_m⟩ = M̃^m| 0⟩⟨ 0|
M̃^n-m+1M̃^m| 0⟩
= M̃^m| 0⟩⟨ 0|M̃^n+1| 0⟩
=P_nM̃^m| 0⟩
= P_n|ψ_m⟩.
The operator M̃^n+1 therefore has n+1 degenerate eigenvalues
corresponding to the permanent, with associated eigenvectors
|ψ_m⟩=M̃^m| 0⟩. Likewise, the operator
M̆^n has n degenerate eigenvalues P_n and
associated eigenvectors M̆^m| 0⟩.
For the rest of the discussion in this section, we assume that P_n≠ 0.
Because M̃ is a cycle, if λ is an eigenvalue of
M̃^n+1, then the eigenvalues λ_j of M̃ must include
all (n+1)th roots of λ (see for example
Ref. <cit.>). For the present case λ=P_n, one
obtains λ_j=P_n^1/(n+1)e^-i2π j/(n+1), j=0,1,…,n; likewise,
the eigenvalues of M̆ are (P_ne^-i2π j)^1/n,
j=0,1,…,n-1. Given that M̆ has degeneracy n and therefore
only requires n powers to return the state | 0⟩ to itself, it is
slightly more convenient to work with M̆ in what follows.
The eigenvectors of M̆ with eigenvalues corresponding to the nth
roots of P_n can be written as
|ϕ_n(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆/P_n^1/n)^j|0^⊗ n⟩,
where k,j=0,1,…,n-1. The corresponding eigenvalues can be found
directly:
M̆|ϕ_n(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n
P_n^1/n(M̆/P_n^1/n)^j+1|0^⊗ n⟩
=e^-i4π k/nP_n^1/n∑_j=0^n-1e^i2π(j+1)k/n(M̆/P_n^1/n)^j+1|0^⊗ n⟩
=e^-i4π k/nP_n^1/n∑_j=1^ne^i2π jk/n(
M̆/P_n^1/n)^j|0^⊗ n⟩
=e^-i4π k/nP_n^1/n∑_j=0^n-1e^i2π jk/n(
M̆/P_n^1/n)^j|0^⊗ n⟩
+|0^⊗ n⟩-|0^⊗ n⟩
=e^-i2π k/nP_n^1/n|ϕ_n(k)⟩.
The eigenvalues are therefore
λ_k(M̆)=e^-i2π k/nP_n^1/n
=(e^-i2π kP_n)^1/n.
The derivation proceeds analogously for M̃, and one obtains
λ_k(M̃)=(e^-i2π kP_n)^1/(n+1).
The simplest case corresponds to k=0:
|ϕ_n(0)⟩=∑_j=0^n-1(M̆/P_n^1/n)^j
|0^⊗ n⟩,
with eigenvalue λ_0=P_n^1/n. Consequently, ±λ_0 would be the only real eigenvalues
if the elements of M were real and positive.
Remarkably, Eqs. (<ref>) and (<ref>) constitute
the only non-zero eigenvalues of M̆ and M̃, respectively.
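These spectral statements are easy to check numerically; a quick sanity test, assuming the build_M_breve and permanent_by_paths sketches introduced above, is:

```python
import numpy as np

n = 3
M = np.random.rand(n, n) + 1j * np.random.rand(n, n)
A = build_M_breve(M).toarray()
evals = np.linalg.eigvals(A)
nonzero = evals[np.abs(evals) > 1e-8]
assert len(nonzero) == n                              # only n non-zero eigenvalues
assert np.allclose(nonzero**n, permanent_by_paths(M)) # all are n-th roots of P_n
```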
The periodic nature of M̃ and M̆ gives rise to eigenvectors
that are expanded in a Fourier-like series, much like in a translationally
invariant system. In Eq. (<ref>) and Eq. (<ref>),
the indices j and k label `position' and `wavevector', respectively. In the
present case, the position is the index of the block, corresponding to the
Hamming weight or total spin, while the `wavevector' serves essentially the
same purpose as in uniform systems: as a canonically conjugate quantum number.
Conceptually, one can consider successive applications of M̃ or
M̆ as moving a walker from `site' | 0⟩ to `site'
M̃| 0⟩ or M̆| 0⟩, etc., one step (bit
flip) at a time, with all states sharing a given Hamming weight treated as
equivalent, until it again reaches its starting state (see also
Fig. <ref>).
Given that the determination of the matrix permanent corresponds to an
algebraic branching program from the state | 0⟩ to itself,
effecting the spin transitions in the opposite direction (i.e. reversing the
arrows in Fig. <ref>) corresponds to taking the adjoint (complex
conjugate transpose) of M̃ or M̆. Eq. (<ref>) then
becomes
(M̃^†)^n+1| 0⟩
=(M̆^†)^n| 0⟩=P_n^*| 0⟩.
One can then construct Hermitian operators
M̃_R = M̃^n+1+(M̃^†)^n+1,
M̃_I = i[M̃^n+1-(M̃^†)^n+1],
satisfying the eigenvalue equations
M̃_R| 0⟩=2Re(P_n)| 0⟩;
M̃_I| 0⟩=-2Im(P_n)| 0⟩.
Similar expressions apply to M̆. While the
operators (<ref>) and (<ref>) are arguably more physical,
their experimental realization could remain challenging given the complexity of
the description, Eq. (<ref>). Also, unlike the case for M̃^n+1
or M̆^n alone, the non-zero eigenvalues for the
remaining blocks of (<ref>) and (<ref>) are different from
2Re(P_n) and -2Im(P_n).
§.§ Classical Algorithm for the permanent
While the result (<ref>) is a statement about the eigenvalues, it
suggests a straightforward approach to the calculation of the permanent without
needing to determine the spectrum of M̃ or M̆. Rather, one
must only compute
M̃^n| 0⟩=P_n| 1⟩, M̃^n+1| 0⟩=P_n| 0⟩; M̆^n| 0⟩=P_n| 0⟩.
In other words, apply M̃ or M̆ successively to the state
| 0⟩ until all the amplitude is again concentrated on the state
| 0⟩, and read out the result.
The algorithm for the permanent then corresponds to an n-fold or (n-1)-fold
product of matrices with dimension (n i+1)×(n i) (i={0,1,…,n-1}). Each column of the ith
matrix contains exactly n-i non-zero elements, so that the matrices are
exponentially sparse. The total number of operations (multiplications and
additions) is
∑_i=1^n(n i)(2i)=n2^n.
In comparison, Ryser's algorithm requires a total of n2^n+1-(n+1)^2∼ 2n2^n operations
for large n <cit.>. The scaling of the number of operations in the
present case therefore matches that of the fastest-known algorithm, with a
straightforward implementation, which could make it useful for practical applications.
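A minimal sketch of this procedure (our own helper; the loop structure favors clarity over the exact n2^n operation count, and the naive path sum from the earlier sketch is used as a cross-check) propagates the amplitudes one Hamming-weight layer at a time:

```python
# amp holds one amplitude per bitstring; after layer h all weight sits on
# Hamming-weight-(h+1) states, and the final amplitude on the all-ones
# string equals perm(M) (the wrap back to |0...0> only relabels it).
import numpy as np

def permanent_layered(M):
    n = M.shape[0]
    amp = np.zeros(2**n, dtype=M.dtype)
    amp[0] = 1.0
    for h in range(n):
        new = np.zeros_like(amp)
        for state in range(2**n):
            if amp[state] == 0:
                continue
            for j in range(n):
                if not state & (1 << j):
                    new[state | (1 << j)] += M[h, j] * amp[state]
        amp = new
    return amp[-1]

M = np.random.rand(4, 4)
assert np.isclose(permanent_layered(M), permanent_by_paths(M))
```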
§ FERMIONIC AND BOSONIC REPRESENTATIONS
The spin model (<ref>) can be naturally represented in terms of
Schwinger bosons, and fermions via the Jordan-Wigner transformation. These are
discussed in the next two subsections.
§.§ Bosons
Spin-1/2 particles can be mapped to Schwinger bosons as follows:
σ_j^+ = σ_j^x-iσ_j^y=a_j^†b_j;
σ_j^- = σ_j^x+iσ_j^y=b_j^†a_j;
σ_j^z = a^†_ja_j-b^†_jb_j.
Each spin operator therefore involves two `species' of bosons, satisfying the
relations
[a_i,a_j^†]=δ_ij; [a_i,a_j] =[a_i^†,a_j^†]=0
and likewise for b-species bosons,
where [x,y]=xy-yx is the commutator. These are supplemented with the
unit-occupancy condition
a_j^†a_j+b^†_jb_j=1,
which specifies that each site is occupied by exactly one boson of either
species. The Schwinger approach therefore maps spins to hard-core two-species
bosons at exactly unit filling. The model (<ref>) expressed in terms of
Schwinger bosons is then
M̃_b=∑_ i∑_j=0^n-1w_h( i),ja_j^†b_j
| i⟩⟨ i|+∏_jb_j^†a_j,
where the bit in the string i is unity (zero) if occupied by a boson
of species a (b), and the zero state is
| 0⟩=∏_j=0^n-1b_j^†|𝒪⟩.
The graph associated with M̃_b is indistinguishable from that of
M̃, i.e. Fig. <ref> for n=3.
It is instructive to write the action of the nth power of M̃_b on
the zero state:
M̃_b^n| 0⟩ = (∑_j=0^n-1w_n-1,ja_j^†b_j)
(∑_j=0^n-1w_n-2,ja_j^†b_j)
× ⋯×(∑_j=0^n-1w_0,ja_j^†b_j)
∏_j=0^n-1b_j^†|𝒪⟩
= (∑_j=0^n-1w_n-1,ja_j^†)
(∑_j=0^n-1w_n-2,ja_j^†)
× ⋯×(∑_j=0^n-1w_0,ja_j^†)
|𝒪⟩.
In the second line above, all operators for the b-species bosons can be
omitted because each creation of an a-species boson must be accompanied by
the annihilation of a b-species boson, and after n powers of M̃_b
all sites have been accounted for. Furthermore, the hard-core condition acts in
the same way as a Pauli exclusion principle: if a b-species boson occupies
site j, then the b_j^† operator returns zero. Expansion of the terms
in Eq. (<ref>) then returns the permanent because the b-species
bosons all commute.
§.§ Fermions
The Jordan-Wigner transformation corresponds to mapping the spin operators to
`spinless' fermions:
σ_j^+ = exp(iπ∑_k=j+1^n-1f_k^†f_k)f_j^†;
σ_j^- = exp(iπ∑_k=j+1^n-1f_k^†f_k)f_j;
σ_j^z = 2f_j^†f_j-1,
where the site-dependent fermionic creation (f^†_j) and annihilation
(f_j) operators satisfy the anticommutation relations
{f_i,f_j^†}=δ_ij; {f_i,f_j} ={f_i^†,f_j^†}=0,
and {x,y}=xy+yx. The first of these automatically ensures the Pauli
condition forbidding double occupancy of sites; thus, basis states can
again be indexed by bitstrings i, but now where 0 (1)
signifies the absence (presence) of a fermion at position j. Canonical
ordering is assumed, where creation operators appear with indices in descending
order; for example
|1010⟩=f_2^†f_0^†|𝒪⟩,
where |𝒪⟩ denotes the particle vacuum.
The phases appearing in Eq. (<ref>) ensure that the fermions anticommute
on all sites; alternatively, they ensure the normal / canonical ordering of
basis states. For example:
f_0^†|1010⟩ = f_0^†f_2^†f_0^†|𝒪⟩=0;
f_1^†|1010⟩ = f_1^†f_2^†f_0^†|𝒪⟩=-f_2^†f_1^†f_0^†|𝒪⟩=-|1110⟩;
f_3^†|1010⟩ = f_3^†f_2^†f_0^†|𝒪⟩=|1011⟩.
The Jordan-Wigner transformation (<ref>) counts the number of fermions to
the right of (i.e. with index greater than) where the spin is flipped /
fermion is created, and multiplies the transition amplitude by -1 if this
number is odd. In this way, the negative signs arising from the fermionic
anticommutation are cancelled and the transition amplitudes all remain
positive. The model (<ref>) expressed in terms of fermions then becomes
M̃_ JW=∑_ i∑_j=0^n-1w_h( i),js_ i,j
f_j^†| i⟩⟨ i|+∏_jf_j,
where the function s_ i,j incorporates the Jordan-Wigner phases for
creation of a fermion at position j on a basis state with occupation indexed
by occupation state | i⟩ defined by bitstring i.
An explicit example is shown for the n=3 case in
Fig. <ref>(a). Consider the |100⟩→|110⟩ and
|010⟩→|110⟩ transitions. For the former transition, a fermion
is created in site 1, to the right of a fermion already in site 0, so there is
no additional Jordan-Wigner phase; likewise, the final state
|110⟩=f_1^†f_0^†|𝒪⟩ is already normal
ordered. Thus, the edge weight w_1,1 remains unchanged. For the latter
transition, the fermion created in site 0 is to the left of a fermion already
in site 1, which yields a negative contribution from the Jordan-Wigner
transformation, reflected in the signed edge weight -w_1,0 in
Fig. <ref>(a). At the same time, the final state
|110⟩=f_0^†f_1^†|𝒪⟩ requires one fermionic
anticommutation to bring it back to normal ordering, which cancels the negative
sign and effectively restores the total edge weight to its original value in
the spin representation. Thus, within the context of a binary branching
process, the sum of the path weights of Fig. <ref>(a) still
constitutes the permanent, despite the appearance of signed edge weights.
To construct an algebraic branching program for true fermions one either must
maintain all edge weights and keep track of the fermionic anticommutation
relations defining the occupation states, in which case the model is
M̃_f=∑_ i∑_j=0^n-1w_h( i),jf_j^†
| i⟩⟨ i|+∏_jf_j
and | i⟩ represent occupation states; or one must account for all
Jordan-Wigner phases to appropriately sign all edge weights but treat the
states | i⟩ instead as ordinary bitstrings, in which case the
model is instead
M̃_f, alt=∑_ i∑_j=0^n-1w_h( i),j
s_ i,jσ^+_j| i⟩⟨ i|+∏_jσ^-_j,
and now the σ^± are interpreted as classical bit-flip operators.
When expressing the fermionic model in terms of creation and annihilation
operators, Eq. (<ref>) is preferable, but
Eq. (<ref>) is more convenient in the graph adjacency matrix
representation. Now, Fig. <ref>(a) depicts a truly signed
binary branching
process, and the sum of the path weights constitutes the determinant, rather
than the permanent, of M for n=3. The w_0,0w_1,1w_2,2 path serves
as the reference, where the second indices for the weights in this product
constitute the integer list {012}. All other paths are characterized by an
overall minus (plus) sign if the integer list derived from the second index of
the weights for that path corresponds to an odd (even) number of inversions
of the reference list; for example, the odd permutations {021}, {102},
and {210} correspond to paths with negative total weights
-w_0,0w_1,2w_2,1, -w_0,1w_1,0w_2,2, and
-w_0,2w_1,1w_2,0, respectively. Thus, one expects that the non-zero eigenvalues
of the operator (<ref>) are set by the determinant, as will be
discussed further below.
Another noteworthy property of the fermionic graph is that it is unbalanced:
there is no vertex sign switching that can remove all of the minus
signs <cit.>; alternatively
expressed, there is no diagonal matrix whose entries are {1,-1} that can
map Eq. (<ref>) to a form without any s_ i,j factors.
(This is another way of stating that the determinant derived in this way cannot
be mapped to the permanent by a local unitary, though this is already
obvious as unitary transformations preserve the eigenvalues). There remains
the intriguing possibility that there is a non-unitary operator that can effect
the map, but this is not explored in the present work.
Similarly, it is not possible to map existing weights to their negatives in
order to map the determinant to the permanent. For the n=3 case, one could
reassign w_1,2→ -w̅_1,2 and w_2,1→ -w̅_2,1 to
remove the negative signs on all edges with these labels in
Fig. <ref>(a), but this still leaves signs on edges labeled
by w_1,1 which cannot be removed.
As in the bosonic case, it is worthwhile to express the action of the nth
power of M̃_f on the vacuum state:
M̃_f^n| 0⟩ = (∑_j=0^n-1w_n-1,jf_j^†)
(∑_j=0^n-1w_n-2,jf_j^†)
× ⋯×(∑_j=0^n-1w_0,jf_j^†)
|𝒪⟩.
This simple and apparently separable representation for the output state, as
products of similar terms, is possible because of the Pauli principle and the
anticommutation relations: any attempted creation of a fermion in an
already-occupied site is zero, and the signs of the final many-fermion states
will reflect the number of permutations required to express them in normal
ordering. Furthermore, the result is clearly the determinant D_n (or its
negative) rather than the permanent. The states (<ref>) and
(<ref>) reveal a close connection between the determinant and
the permanent expressed in terms of indistinguishable particles.
Consider explicitly the n=3 case:
M̃_f^3| 0⟩ = (w_2,0f_0^†+w_2,1f_1^†
+w_2,2f_2^†)
× (w_1,0f_0^†+w_1,1f_1^†+w_1,2f_2^†)
× (w_0,0f_0^†+w_0,1f_1^†+w_0,2f_2^†)
|𝒪⟩.
= w_2,2f_2^†(w_1,1f_1^†w_0,0f_0^†+w_1,0f_0^†
w_0,1f_1^†)
+ w_2,1f_1^†(w_1,2f_2^†w_0,0f_0^†
+w_1,0f_0^†w_0,2f_2^†)
+ w_2,0f_0^†(w_1,1f_1^†w_0,2f_2^†
+w_1,2f_2^†w_0,1f_1^†)|𝒪⟩
= [w_2,2(w_1,1w_0,0-w_1,0w_0,1)
+ w_2,1(-w_1,2w_0,0+w_1,0w_0,2)
+ w_2,0(-w_1,1w_0,2+w_1,2w_0,1)]
f_2^†f_1^†f_0^†|𝒪⟩
= D_3f_2^†f_1^†f_0^†|𝒪⟩.
Recapitulating the arguments of Sec. <ref> but for
M̃_f instead of M̃ or
M̆, one obtains that the only non-zero eigenvalues of M̃_f
are given by the (n+1)th roots of D_n.
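The same layered propagation used for the permanent, with the Jordan-Wigner sign attached to each bit flip, yields the determinant instead; the following sketch (our own helper name) can be cross-checked against a standard determinant routine:

```python
import numpy as np

def determinant_layered(M):
    n = M.shape[0]
    amp = np.zeros(2**n, dtype=M.dtype)
    amp[0] = 1.0
    for h in range(n):
        new = np.zeros_like(amp)
        for state in range(2**n):
            if amp[state] == 0:
                continue
            for j in range(n):
                if state & (1 << j):
                    continue  # Pauli principle: site j already occupied
                # Jordan-Wigner string: count occupied sites with index > j
                sign = (-1) ** bin(state >> (j + 1)).count("1")
                new[state | (1 << j)] += sign * M[h, j] * amp[state]
        amp = new
    return amp[-1]

M = np.random.rand(4, 4)
assert np.isclose(determinant_layered(M), np.linalg.det(M))
```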
§ ROW REDUCTIONS
The exponentially small rank of the matrices M̃ and M̆,
discussed in Section <ref>, suggests that it might be
possible to apply row reductions to reduce their dimension without
affecting the non-zero eigenvalues. Just as Gaussian elimination reduces a
matrix to upper (or lower) triangular form, so that the determinant (which
would otherwise require summing n! terms) can be evaluated by a product of
the diagonal elements, row reduction of M̃ or M̆ reduces the
n! paths of the algebraic branching program to a single path by deleting
vertices and reweighting edges. As shown below, row reductions in the fermionic
model correspond precisely to the Gaussian elimination approach to
evaluating the determinant. The bosonic version provides a roadmap for row
reductions to evaluate the permanent, but doesn't appear to provide a speedup
over the direct matrix multiplication method discussed in
Sec. <ref>.
As shown in Sec. <ref>, the blocks M̃_m of
M̃^n+1 and M̆^n have dimension
(n m) but are all unit rank, so that the ranks of
M̃^n+1 and M̆^n are n+1 and n, respectively. In
contrast, the matrices M̃ and M̆ are not block diagonal, and
their eigenvectors are no longer given by M̃^m| 0⟩ and
M̆^m| 0⟩, respectively. Consider M̆ for
concreteness. While the n non-zero eigenvalues correspond to the nth roots
of the permanent, the zero eigenvalues have multiplicity 2^n-1-n; thus, the
generalized kernel of M̆ is composed of generalized zero eigenvectors of rank 1
up to n-1. The set of linearly independent vectors spanning this defective
subspace must therefore be obtained sequentially. The standard procedure is to
obtain the set of r_m generalized zero rank-m vectors |v_i^(m)⟩,
1≤ i≤ r_m, such that M̆^m|v_i^(m)⟩
=0. The rank-nullity theorem ensures that
n+∑_m=1^n-1r_m=2^n-1.
In practice there is a more efficient iterative procedure to obtain the kernel.
First generate the reduced row echelon form (also known as the pivot matrix)
B_1 for M̆, via Gauss-Jordan elimination. For any rank-deficient
matrix such as M̆, the deviation of B_1 from the identity is driven
entirely by the r_1 rank-1 zero eigenvectors; thus the
(2^n-1-r_1)× (2^n-1) matrix B_1 annihilates the null space:
B_1|v_i^(1)⟩=0. One can then find a
(2^n-1)×(2^n-1-r_1) matrix A_1 such that M̆=A_1B_1; its matrix
elements coincide with those of M̆ but with r_1 columns removed
whose indices correspond to the location of the first non-zero element of each
|v_i^(1)⟩. The (2^n-1-r_1)×(2^n-1-r_1) matrix
M̆^(2)≡ B_1A_1 therefore has the same eigenvalues as
M̆ but now with r_1 fewer zeroes.
The rank-2 generalized eigenvectors are the solutions of
M̆^2|v_i^(2)⟩=A_1B_1A_1B_1|v_i^(2)⟩
=0,
for 1≤ i≤ r_2, which can be rewritten as
A_1M̆^(2)(B_1|v_i^(2)⟩)=0.
At the same time, the zero eigenvectors of M̆^(2) are the solutions
of
M̆^(2)|ṽ_i^(2)⟩=0.
Thus, with the identification
|ṽ_i^(2)⟩≡ B_1|v_i^(2)⟩,
Eq. (<ref>) is automatically satisfied by Eq. (<ref>),
and solving the latter is more efficient than the former due to the smaller
matrix dimension. It is straightforward to verify that the non-zero eigenstates
of interest from Eq. (<ref>),
|ϕ_n^(1)(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆/P_n^1/n)^j|0^⊗ n⟩,
are transformed into
|ϕ_n^(2)(k)⟩ = e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆^(2)/P_n^1/n)^j|0^⊗ n⟩
= B_1|ϕ_n^(1)(k)⟩.
The procedure is then repeated for M̆^(2)=A_2B_2. After n-1
iterations, the original rank-deficient (2^n-1)-dimensional matrix
M̆ is reduced to a full-rank n-dimensional matrix
M̆^(n-1) with eigenvectors
∏_i=1^n-1B_n-i|ϕ_n(k)⟩ and corresponding eigenvalues
P_n^1/n. As shown below, this procedure is equivalent to Gaussian
elimination of M for the evaluation of the determinant, and also provides an
equivalent systematic approach to the evaluation of the permanent.
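One round of this reduction can be sketched with a column-row factorization, which plays the role of the A_1B_1 splitting above (a schematic stand-in using SymPy's rref for generic, well-conditioned input; it does not reproduce the text's specific lower-triangular conventions):

```python
import numpy as np
import sympy as sp

def reduce_once(A):
    S = sp.Matrix(A)
    R, pivots = S.rref()                                  # Gauss-Jordan pivot matrix
    r = len(pivots)
    B = np.array(R[:r, :], dtype=complex)                 # plays the role of B_1
    C = np.array(S.extract(list(range(S.rows)), list(pivots)),
                 dtype=complex)                           # plays the role of A_1
    assert np.allclose(C @ B, np.array(S, dtype=complex)) # A = A_1 B_1
    return B @ C   # same non-zero spectrum, dimension reduced by the nullity

# Repeating reduce_once n-1 times on build_M_breve(M).toarray() shrinks the
# (2^n - 1)-dimensional operator to an n-dimensional full-rank matrix whose
# eigenvalues are the n-th roots of the permanent.
```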
§.§ Example: Three Fermions
Consider row reductions of M̃_f, alt,
Eq. (<ref>), for the specific case n=3, depicted in
Fig. <ref>. The (unnormalized) rank-1 generalized zero
eigenvectors can be written as
|v_1^(1)⟩ = |001⟩+w_1,1/w_1,2|010⟩
+w_1,0/w_1,2|100⟩;
|v_2^(1)⟩ = |011⟩-w_2,0/w_2,2|110⟩;
|v_3^(1)⟩ = |101⟩+w_2,1/w_2,2|110⟩,
so that one may eliminate vertices labeled by the bitstrings 001, 011, and
101. The matrix B_1 must satisfy B_1|v_i^(1)⟩
=0; a sufficient construction is
B_1 = I-|v_1^(1)⟩⟨ 001|-|v_2^(1)⟩⟨ 011|
-|v_3^(1)⟩⟨ 101|
= [ 1 0 0 0 0 0 0
0 -w_1,1/w_1,2 1 0 0 0 0
0 -w_1,0/w_1,2 0 0 1 0 0
0 0 0 w_2,0/w_2,2 0 -w_2,1/w_2,2 1 ],
where rows and columns indices are labeled by bitstrings with the least
significant bit on the right. Here, B_1 is expressed in the somewhat
unconventional lower-triangular reduced row echelon form. Likewise,
A_1 = M̆_f, alt\{⟨ 001|, ⟨ 011|,
⟨ 101|}
= [ 0 0 0 w_2,2
w_0,2 0 0 0
w_0,1 0 0 0
0 w_1,2 0 0
w_0,0 0 0 0
0 0 w_1,2 0
0 -w_1,0 w_1,1 0 ].
It is straightforward to verify that A_1B_1=M̆_f, alt. One then
obtains
M̆_f, alt^(2)=B_1A_1=[ 0 0 0 w_2,2
w_0,1' 0 0 0
w_0,0' 0 0 0
0 -w_1,0' w_1,1' 0 ],
where w_0,0', w_0,1', w_1,0', and w_1,1' coincide with the
reweighted terms in M derived from a first round of Gaussian elimination,
defined in Eqs. (<ref>) and (<ref>).
It is illuminating to view this first round as an operation on the graph
representing the binary branching process, as depicted in
Figs. <ref>(b) and (c). Vertices labeled by bitstrings
001, 011, and 101 are deleted, reducing the total number of branches from
six to two. The contributions to the determinant of the branches through the
deleted vertices are incorporated by reweighting remaining edges. For example,
the weight -w_1,2w_2,1 of the path from |100⟩ to |000⟩
through vertex |101⟩ is incorporated into the new path weight
w_1,1'w_2,2; similarly, the path from |010⟩ to |000⟩
through deleted vertex |011⟩ is incorporated in w_1,0'.
Perhaps surprisingly, these edge reweightings can also compensate for both of
the deleted paths from |001⟩ to |000⟩ through the two deleted
vertices |011⟩ and |101⟩. Crucially, for fermions, the total
path weights after the transformation are products of revised edge weights; as
discussed in Sec. <ref>, cancellation of signed terms ensure
that the total weights for the reduced branching process still coincide with
the determinant.
The remaining (unnormalized) rank-2 generalized zero eigenvector can now be
efficiently written as
|v^(2)⟩=|010⟩+w_1,0'/w_1,1'|100⟩,
so that one may eliminate the vertex labeled by the bitstring 010. The matrix
B_2 must satisfy B_2|v^(2)⟩=0:
B_2 = I-|v^(2)⟩⟨ 010|
= [ 1 0 0 0
0 -w_1,0'/w_1,1' 1 0
0 0 0 1 ].
Likewise,
A_2 = M̆_f, alt^(2)\⟨ 010|
= [ 0 0 w_2,2
w_0,1' 0 0
w_0,0' 0 0
0 w_1,1' 0 ].
Again, it is straightforward to verify that A_2B_2
=M̆_f, alt^(2). One
then obtains
M̆_f, alt^(3)=B_2A_2=[ 0 0 w_2,2
w_0,0” 0 0
0 w_1,1' 0 ],
where w_0,0” coincides with Eq. (<ref>). The eigenvalues of
M̆^(3) are the cube roots of D_3=w_0,0”w_1,1'w_2,2.
In this example, the second and final round of Gauss-Jordan elimination
corresponds to deleting the vertex labeled by bitstring 010, as depicted in
Fig. <ref>(d), yielding only one path from |000⟩ to
|000⟩ and the rescaled weight w_0,0”. The product of the edge
weights for this path, w_0,0”w_1,1'w_2,2 is precisely the product of
diagonal terms of M in lower-triangular form, Eq. (<ref>).
It is instructive to write the consequences of row reduction for the fermionic
representation of the eigenstate, Eq. (<ref>), for the example
considered above. After the first round, the state becomes
M̃_f^3| 0⟩
= w_2,2f_2^†(w_1,0'f_0^†+w_1,1'f_1^†)
× (w_0,0'f_0^†+w_0,1'f_1^†)|𝒪⟩,
using the Pauli principle. After the second round, one obtains
M̃_f^3| 0⟩ = w_2,2f_2^†w_1,1'f_1^†
w_0,0”f_0^†|𝒪⟩
=D_3f_2^†f_1^†f_0^†|𝒪⟩.
Thus, for fermions, no explicit expansion of the state (<ref>)
is required; rather, the initial product of factors with three terms is reduced
to a product of factors with two terms, and finally a product of single terms.
The general strategy is the same for all n. This reduction of the evaluation
of the determinant to a product of n terms is at the heart of its efficiency.
§.§ Example: Three Bosons
Consider next row reductions for the bosonic case, again using n=3 as an
example to illustrate the procedure for general n. The procedure works in
much the same way as for fermions, and is depicted in
Fig. <ref>. The initial graph is equivalent to that for
the original spin model, and is shown in Fig. <ref>.
The (unnormalized) rank-1 generalized zero eigenvectors of M̆_b,
Eq. (<ref>), can be written as
|v_1^(1)⟩ = |011⟩-w_2,0/w_2,2|110⟩;
|v_2^(1)⟩ = |101⟩-w_2,1/w_2,2|110⟩.
Comparing with Eq. (<ref>), one notices the similarity with
|v_2^(1)⟩ and |v_3^(1)⟩ in the fermionic case, and also
that there is no rank-1 zero eigenvector involving h=1 states.
The vertices labeled by the bitstrings 011 and 101 can be eliminated
by choosing the projector
B_1 = I-|v_1^(1)⟩⟨ 011|-|v_2^(1)⟩⟨ 101|
= [ 1 0 0 0 0 0 0
0 1 0 0 0 0 0
0 0 1 0 0 0 0
0 0 0 0 1 0 0
0 0 0 w_2,0/w_2,2 0 w_2,1/w_2,2 1 ],
and
A_1 = M̆_b\{⟨ 011|,⟨ 101|}
= [ 0 0 0 0 w_2,2
w_0,2 0 0 0 0
w_0,1 0 0 0 0
0 w_1,1 w_1,2 0 0
w_0,0 0 0 0 0
0 w_1,0 0 w_1,2 0
0 0 w_1,0 w_1,1 0 ].
Again, it is straightforward to verify that A_1B_1=M̆_b. One then obtains
M̆^(2)=B_1A_1=[ 0 0 0 0 w_2,2
w_0,2 0 0 0 0
w_0,1 0 0 0 0
w_0,0 0 0 0 0
0 x w_1,0' w_1,1' 0 ],
where w_1,0', and w_1,1' coincide with the expressions in
Eq. (<ref>) but with minus signs replaced with plus signs; and a new
term is introduced,
x=(w_1,0w_2,1+w_1,1w_2,0)/w_2,2.
The first round of row reductions, shown in Fig. <ref>(a),
corresponds to deleting two vertices in the h=2 layer but none in the h=1
layer, in contrast with the fermionic case. Deleting vertices in only
a single layer avoids generating paths with rescaled weights on two adjacent
edges, which would yield unphysical cross terms in their product (cf. the
discussion in Sec. <ref>). But this comes at a high cost:
vertices cannot be deleted in an adjacent layer simultaneously if they share
edges with vertices in the first layer, as is possible in the fermionic case.
This clearly decreases the efficiency of the reduction. Furthermore,
deleting vertices in one layer requires adding new edges from the remaining
vertex in that layer to all vertices in the next layer that had (now deleted)
edges; in this case, the additional edge has weight x.
The second and final round of row reductions in this example is governed by the
rank-2 generalized zero eigenvectors:
|v_1^(2)⟩ = |001⟩-x/w_1,1'|100⟩;
|v_2^(2)⟩ = |010⟩-w_1,0'/w_1,1'|100⟩.
As shown in Fig. <ref>(b), the vertices labeled by the
bitstrings 001 and 010 are eliminated:
B_2 = I-|v_1^(2)⟩⟨ 001|-|v_2^(2)⟩⟨ 010|
= [ 1 0 0 0 0
0 x/w_1,1' w_1,0'/w_1,1' 1 0
0 0 0 0 1 ],
and
A_2 = M̆^(2)\{⟨ 001|,⟨ 010|}
= [ 0 0 w_2,2
w_0,2 0 0
w_0,1 0 0
w_0,0 0 0
0 w_1,1' 0 ].
One then obtains
M̆^(3)=B_2A_2=[ 0 0 w_2,2
w_0,0' 0 0
0 w_1,1' 0 ],
where
w_0,0'=w_0,0+(w_0,1w_1,0'+w_0,2x)/w_1,1'.
As is shown in Fig. <ref>(b), two vertices in the h=1 layer
are now deleted, requiring a rescaling of the w_0,0 weight, and one obtains
a single path from |000⟩ to |000⟩, as desired.
The permanent is then
P_3=w_0,0'w_1,1'w_2,2,
which is expressed as a product of three single terms, much like the expression
of the determinant in Eq. (<ref>).
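This reduced form is easy to verify symbolically; the following SymPy sketch (with the primed weights and x as defined above) checks that the single surviving path indeed reproduces the n=3 permanent:

```python
import sympy as sp
from itertools import permutations

w = sp.symbols("w00 w01 w02 w10 w11 w12 w20 w21 w22")
W = lambda i, j: w[3 * i + j]

w11p = W(1, 1) + W(1, 2) * W(2, 1) / W(2, 2)   # sign-flipped w'_{1,1}
w10p = W(1, 0) + W(1, 2) * W(2, 0) / W(2, 2)   # sign-flipped w'_{1,0}
x = (W(1, 0) * W(2, 1) + W(1, 1) * W(2, 0)) / W(2, 2)
w00p = W(0, 0) + (W(0, 1) * w10p + W(0, 2) * x) / w11p

perm3 = sum(W(0, a) * W(1, b) * W(2, c) for a, b, c in permutations(range(3)))
assert sp.simplify(w00p * w11p * W(2, 2) - perm3) == 0
```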
§.§ Example: Four Bosons
Given the superficial similarities between row reductions for the bosonic and
fermionic cases when n=3, it is instructive to discuss the n=4 case to
gather a better understanding of why the permanent is nevertheless
exponentially more difficult to compute with this method.
The rank-1 generalized zero eigenvectors are
|v_1^(1)⟩ = |0011⟩-w_2,1/w_2,3|0110⟩
-w_2,0/w_2,2|1001⟩
+w_2,0w_2,1/w_2,2w_2,3|1100⟩
= (|01⟩-w_2,0/w_2,2|10⟩)_0,2(|01⟩-w_2,1/w_2,3|10⟩)_1,3;
|v_2^(1)⟩ = |0101⟩-w_2,2/w_2,3|0110⟩
-w_2,0/w_2,1|1001⟩
+w_2,0w_2,2/w_2,1w_2,3|1010⟩
= (|01⟩-w_2,0/w_2,1|10⟩)_0,1(|01⟩-w_2,2/w_2,3|10⟩)_2,3;
|v_3^(1)⟩ = |0111⟩-w_3,0/w_3,3|1110⟩
= (|01⟩-w_3,0/w_3,3|10⟩)_0,3
|11⟩_1,2;
|v_4^(1)⟩ = |1011⟩-w_3,1/w_3,3|1110⟩
= (|01⟩-w_3,1/w_3,3|10⟩)_1,3
|11⟩_0,2;
|v_5^(1)⟩ = |1101⟩-w_3,2/w_3,3|1110⟩
= (|01⟩-w_3,2/w_3,3|10⟩)_2,3
|11⟩_0,1.
The eigenvectors can all be written in explicitly separable forms, where the
indices outside the parentheses denote the label partitions; note that these
also match the second indices in the weight ratios. Evidently
the generalized zero eigenvectors for n=3, Eqs. (<ref>) and
(<ref>), can be written in a similar product form. This is due to the
fact that the Hamiltonian (<ref>) itself can be written as a sum of
permutations of separable terms. For example, the terms in the n=4
Hamiltonian that map h=2 states to h=3 states can be expressed as
M̆_(h=2→ 3) = 1/2[𝒲_0,1I_2,3
+I_0,1𝒲_2,3+𝒲_0,2I_1,3+I_0,2𝒲_1,3
+ 𝒲_0,3I_1,2+I_0,3𝒲_1,2],
where
𝒲_i,j = |11⟩_i,j(w_2,i⟨ 01|+w_2,j⟨ 10|
)_i,j,
I_i,j = (|01⟩⟨ 01|+|10⟩⟨ 10|)_i,j.
The `identity' operator I_i,j is the sum of the two idempotents with h=1; each transition appears twice in the sum over bipartitions, which is compensated
by the 1/2 prefactor in Eq. (<ref>). Thus, M̆_(h=2→ 3)
has the form of Cartesian products of operators over all four-site bipartitions
restricted to states with specific Hamming weight. It is straightforward to
verify that the |v_1^(1)⟩ and |v_2^(1)⟩ in
Eq. (<ref>) are zero eigenvectors of the first and second Cartesian
product, respectively, and have no support on the third. Similar expressions
can be obtained for the other terms in the Hamiltonian.
Construction of A_1 and B_1 proceeds analogously to the n=3 case, and
generates M̆^(2)=B_1A_1 with basis states (graph vertices)
{|0011⟩,|0101⟩,|0111⟩,|1011⟩,|1101⟩} removed.
However, all but one of the 23 non-zero terms in the resulting matrix are
unique. From the graph perspective, only 5 edges are reweighted, 9 edges have
unchanged weights, and 9 new edges with unique weights must be added.
Generically, in the bosonic case, the number of unique terms that need to be
evaluated during the row reduction procedure grows exponentially with n.
There doesn't appear to be any way to exploit the separable nature of the
generalized zero eigenvectors to simplify the calculation.
§ DISCUSSION: PROSPECTS FOR A QUANTUM ALGORITHM
As discussed in Sec. <ref>, the permanent of an n× n
matrix M relates to the eigenvalues of another 2^n-dimensional matrix
M̃ or M̆ (the dimension of the latter is one smaller so that
one basis state is unused). This matrix has several attributes that would
appear to favor the development of an efficient quantum algorithm for the
evaluation of the permanent: the dimension of M̃ is a power of two,
which would be the case for an n-qubit operator; matrix elements of
M̃ are easily indexed by address, which correspond to their original
positions in M; M̃ is n-sparse (no row or column has more than
n non-zero elements); and the permanent is the only non-zero eigenvalue of M̃^n+1.
Despite these nice features, however, the construction of an efficient
algorithm for the permanent using this approach is not straightforward for one
principal reason: neither M̃ nor M̆ is Hermitian or unitary.
As a first attempt at a quantum algorithm, one might leverage the relation
P_n=⟨0|M̆^n|0⟩, Eq. (<ref>). The quantity on the
right-hand side can be computed using any of the known algorithms for
evaluating expectation values <cit.>. Unfortunately, such
algorithms haveO(1/ϵ)or worse dependence on additive
error <cit.>, and thus an even worse dependence on the multiplicative
error. Moreover, the operator norm ofM̆^nis not polynomially bounded
in general. Consequently, this approach fails to suggest an avenue toward an
efficient quantum algorithm.
A more promising approach might be to make use of the fact that all non-zero
eigenvalues ofM̆have absolute value|P_n|^1/n,
Eq. (<ref>). Thus, if there exists an efficient procedure to
generate one of the corresponding eigenstates, then|P_n|^1/ncan be
computed efficiently to constant or polynomially small additive error. Note
that an additive approximation of|P_n|^1/nprovides significantly more
resolution, at least for the unitary matricesMthat would be relevant to
boson sampling, in contrast to an additive approximation of|P_n|. As noted
by Aaronson and Arkhipov <cit.>, since|P_n|is typically
exponentially small
for unitary matrices sampled from a Haar random distribution, an additive
approximation of|P_n|to polynomial accuracy would almost always return
zero. On the other hand, the average of|P_n|^1/nis inΩ(1)for unitary
matrices sampled from a Haar random distribution, and therefore an
approximation of|P_n|^1/nto polynomially small or even constant additive
error provides more resolution.
Unfortunately, generating any eigenstates ofM̆,
Eq. (<ref>), is not straightforward. The coefficients in the
linear combination depend on the value ofP_n, which is unknown and in fact
the goal of the computation. Even if one could obtain a sufficiently good
approximation ofP_n(via randomized classical algorithms), taking
appropriate linear combinations using the techniques developed in
Ref. <cit.> would only generate the targeted eigenstate with
exponentially small probability, limiting the runtime of the algorithm.
Specifically, it is not obvious how to adapt block-encoding
techniques <cit.> to generate, with high probability, the state M̆^n-1|0⟩/‖M̆^n-1|0⟩‖, which is one of the
terms in the desired linear combination.
Consider instead leveraging the useful property that the state|0⟩is an
equal superposition of all eigenstates corresponding to non-zero eigenvalues ofM̆. IfM̆were unitary, this fact would have been sufficient
for computing all eigenvalues ofM̆efficiently using repeated
application of the phase estimation algorithm <cit.>. Unfortunately,
the extension of phase estimation to non-unitary operators is generally
inefficient <cit.>. For the present problem, we expect phase estimation
to take an exponentially long time, as the eigenvalues being estimated lie well
inside the unit circle.
A more sophisticated approach to computing the eigenvalues ofM̆is
based on quantum linear-system solvers <cit.>. The complexity of this
approach is limited by the condition number of the eigenvectors, however. We
have verified numerically that the condition number is exponentially large for
typical real and complex matricesM.
To summarize, mapping the problem of computing the permanent ofMto
calculating the eigenvalues ofM̆would seem to suggest new routes for
designing an efficient quantum algorithm to obtain a multiplicative
approximation of the permanent. Yet, such an algorithm does not follow from the
immediate application of the currently available algorithmic tools for linear
algebra in a quantum setting. In all likelihood, if such an algorithm exists,
it would rely on more subtle properties of the permanent than are made apparent
by the present mapping.
§ BLOCK-DIAGONAL REPRESENTATION
This section derives the expression for M̃^n+1, where M̃ is
defined by Eq. (<ref>). Consider first
M̃^2:
M̃^2 = (∑_ i_0∑_j_0=0^n-1w_h( i_0),j_0σ^+_j_0| i_0⟩⟨ i_0|+| 0⟩⟨ 1|
)(∑_ i_1∑_j_1=0^n-1w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1|+| 0⟩⟨ 1|
)
= ∑_ i_0, i_1∑_j_0,j_1w_h( i_0),j_0σ^+_j_0
| i_0⟩⟨ i_0|w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1|
+∑_ i_1∑_j_1| 0⟩⟨ 1|w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1|
+∑_ i_0∑_j_0w_h( i_0),j_0σ^+_j_0
| i_0⟩⟨ i_0| 0⟩⟨ 1|.
For the evaluation of the ⟨ i_0|w_h( i_1),j_1σ^+_j_1| i_1⟩ factor in the first term of Eq. (<ref>),
there are three
possibilities: σ^+_j_1| i_1⟩=0, σ^-_j_1| i_0
⟩=0, or σ^+_j_1| i_1⟩=| i_0⟩
(equivalently | i_1⟩=σ^-_j_1| i_0⟩ or
⟨ i_1|=⟨ i_0|σ^+_j_1), and the first two
possibilities contribute nothing to the sum. Similar arguments apply to the
second and third terms, and one obtains
M̃^2 = ∑_ i∑_j_0,j_1w_h( i),j_0w_h( i)-1,j_1σ^+_j_0| i⟩⟨ i|σ_j_1^+
+∑_j_0(w_n-1,j_0| 0⟩⟨ 1|σ_j_0^+
+w_0,j_0σ^+_j_0| 0⟩⟨ 1|).
Next consider M̃^3. After elementary algebra along the same lines as
above, one obtains
M̃^3 = ∑_ i∑_j_0,j_1,j_2(w_h( i),j_0σ^+_j_0)| i⟩⟨ i|
(w_h( i)-1,j_1σ_j_1^+)
(w_h( i)-2,j_2σ_j_2^+)
+∑_j_0,j_1(w_0,j_0σ^+_j_0)(
w_1,j_1σ^+_j_1)| 0⟩⟨ 1|
+ ∑_j_n-1,j_n-2| 0⟩⟨ 1|
(w_n-1,j_n-1σ_j_n-1^+)(w_n-2,j_n-2σ_j_n-2^+)
+∑_j_0,j_n-1(w_0,j_0σ^+_j_0)| 0⟩⟨ 1|(w_n-1,j_n-1σ_j_n-1^+).
The form of the leading term in M̃^n+1 should now be evident:
∑_ i∑_j_0,…,j_n(w_h( i),j_0σ^+_j_0)| i⟩⟨ i|(w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-n,j_nσ_j_n^+).
In the above expression, the ⟨ i|∏_kσ_j_k^+ term is
zero unless i= 1, but then
σ^+_j_0| i⟩=0, so that the leading term vanishes. The
remaining terms are straightforward generalizations of those found in
Eqs. (<ref>) and (<ref>), and one obtains
M̃^n+1 = ∑_j_0,…,j_n-1[
(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)| 0⟩⟨ 1|
+(w_0,j_0σ^+_j_0)⋯(w_n-2,j_n-2σ^+_j_n-2)| 0⟩⟨ 1|
(w_n-1,j_n-1σ^+_j_n-1)
+ …+(w_0,j_0σ^+_j_0)| 0⟩⟨ 1|(w_1,j_1σ^+_j_1)⋯(w_n-1,j_n-1σ^+_j_n-1)+| 0⟩⟨ 1|(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)
].
A corollary is that the expression for arbitrary powers p is
M̃^p = ∑_ i∑_j_0,…,j_p-1(w_h( i),j_0σ^+_j_0)| i⟩⟨ i|
(w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-p+1,j_p-1σ_j_p-1^+)
+ ∑_j_0,…,j_p-2[
(w_0,j_0σ^+_j_0)⋯(w_p-2,j_p-2σ^+_j_p-2)| 0⟩⟨ 1|
+(w_0,j_0σ^+_j_0)⋯(w_p-3,j_p-3σ^+_j_p-3)| 0⟩⟨ 1|
(w_n-1,j_p-2σ^+_j_p-2)
+ …+
| 0⟩⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-p+1,j_p-2σ^+_j_p-2)
],
which can be used to prove Eq. (<ref>), i.e. that
M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1.
First,
M̃^p| 0⟩=∑_ i∑_j_0,…,j_p-1(w_h( i),j_0σ^+_j_0)| i⟩⟨ i|
(w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-p+1,j_p-1σ_j_p-1^+)| 0⟩.
Only bitstrings i with Hamming weight p-1 will contribute, so
M̃^p| 0⟩=∑_j_0,…,j_p-1(w_0,j_0σ^+_j_0)
(w_1,j_1σ_j_1^+)⋯(w_p-1,j_p-1σ_j_p-1^+)| 0⟩.
Second, following similar reasoning,
⟨ 0|M̃^q = ∑_ i∑_j_0,…,j_q-1⟨ 0|w_h( i),j_0σ^+_j_0| i⟩⟨ i|
(w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-q+1,j_q-1σ_j_q-1^+)
+ ∑_j_0,…,j_q-2⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-q+1,j_q-2σ^+_j_q-2)
= ∑_j_0,…,j_q-2⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-q+1,j_q-2σ^+_j_q-2).
Putting these results together:
M̃^m| 0⟩⟨ 0|M̃^n-m+1 = ∑_j_0,…,j_m-1 k_0,…,k_n-m-1(w_0,j_0σ^+_j_0)
(w_1,j_1σ_j_1^+)⋯(w_m-1,j_m-1σ_j_m-1^+)| 0⟩
×⟨ 1|(w_n-1,k_0σ^+_k_0)
(w_n-2,k_1σ^+_k_1)⋯(w_m,k_n-m-1σ^+_k_n-m-1)
= ∑_j_0,…,j_n-1(w_0,j_0σ^+_j_0)
(w_1,j_1σ_j_1^+)⋯(w_m-1,j_m-1σ_j_m-1^+)| 0⟩
×⟨ 1|(w_n-1,j_mσ^+_j_m)
(w_n-2,j_m+1σ^+_j_m+1)⋯(w_m,j_n-1σ^+_j_n-1).
Comparison with the terms in Eq. (<ref>) immediately yields
M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1.
Solving the inverse potential problem in the parabolic equation by the deep neural networks method
=====================================================================================================

Mengmeng Zhang^1,2 ([email protected]) and Zhidong Zhang^3 ([email protected])

^1School of Science, Hebei University of Technology, Tianjin 300401, China
^2Nanjing Center for Applied Mathematics, Nanjing 211135, China
^3School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai 519082, Guangdong, China
In this work, we consider an inverse potential problem in the parabolic equation, where the unknown potential is a space-dependent function and the used measurement is the final time data. The unknown potential in this inverse problem is parameterized by deep neural networks (DNNs) for the reconstruction scheme. First, the uniqueness of the inverse problem is proved under some regularity assumptions on the input sources. Then we propose a new loss function with regularization terms depending on the derivatives of the residuals for partial differential equations (PDEs) and the measurements. These extra terms effectively induce higher regularity in solutions so that the ill-posedness of the inverse problem can be handled. Moreover, we establish the corresponding generalization error estimates rigorously. Our proofs exploit the conditional stability of the classical linear inverse source problem and a mollification of the noisy measurement data, which is introduced to reduce the perturbation errors. Finally, the numerical algorithm and some numerical results are provided.
AMS subject classifications: 34K28, 35R30, 65N15, 62M45.
Keywords: inverse potential problem, deep neural networks, uniqueness, generalization error estimates, numerical reconstruction.
§ INTRODUCTION.
§.§ Mathematical model.
The following parabolic system is considered in this work:
(∂_t -Δ +q(x))u =F(x,t), (x,t)∈Ω_T,
u(x,t) =b(x,t), (x,t)∈∂Ω_T,
u(x,0) =u_0(x), x∈Ω.
Here we write Ω_T=Ω×(0,T] and ∂Ω_T=∂Ω×(0,T] for short, and Ω⊂ℝ^d is an open bounded domain with sufficiently smooth boundary. F(x,t), u_0(x), b(x,t) are the source term, initial status, and boundary condition, respectively, which drive the heat propagation in the medium. The potential function q(x) ∈ L^∞(Ω), called the heat radiative coefficient of the material, is a crucial parameter for characterizing the heat conduction process. It describes the ability of the medium to propagate heat from internal sources or sinks. For known (F(x,t),u_0(x),b(x,t), q(x)) with suitable regularities, the forward problem (<ref>) is well-posed in appropriate function spaces <cit.>. In this work, we consider the inverse problem of recovering the unknown q(x), where the used measurement is the final time data
u(x,T):=φ(x), x∈Ω.
In practical applications of inverse problems, the contamination on inverse problems is unavoidable. So we will be given the noisy data φ^δ instead of the exact data φ(x) in (<ref>), which satisfies
φ^δ-φ_L^∞(Ω)≤δ.
To handle the effect caused by the perturbations, people need to develop effective methods to improve the accuracy and robustness in applications.
In this study, we choose the deep neural networks (DNNs) to solve the inverse problem (<ref>)-(<ref>). Comparing to traditional methods for solving inverse potential problem, this approach demonstrates the superiority in high-dimensional space and has the advantage of breaking the curse of dimensionality.
There are rare works on studying the inverse potential problem for parabolic equations using deep neural networks, especially the rigorous analysis of its convergence estimate. In this work, the authors will consider the solution of the inverse potential problem (<ref>)-(<ref>) parameterized by DNNs for the reconstruction scheme. We propose a new loss function with regularization terms depending on the derivatives of the residuals for PDEs and measurements. The mollification method has been employed to improve the regularity of the noisy data. Also, the generalization error estimates are rigorously derived from the conditional stability of the linear inverse source problem and the mollification error estimate on noisy data.
§.§ Literature.
The reconstructions of q(x) in (<ref>) from some inversion input data have been studied extensively. For zero initial status, the uniqueness for q(x) by
(<ref>)-(<ref>) is established in <cit.>, while the unique reconstruction using final measurement data is studied in <cit.>.
In the case of non-zero initial status, the existence and uniqueness of the generalized solution (u(x,t),q(x))∈ W_p^2,1(Ω_T) × L^p(Ω) with the time-average temperature measurement are given in <cit.> for (u_0,φ) with some regularities.
Choulli and Yamamoto <cit.> prove the generic well-posedness of the inverse problem in Hölder spaces by final measurement data, and then the conditional stability result in a Hilbert space setting for sufficiently small T is studied in <cit.>.
Chen et al <cit.> consider the inverse potential problem from a partial measurements over [T_0,T_1]×Ω with [T_0,T_1]⊂ [0,T], where the conditional stability estimates of the inverse problem in some Sobolev space and the reasonable convergence rates of the Tikhonov regularization are derived.
Recently, Jin et al <cit.> uses the same observational data and shows a weighted L^2 stability in the standard L^2 norm under a positivity condition. They provide an error analysis of reconstruction scheme based on the standard output least-squares formulation with Tikhonov regularization (by an H^1-seminorm penalty).
Zhang et al <cit.> prove the uniqueness of the identification from final time data for (sub)diffusion equations and show the conditional stability in Hilbert spaces under some suitable conditions on the problem data. The convergence and error analysis of the reconstruction discrete scheme are rigorously analyzed. Investigations of the inverse non-smooth potential problem are given in <cit.>, where the uniqueness for this nonlinear inverse problem is proved. Numerically, an iterative process called the two-point gradient method is proposed by minimizing the data-fit term and the penalty term alternatively, with a convergence analysis in terms of the tangential condition.
There also exists some works involving multiple coefficient identification. For example, Yamamoto and Zou <cit.> investigate the simultaneous reconstruction of the initial temperature and heat radiative coefficient in a heat conductive system, with stability of the inverse problem and the reconstruction scheme. Kaltenbacher and Rundell <cit.> consider the inverse problem of simultaneously recovering two unknowns, spatially dependent conductivity and the potential function from overposed data consisting of u(x,T). The uniqueness result and the convergence of an iteration scheme are established. We also refer to
<cit.> and the references therein for the inverse potential problems in diffusion models from different types of observational data.
Recently, deep learning methods for solving PDEs have been recognized as an effective approach, especially for high-dimensional PDEs. Such methods have the advantage of breaking the curse of dimensionality. The basic idea is to use neural networks (nonlinear functions) to approximate the unknown solutions of PDEs by learning the parameters. For the forward problems, there exist many numerical works with deep neural networks involving the deep Ritz method (DRM) <cit.>, the deep Galerkin method (DGM) <cit.>, the DeepXDE method <cit.>, the deep operator network method (DeepONet) <cit.>, physics-informed neural networks (PINNs) <cit.>, the weak adversarial network (WAN) <cit.> and so on.
Theoretically, there are some rigorous works investigating the convergence and error estimates for the solutions of PDEs via neural networks, but the results are still far from complete. For example, the convergence rate of DRM with two-layer networks and deep networks is studied in <cit.>; the convergence of PINNs is given in <cit.>. For the inverse problems, the PINNs frameworks can be employed to solve the so-called data assimilation or unique continuation problems, and rigorous estimates on the generalization error of PINNs are established in <cit.>.
Bao et al <cit.> develop the WAN to solve the electrical impedance tomography (EIT) problem. In <cit.>, the authors study a classical linear inverse source problem using the final time data under the framework of neural networks, where a rigorous generalization error estimate is proposed with a novel loss function including the Sobolev norm of some residuals. For more specific inverse problems applied in engineering and science, we refer to <cit.>.
§.§ Outline.
The rest of this article is organized as follows. In Section <ref> we introduce the basics of neural networks and the setting of mollification. In Section <ref>, we first introduce a conditional stability result for the linear inverse source problem. Then the uniqueness theorem (Theorem <ref>) of this inverse potential problem is proved as a consequence of the conditional stability.
In Section <ref>, a novel loss function with specific regularization terms is introduced. Then we prove the generalization error estimates of the data-driven solution of the inverse problem, which are stated in Theorem <ref>.
In Section <ref>, we propose the reconstruction algorithm and provide several experiments to show the validity of the proposed algorithm.
§ PRELIMINARIES.
§.§ Neural network architecture.
First we briefly introduce the basics of neural networks. Note that u_θ and q_η are two separate networks with different variables (x,t) and x.
Thus, we use ξ to denote collectively the network parameters for a parametric function s_ξ(z)
such that a general scheme can be applied for either u_θ(x,t) (with z=(x,t), ξ=θ)
or q_η(x) (with z=x, ξ=η).
For a positive integer K∈ℕ, a K-layer feed-forward neural
network s_ξ(z), z∈ℝ^d_0, is a function defined by
s_ξ(z):=W_K l_K-1∘⋯∘ l_1(z)+b_K,
where the k-th layer l_k: ℝ^d_k-1→ℝ^d_k
is given by l_k(z)=σ(W_k z+b_k) with weights
W_k∈ℝ^d_k× d_k-1 and biases
b_k∈ℝ^d_k for k=1, ⋯, K-1. The activation function σ(·) includes sigmoid, tanh, ReLU (Rectified Linear Unit), softmax and so on <cit.>. These activation functions introduce non-linearities and enable the network to learn complex patterns and relationships in the data.
The neural network (<ref>) consists of an input layer with argument z, where d_0=d is the problem dimension (also known as the size of input layer), an output layer which has the weights W_K∈ℝ^d_K× d_K-1 and biases b_K∈ℝ^d_K, and K-1 hidden layers for some K∈ℕ. The network parameters of all layers are collectively denoted by
ξ:=(W_K, b_K, W_K-1, b_K-1, ⋯, W_1, b_1).
In Figure <ref>, we give a simple architecture
of a fully connected neural network, where z=(x_1,x_2,⋯, x_d) is the d-dimensional input variable, and the neural network function is given as s_ξ(z)=y_NN.
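For concreteness, such a pair of networks can be sketched as follows (assuming PyTorch; the widths, depths, and the d=1 spatial dimension are illustrative choices, not taken from the analysis):

```python
import torch
import torch.nn as nn

class FNN(nn.Module):
    def __init__(self, d_in, d_out, width=32, depth=4, act=nn.Tanh):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth - 1):          # K-1 hidden layers l_k = act(W_k z + b_k)
            layers += [nn.Linear(d, width), act()]
            d = width
        layers.append(nn.Linear(d, d_out))  # output layer W_K z + b_K (no activation)
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

u_theta = FNN(d_in=2, d_out=1)  # u_theta(x, t) for spatial dimension d = 1
q_eta   = FNN(d_in=1, d_out=1)  # q_eta(x)
```

The tanh activation is a C^2 choice consistent with the regularity assumptions imposed on u_θ later in this section.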
§.§ Mollification.
In practical applications of inverse problems, noise in the measurements is unavoidable. The noisy data will make the residuals uncontrollable, as can be seen in the next section. Hence, we choose to mollify the measured data beforehand. We now introduce the mollification.
Fix a function ρ∈ C^2(ℝ) such that
suppρ=(0,1), ρ(0)=ρ(1)=ρ'(0)=ρ'(1)=0,
and
∫_0^∞ρ(t) t^d-1 dt=1/π_d,
where π_d is the surface area of the unit sphere ∂ B(0,1) in ℝ^d.
Set ρ_ϵ(x):=ϵ^-dρ(x/ϵ), and define the mollifier as
G_ϵψ=∫_|x-y|≤ϵρ_ϵ(|x-y|)ψ(y) dy.
Then we have
∫_ℝ^dρ_ϵ(|x-y|) dy=1.
In the next lemma, we are concerned with the estimate of Δφ-Δ G_ϵ(φ^δ).
Assume that the noisy data φ^δ∈ L^∞(Ω) and the exact data u(x,T):=φ(x)∈ H^2(Ω) satisfy
‖φ-φ^δ‖_L^∞(Ω)≤δ.
Also, the exact data satisfies the following high-order Lipschitz continuity condition: we can find a positive constant C_φ such that
|φ(x)-φ(y)| ≤ C_φ|y-x|,
|Δφ(x)-Δφ(y)| ≤ C_φ|y-x|,
for x,y∈Ω uniformly. For the mollification operator (<ref>), if we pick ϵ=O(δ^1/3), then we can achieve the following optimal error bound
‖Δφ-Δ G_ϵ( φ^δ)‖_L^∞(Ω)≤ Cδ^1/3.
We split the difference Δφ-Δ G_ϵ( φ^δ) as follows:
Δφ-Δ G_ϵ( φ^δ)
= (Δφ-G_ϵ(Δφ))+(G_ϵ(Δφ)-Δ G_ϵ( φ^δ))=:I_1+I_2.
For I_1, we have that
|I_1|≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |Δφ(x)-Δφ(y)| dy
≤ Cϵ.
For I_2, Green's identities and the properties of the kernel function ρ give that
Δ G_ϵ( φ)=G_ϵ( Δφ). Hence,
I_2 =Δ[ ∫_R^dρ_ϵ(|x-y|) (φ(y)-φ^δ(y)) dy]
=∫_R^dΔρ_ϵ(|x-y|) (φ(y)-φ^δ(y)) dy.
A straightforward calculation shows that
Δρ_ϵ(|x-y|)=ϵ^-d-2ρ”(|x-y|/ϵ)+(d-1)ϵ^-d-1|x-y|^-1ρ'(|x-y|/ϵ),
which gives
|I_2|≤δ∫_|x-y|≤ϵ|Δρ_ϵ(|x-y|)| dy
≤ C δϵ^-2.
So we have
| Δφ-Δ G_ϵ(φ^δ)|≤ Cϵ(1+δϵ^-3).
By picking ϵ=O(δ^1/3), we can achieve the desired estimate and complete the proof.
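A numerical sketch of the mollifier in one dimension is given below (our own helper; the bump kernel satisfies the vanishing conditions at the endpoints, the normalization is imposed discretely rather than through π_d, and the grid spacing is assumed fine enough to resolve ϵ=O(δ^{1/3})):

```python
import numpy as np

def mollify(x, f_noisy, delta):
    eps = delta ** (1.0 / 3.0)         # the lemma's choice eps = O(delta^(1/3))
    dx = x[1] - x[0]
    r = np.arange(0.0, eps, dx)
    rho = (r / eps) ** 2 * (1.0 - r / eps) ** 2    # C^2 bump; rho and rho'
    kernel = np.concatenate([rho[::-1], rho[1:]])  # vanish at both endpoints
    kernel /= kernel.sum()             # discrete normalization
    return np.convolve(f_noisy, kernel, mode="same")
```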
§ UNIQUENESS.
The uniqueness of this inverse potential problem is one of our main results. In this section, we prove the uniqueness; the proof relies on the conditional stability of the inverse source problem for equation (<ref>). The conditional stability will be stated in the next subsection.
§.§ Conditional stability of the inverse source problem.
Under the framework of DNNs, the total error of the reconstructed solution depends on the training error and the measurement error. This connection relies on the conditional stability of the linear inverse source problem, i.e., the quantitative dependence of the unknown source on the measurement data. Consequently, here we introduce some known results for the linear inverse source problem.
The mathematical statement of inverse source problem in parabolic equations with final time data is given below. For the parabolic equation
(∂_t-Δ +q(x))v(x,t) =p(x)h(x,t), (x,t)∈Ω_T,
v(x,t) =0, (x,t)∈∂Ω_T,
v(x,0) =0, x∈Ω,
we set q≥ 0 and q∈ L^∞ (Ω), and h(x,t) is given.
Then the inverse source problem is to use the measurement
φ(x):=v[p](x,T)
to recover the unknown p(x) in the source term.
Recalling the norm of the classical Sobolev space W^2,1_2(Ω_T) as
u_W^2,1_2(Ω_T)=√(∑_|α|≤ 2D^αu^2_L^2(Ω_T)+u_t^2_L^2(Ω_T)),
the following classical result on the inverse source problem (<ref>)-(<ref>) can be found in <cit.>.
For equation (<ref>), we assume that
h∈ L^∞(Ω_T), h_t∈ L^∞(Ω_T), p(x)∈ L^2(Ω), ph∈ L^2(Ω_T), ph_t ∈ L^2(Ω_T),
and
h(x,t)≥ 0, h_t(x,t)≥ 0 on Ω_T, |h(x,T)|≥ν >0 on Ω.
Here ν is a fixed positive number. Then, for known q∈ L^∞(Ω) and input data φ∈ H^2(Ω), there exists a unique solution (v(x,t), p(x)) ∈ W^2,1_2(Ω_T)× L^2(Ω) to (<ref>)-(<ref>),
following the estimate
p_L^2(Ω)+v_W^2,1_2(Ω_T)≤ C (-Δ+q)φ_L^2(Ω).
The constant C depends on q_L^∞(Ω), ν, Ω and T.
§.§ Uniqueness theorem.
Now it is time to show the uniqueness theorem. First we introduce the admissible set for the unknown potential q(x) as
𝒜:={ψ∈ L^∞(Ω): 0≤ψ(x)≤ M a.e. on Ω}⊂ L^2(Ω).
The constant M is the given upper bound of the admissible set. Next, recalling equation (<ref>), we collect some restrictions on the controllable source F(x,t), initial status u_0(x) and boundary condition b(x,t).
The assumptions on F(x,t), u_0(x) and b(x,t) are given as follows.
* u_0(x)∈ H^2(Ω), u_0(x)=b(x,0) on ∂Ω, ∃ν>0 such that u_0(x)≥ν >0 on Ω;
* b∈ H^2(∂Ω), b≥ν>0 on ∂Ω, b_t ≥ 0 on ∂Ω;
* F∈ L^2(Ω_T), F_t∈ L^2(Ω_T), F≥ 0 on Ω_T, F_t≥ 0 on Ω_T;
* Δ u_0(x)-Mu_0(x)+F(x,0)≥ 0 on Ω.
Under Assumption <ref>, the inverse problem (<ref>)-(<ref>) has at most one solution in W^2,1_2(Ω_T)×𝒜.
Assume that there are two distinct pairs (u[q_1], q_1) and (u[q_2], q_2) satisfying (<ref>)-(<ref>) with same data
u[q_1](x,T)=u[q_2](x,T)=φ(x).
Setting
w(x,t):=u[q_1](x,t)-u[q_2](x,t), q(x):=q_2(x)-q_1(x),
then w(x,t) meets the system
(∂_t-Δ +q_1(x))w(x,t) =q(x)u[q_2](x,t), (x,t)∈Ω_T,
w(x,t) =0, (x,t)∈∂Ω_T,
w(x,0) =0, x∈Ω,
with
w(x,T) = 0, x∈Ω.
We need to prove that
(w(x,t), q(x))=(0,0) in W^2,1_2(Ω_T)× L^∞ (Ω).
Obviously q(x)∈ L^2(Ω). Also there holds u[q_2]∈ L^∞(Ω_T) and
u_t[q_2]∈ L^∞(Ω_T) by <cit.>. Then we have
q u[q_2]∈ L^2(Ω_T), q u_t[q_2]∈ L^2(Ω_T).
Under Assumption <ref> and the maximum principle, we can see that u[q_2]≥ 0 on Ω_T. For u_t[q_2], with Assumption <ref> and equation (<ref>), it satisfies
(∂_t -Δ +q_2(x)) (u_t[q_2]) =F_t(x,t)≥0, (x,t)∈Ω_T,
u_t[q_2](x,t) =b_t(x,t)≥ 0, (x,t)∈∂Ω_T,
u_t[q_2](x,0) =Δ u_0(x)-q_2u_0(x)+F(x,0)≥ 0, x∈Ω.
Then the maximum principle leads to u_t[q_2]≥ 0 straightforwardly. With the positivity of u_t[q_2], we derive that
u[q_2](x,t)=u_0(x)+∫_0^t ∂_s u[q_2](x,s) ds ≥ u[q_2](x,0)≥ν >0, (x,t)∈Ω_T,
which yields u[q_2](x,T)≥ν>0.
Now the conditions of Lemma <ref> are satisfied, and we conclude (w(x,t),q(x))=(0,0) by applying Lemma <ref> on (<ref>)-(<ref>). The proof is complete.
§ GENERALIZATION ERROR ESTIMATES.
In this section, we will discuss the error estimate of our approach for the inverse potential problem. Firstly, we introduce the corresponding residuals and define the loss function.
§.§ Loss function and training errors.
We propose a formulation of loss function for data-driven solutions of inverse problems, which can ensure the accuracy with the conditional stability of the given linear inverse source problem. To achieve it, we define suitable residuals that measure the errors of the governed system and the input data.
Assume that the activation function of the neural network u_θ defined by (<ref>) is of C^2 regularity, which leads to u_θ∈ H^2(Ω× [0, T]). For the network parameters
θ∈Θ:={{(W_k, b_k)}_k=1^K : W_k∈ℝ^d_k× d_k-1, b_k∈ℝ^d_k},
the function u_θ(x,t) and its weak derivatives up to second order are bounded in Ω× [0,T] for any specific θ. Similarly, noticing that q_η(x) is the parametric neural network approximating the potential function q(x), we assume that the activation function for the neural network q_η(x) is of L^∞ regularity such that q_η(x)∈ L^∞(Ω). We define
* Interior PDE residual
ℛ_int,θ,η(x, t):=∂_t u_θ(x, t)-Δ u_θ(x, t)+ q_η(x)u_θ(x, t)-F(x,t), (x,t) ∈Ω_T.
* Spatial boundary residual
ℛ_sb,θ(x, t):=u_θ(x, t)-b(x,t), (x,t) ∈∂Ω_T.
* Initial status residual
ℛ_tb, θ(x):=u_θ(x, 0)- u_0(x), x ∈Ω.
* Data residual
ℛ_d, θ(x):=u_θ(x, T)- G_ϵφ^δ(x), x ∈Ω.
Note that in the data residual (<ref>), we use the mollified data G_ϵφ^δ(x) instead of the noisy data φ^δ(x). A loss function minimization scheme for data-driven inverse problems seeks to minimize these residuals comprehensively with some weights balancing different residuals. The loss function is defined as follows:
J_λ(θ,η)
= q_ηℛ_d,θ^2_L^2(Ω) +Δℛ_d,θ^2_L^2(Ω)
+ λℛ_int,θ,η^2_H^1(0,T;L^2(Ω))
+ℛ_tb,θ^2_L^2(Ω)
+q_ηℛ_tb,θ^2_L^2(Ω)
+Δℛ_tb,θ^2_L^2(Ω)+ℛ_sb,θ^2_H^2(0,T;L^2(∂Ω)),
where λ is a hyper-parameter to balance the residuals between the knowledge of PDE and the measurements. The proposed loss function (<ref>) includes derivative penalties on the residuals. This is motivated by the conditional stability result for linear inverse source problem, which requires higher regularity on the measurement data u(·,T) (see Lemma <ref>). To improve the regularity of the noisy measurement data, we employ the mollification method by applying the mollification operator G_ε on the noisy data φ^δ. The design of the loss function for inverse problems distinguishes itself from that for forward problems such as physics-informed neural networks. The smoothness requirements not only ensure the existence of forward problem solutions, but also ensure the well-posedness of the inverse problem within the optimization framework.
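For concreteness, the interior residual (<ref>) and the derivatives entering the loss can be evaluated by automatic differentiation. The following PyTorch sketch (the helper name pde_residual is ours, and F is assumed to be a callable returning the source values at the sample points) illustrates the computation; the quantity ∂_tℛ_int needed in (<ref>) is obtained by one further differentiation of the output with respect to t.

```python
import torch

def pde_residual(u_net, q_net, xt, F):
    """R_int(x,t) = u_t - Laplace(u) + q(x) u - F(x,t) at sample points xt = (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = u_net(xt)                                              # shape (N, 1)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_t = grads[:, -1:]                                        # last column is time
    lap = torch.zeros_like(u_t)
    for i in range(xt.shape[1] - 1):                           # spatial coordinates
        u_xi = grads[:, i:i + 1]
        lap = lap + torch.autograd.grad(u_xi.sum(), xt,
                                        create_graph=True)[0][:, i:i + 1]
    return u_t - lap + q_net(xt[:, :-1]) * u - F(xt)
```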
The following standard loss function
J^s(θ,η)
= ℛ_d,θ_L^2(Ω)^2
+λℛ_int,θ,η_L^2(Ω_T)^2+
ℛ_tb,θ_L^2(Ω)^2
+ℛ_sb,θ_L^2(∂Ω_T)^2
has often been used in the literature.
For example, the DGM workflow adopts this form of loss function and minimizes it by least squares scheme <cit.>.
To determine (θ,η) from the discrete training set, accurate numerical evaluation of the integrals in (<ref>) is essential. We introduce the following training sets that facilitate efficient computation of the integrals, leading to better performance:
𝒮_d :={(x_n,T): x_n∈Ω, n=1,2,⋯,N_d},
𝒮_int :={(x_n,t_n): (x_n,t_n)∈Ω_T, n=1,2,⋯,N_int},
𝒮_tb :={(x_n,0): x_n∈Ω, n=1,2,⋯,N_tb},
𝒮_sb :={(x_n,t_n): (x_n,t_n)∈∂Ω_T, n=1,2,⋯,N_sb}.
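By way of illustration, in the geometry of Example 1 below (Ω_T=[0,1]^3) these four sets can be generated by uniform random sampling, which corresponds to a simple Monte Carlo quadrature with equal weights; the point counts follow the experiments, while the sampling strategy itself is an illustrative assumption.

```python
import torch

N_int, N_sb, N_tb, N_d = 256, 4 * 256, 256, 256
S_int = torch.rand(N_int, 3)                                      # (x, y, t) interior
S_tb = torch.cat([torch.rand(N_tb, 2), torch.zeros(N_tb, 1)], 1)  # t = 0
S_d  = torch.cat([torch.rand(N_d, 2), torch.ones(N_d, 1)], 1)     # t = T = 1
side = torch.randint(0, 4, (N_sb,))                               # boundary side index
xy = torch.rand(N_sb, 2)
xy[side == 0, 0] = 0.0; xy[side == 1, 0] = 1.0                    # x = 0 or x = 1
xy[side == 2, 1] = 0.0; xy[side == 3, 1] = 1.0                    # y = 0 or y = 1
S_sb = torch.cat([xy, torch.rand(N_sb, 1)], 1)
```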
Applying these sets and the numerical quadrature rules <cit.>, we get the following empirical loss function
J_λ^N(θ,η)
=∑_n=1^N_dω_n^d,0|q_η(x_n)ℛ_d,θ(x_n)|^2+
∑_n=1^N_dω_n^d,1|Δℛ_d,θ(x_n)|^2
+λ∑_n=1^N_intω_n^int,0|ℛ_int,θ,η(x_n, t_n)|^2
+λ∑_n=1^N_intω_n^int,1|∂_tℛ_int,θ,η(x_n, t_n)|^2
+∑_n=1^N_tbω_n^tb,0|ℛ_tb,θ(x_n)|^2
+ ∑_n=1^N_tbω_n^tb,1|q_η(x_n)ℛ_tb,θ(x_n)|^2
+∑_n=1^N_tbω_n^tb,2|Δℛ_tb,θ(x_n)|^2
+∑_n=1^N_sbω_n^sb,0|ℛ_sb,θ(x_n, t_n)|^2
+∑_n=1^N_sbω_n^sb,1|∂_tℛ_sb,θ(x_n, t_n)|^2 +∑_n=1^N_sbω_n^sb,2|∂_t^2ℛ_sb,θ(x_n, t_n)|^2,
where the coefficients
ω^d,k_n, ω^int,k_n, ω^tb,j_n, ω^sb,j_n, k=0,1, j=0,1,2
are the quadrature weights. It is easy to see that the error for the loss function is
|J_λ(θ,η)-J_λ^N(θ,η)|
≤ Cmin{ N_d^-α_d,k,N_int^-α_int,k,N_tb^-α_tb,j,
N_sb^-α_sb,j:k=0,1, j=0,1,2},
where C depends on the continuous norm ·_C(Ω) of the integrands, and the rates α^d,k, α^int,k, α^tb,j, α^sb,j (k=0,1, j=0,1,2) are positive and depend on the regularity of the underlying integrand, i.e., on the space C(Ω). Therefore, the underlying solutions and neural networks should be sufficiently regular so that the residuals can be approximated to high accuracy by the quadrature rule.
Now, we define the generalization errors as
ℰ_G,q:=q-q^*_L^2(Ω),
ℰ_G,u:=u-u^*_C([0, T] ;
L^2(Ω)),
where u^*:=u_θ^*, q^*:=q_η^*, with (θ^*,η^*) being the minimizer of the functional (<ref>). Also, we estimate generalization errors in terms of the following training errors:
* The measurement data training errors:
ℰ_T,d:=ℰ_T,d,0+ℰ_T,d,1, where
ℰ_T,d,0:=(∑_n=1^N_dω_j^d,0|
q_η(x_n)ℛ_d,θ^*(x_n)|^2)^1/2,
ℰ_T,d,1:=(∑_n=1^N_dω_j^d,1|Δℛ_d,θ^*(x_n)|^2)^1/2.
* The interior PDE training errors: ℰ_T,int:=ℰ_T,int,0+ℰ_T,int,1, where
ℰ_T,int,0:=(∑_n=1^N_intω_n^int,0|ℛ_int,θ^*,η^*(x_n, t_n)|^2)^1/2,
ℰ_T,int,1:=(∑_n=1^N_intω_n^int,1|∂_tℛ_int,θ^*,η^*(x_n, t_n)|^2)^1/2.
* The initial condition training errors: ℰ_T,tb:=ℰ_T,tb,0+ℰ_T,tb,1+ℰ_T,tb,2, where
ℰ_T,tb,0
:=(∑_n=1^N_tbω_n^tb,0|ℛ_tb,θ^*(x_n)|^2)^1/2,
ℰ_T,tb,1
:=(∑_n=1^N_tbω_n^tb,1|q_η(x_n)ℛ_tb,θ^*(x_n)|^2)^1/2,
ℰ_T,tb,2
:=(∑_n=1^N_tbω_n^tb,2|Δℛ_tb,θ^*(x_n)|^2)^1/2.
* The spatial boundary condition training errors: ℰ_T,sb:=ℰ_T,sb,0+ℰ_T,sb,1+ℰ_T,sb,2, where
ℰ_T,sb,0
:=(∑_n=1^N_sbω_n^sb,0|ℛ_sb,θ^*(x_n, t_n)|^2)^1/2,
ℰ_T,sb,1
:=(∑_n=1^N_sbω_n^sb,1|∂_tℛ_sb,θ^*(x_n, t_n)|^2)^1/2,
ℰ_T,sb,2
:=(∑_n=1^N_sbω_n^sb,2|∂_t^2ℛ_sb,θ^*(x_n, t_n)|^2)^1/2.
§.§ Proofs of the estimates.
Now we can state the theorem about the generalization error estimates.
Recall the errors defined in (<ref>)-(<ref>). Under Assumption <ref>, there exists a unique solution to the inverse problem (<ref>)-(<ref>). Moreover, for the approximate solution (u^*,q^*) of the inverse problem with (θ^*,η^*) being a global minimizer of the loss function J_λ^N(θ,η), we have the following generalization error estimates
ℰ_G,q ≤ C( ℰ_T,d+ ℰ_T,int
+ ℰ_T,sb,1
+ ℰ_T,sb,2
+ ℰ_T,tb,1
+ ℰ_T,tb,2
+C_q^1/2 N^-α/2
+O(δ^1/3)),
ℰ_G,u ≤
C( ℰ_T,d+ ℰ_T,int
+ ℰ_T,sb
+ ℰ_T,tb
+C_q^1/2 N^-α/2
+O(δ^1/3)),
where
N =min{N_d, N_int,N_sb,N_tb},
α =min{α_int,0,α_int,1,α_sb,0,α_sb,1,α_sb,2,α_tb,0,α_tb,1,α_d},
in (<ref>), and
C_q=max{C_q,0,C_q,1,
C_qs,0,C_qs,1,C_qs,2,C_qt,0,C_qt,1, C_qd},
with
C_qd=C_qd(ℒ^*ℛ_d,θ^*_C(Ω)),
C_q,0=C_q,0(ℛ_int,θ^*,η^*_C(Ω_T)),
C_q,1=C_q,1(∂_tℛ_int,θ^*,η^*_C(Ω_T)),
C_qs,0=C_qs,0(ℛ_sb,θ^*_C(∂Ω_T)),
C_qs,1=C_qs,1(∂_tℛ_sb,θ^*_C(∂Ω_T)), C_qs,2=C_qs,2(∂_t^2ℛ_sb,θ^*_C(∂Ω_T)),
C_qt,0=C_qt,0(ℛ_tb,θ^*_C(Ω)), C_qt,1=C_qt,1(ℒ^*ℛ_tb,θ^*_C(Ω)).
The constant C depends on q^*_L^∞(Ω), Ω and T.
First, we introduce û:=u^*-u and observe that
(∂_t -Δ +q^*(x))û(x,t) =ℛ_int,θ^*,η^*(x, t)+(q-q^*)u[q](x,t), (x,t)∈Ω_T,
û(x,t) =ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T,
û(x,0) =ℛ_tb,θ^*(x), x∈Ω,
with the final condition
û(x,T)=u[q^*](x,T)-u[q](x,T)=ℛ_d,θ^*(x)-(φ-G_ϵφ^δ).
We make the decomposition û:=û_1+û_2, where û_1, û_2 satisfy
(∂_t -Δ +q^*(x))û_1(x,t) =(q^*-q)(x)u[q](x,t), (x,t)∈Ω_T,
û_1(x,t) =0, (x,t)∈∂Ω_T,
û_1(x,0) =0, x∈Ω,
with
û_1(x,T)=ℛ_d,θ^*(x)-(φ(x)-G_ϵφ^δ(x))-û_2(x,T),
and
(∂_t -Δ + q^*(x))û_2(x,t) =ℛ_int,θ^*,η^*(x, t), (x,t)∈Ω_T,
û_2(x,t) =ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T,
û_2(x,0) =ℛ_tb,θ^*(x), x∈Ω,
respectively.
Define the operator ℒ^* as ℒ^* ψ= (-Δ+q^*)ψ.
With Assumption <ref>, we can apply Lemma <ref> to (<ref>)-(<ref>) and deduce that
q^*-q_L^2(Ω)
≤ Cℒ^*û_1(·,T)_L^2(Ω)
= Cℒ^* ℛ_d,θ^*-ℒ^*(φ-G_ϵφ^δ)-ℒ^*û_2(·,T)_L^2(Ω)
= Cℒ^* ℛ_d,θ^*+Δφ-Δ G_ϵφ^δ-q^*(φ-G_ϵφ^δ)-ℒ^*û_2(·,T)_L^2(Ω)
≤ C (ℒ^*ℛ_d,θ^*_L^2(Ω)+Δφ-Δ G_ϵφ^δ_L^2(Ω)+q^*(φ-G_ϵφ^δ)_L^2(Ω)+ ℒ^*û_2(·,T)_L^2(Ω)),
with C=C(q^*_L^∞(Ω),Ω,T). Using Lemma <ref>, we get
Δφ-Δ G_ϵφ^δ_L^2(Ω)
≤ Cϵ(1+δϵ^-3).
Also, we have
|(φ-G_ϵφ^δ)|
≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(x)-φ^δ(y)| dy
≤∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(x)-φ(y)|dy + ∫_|x-y|≤ϵρ_ϵ(|x-y|) |φ(y)-φ^δ(y)| dy
≤ Cϵ + δ .
Thus, there holds
q^*(φ-G_ϵφ^δ)_L^2(Ω)≤ Cϵ + δ.
By straightforward computations, we have
ℒ^*û_2(·,T)_L^2(Ω) ≤∂_tû_2(·,T)_L^2(Ω)+
ℛ_int,θ^*,η^*(·, T)_L^2(Ω)
≤∂_tû_2_L^∞(0,T;L^2(Ω))+
ℛ_int,θ^*,η^*(·, T)_L^2(Ω).
Setting w(x,t):=∂_t û_2(x,t), it satisfies
(∂_t -Δ +q^*)w(x,t) =∂_tℛ_int,θ^*,η^*(x, t), (x,t)∈Ω_T,
w(x,t) =∂_t ℛ_sb,θ^*(x,t), (x,t)∈∂Ω_T,
w(x,0) =ℛ_int,θ^*,η^*(x, 0)-ℒ^*ℛ_tb,θ^*(x), x∈Ω.
Using the regularity theory for the direct problem (<ref>), we obtain
∂_tû_2_L^∞(0,T;L^2(Ω)) =w_L^∞ (0,T;L^2(Ω))
≤ C( ∂_tℛ_int,θ^*,η^*_L^2(Ω_T)
+∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+ℛ_int,θ^*,η^*(·,0)_L^2(Ω)
+ℒ^*ℛ_tb,θ^*_L^2(Ω)).
Combining (<ref>)-(<ref>) together and using the Sobolev embedding theorem, we get
ℰ_G,q =q^*-q_L^2(Ω)
≤ C(ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω))
+ℒ^*ℛ_d,θ^*_L^2(Ω)+ℒ^*ℛ_tb,θ^*_L^2(Ω)
+∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+ϵ(1+δϵ^-3)+ϵ+δ)
≤ C
(ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω))
+q^*ℛ_d,θ^*_L^2(Ω)
+Δℛ_d,θ^*_L^2(Ω)+q^*ℛ_tb,θ^*_L^2(Ω)
+Δℛ_tb,θ^*_L^2(Ω)
+∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+ϵ(1+δϵ^-3)+ϵ+δ),
with C=C(q^*_L^∞(Ω),Ω,T)>0. Picking ϵ=O(δ^1/3), we achieve the estimate
ℰ_G,q =q^*-q_L^2(Ω)
≤ C
( ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω))
+q^*ℛ_d,θ^*_L^2(Ω)
+Δℛ_d,θ^*_L^2(Ω)
+q^*ℛ_tb,θ^*_L^2(Ω)
+Δℛ_tb,θ^*_L^2(Ω)
+∂_tℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+O(δ^1/3)).
Finally, we will evaluate the generalization error for the unknown u(x,t) employing the obtained generalization error (<ref>) of the potential function. From the classical regularity theory for PDE, if F(x,t) is sufficiently smooth, then it holds that u∈L^2(0,T;L^∞(Ω)). Consequently,
ℰ_G,u =û_C([0,T];L^2(Ω))
≤ C(ℛ_int,θ^*,η^*+(q^*-q)u[q]_L^2(Ω_T)+ℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+ ℛ_tb,θ^*_L^2(Ω))
≤
C(ℛ_int,θ^*,η^*_L^2(Ω_T)
+q^*-q_L^2(Ω) u_L^2(0,T;L^∞(Ω))+ℛ_sb,θ^*_H^1(0,T;L^2(∂Ω))
+ℛ_tb,θ^*_L^2(Ω))
≤ C
(ℛ_int,θ^*,η^*_H^1(0,T;L^2(Ω))
+q^*ℛ_d,θ^*_L^2(Ω)
+Δℛ_d,θ^*_L^2(Ω)+ℛ_tb,θ^*_L^2(Ω)
+q^*ℛ_tb,θ^*_L^2(Ω)+
Δℛ_tb,θ^*_L^2(Ω)+ℛ_sb,θ^*_H^2(0,T;L^2(∂Ω))+O(δ^1/3)).
The proof is complete.
The estimate (<ref>) demonstrates that well-trained neural networks produce small generalization errors for the inverse problem. Specifically, when all components of the training errors, including the interior PDE errors, the measurement errors, and the initial and boundary value ones, are sufficiently small, and the training sample is large enough, the generalization errors for the inverse problem solved with neural networks are well controlled.
This differs from classical stability results that rely solely on the knowledge of data. In this work, the generalization error estimates reflect stability due to both the model itself and the reconstruction algorithm. From Theorem <ref>, we see that we can control the errors of both the inverse and forward problems through the residuals and the mollification parameter. This provides important insights into the mathematical properties of our approach and plays an important role in the construction of algorithms.
§ NUMERICAL RECONSTRUCTIONS.
§.§ Reconstruction algorithm.
The neural networks u_θ(x,t) and q_η(x) depend on the parameters θ and η
encoding the network information for the chosen activation functions.
Within the standard paradigm of deep learning,
one trains the networks by finding the optimal parameters (θ^*,η^*)
such that the loss function (<ref>) is minimized.
Our target is the unknown solution of the inverse problem
(<ref>)-(<ref>) and we wish to find the
trainable parameters (θ^*, η^*) such that the corresponding neural networks
(u_θ^*, q_η^*) approximate (u,q) well.
More precisely, to solve (<ref>)-(<ref>)
we first parameterize u and q by deep neural networks u_θ and q_η
with network parameters (θ,η) respectively.
Then, we design an appropriate loss function, which is minimized to determine the parameters (θ,η). Finally, a gradient-based method is applied to alternately update the network parameters so that (u_θ, q_η) gradually approximates (u,q) for our inverse problem.
We provide a schematic of the neural networks in Figure <ref>.
The left part visualizes the two unknowns as two standard neural networks parameterized by θ and η, respectively. The right part applies the given physical laws to the networks. B.C., I.C. and D are the boundary condition, initial status and the measurement data obtained from random sample points in the training sets 𝒮_sb, 𝒮_tb and 𝒮_d, respectively. The training points in 𝒮_int are randomly sampled as the PDE residual points in the interior spatio-temporal domain. The loss function, which involves Sobolev norms, is computed on the sample points; the required derivatives are obtained efficiently through automatic differentiation (AD). Minimizing the loss with respect to the parameters (θ, η) alternately produces u_θ^∗ and q_η^∗, which serve as the approximation to the solution of the inverse problem.
With the support of Theorem <ref>, we can construct the proposed Algorithm <ref> for solving the inverse problem (<ref>)-(<ref>).
The above minimization problem seeks a minimizer of a possibly non-convex function
J_λ^N(θ,η) over Θ⊂ℝ^ℳ, with ℳ possibly very large.
The hyper-parameters (τ_η,τ_θ) are learning rates and (λ_η,λ_θ)
are balance hyper-parameters between PDE and measurement data residuals.
The robustness with respect to the hyper-parameter λ and to the neural network architecture is studied in the next subsection.
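A minimal sketch of the resulting training loop is given below; the function empirical_loss is a placeholder for the quadrature-based loss J_λ^N assembled from the residuals on the training sets, and the simultaneous update shown here is one simple variant of the alternating scheme.

```python
import torch

opt_theta = torch.optim.Adam(u_theta.parameters(), lr=1e-3)
opt_eta   = torch.optim.Adam(q_eta.parameters(),   lr=1e-3)

for it in range(50_000):
    loss = empirical_loss(u_theta, q_eta, lam=1e-2)   # J_lambda^N on the sample points
    opt_theta.zero_grad()
    opt_eta.zero_grad()
    loss.backward()
    opt_theta.step()                                  # update theta, then eta
    opt_eta.step()
```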
The optimizer in Algorithm <ref> is Adam (Adaptive Moment Estimation), which is an optimization algorithm commonly used in deep learning for training neural networks. The key idea of Adam is to adaptively adjust the learning rate for each parameter based on estimates of both the first-order moment (the mean) and the second-order moment (the uncentered variance) of the gradients. This adaptation helps Adam to perform well in different types of optimization problems. The algorithm maintains an exponentially moving average of gradients (m_t) and squared gradients (V_t) for each parameter. At each iteration, Adam updates the parameters using a combination of these moving average estimates. It incorporates bias correction to account for the fact that the estimates are biased towards zero at the beginning of training.
Let g_t denote the gradient of the stochastic objective at timestep t, let β_1, β_2∈[0,1) be the exponential decay rates for the moment estimates, and let τ be the initial learning rate. Good default settings for the tested machine learning problems are τ=0.001, β_1=0.9, β_2=0.999 and fuzzy factor ϵ=10^-8.
The updates are calculated as follows:
(1) Initialize the first moment vector m and the second moment vector V with zeros for each parameter:
m_0=V_0=0.
(2) Update the first moment estimate m using a weighted average of the current gradient g_t and the previous first moment estimate m_t-1:
m_t=β_1m_t-1+(1-β_1)g_t.
(3) Update the second moment estimate V using a weighted average of the squared gradients and the previous second moment estimate V_t-1:
V_t=β_2V_t-1+(1-β_2)g_t^2.
(4) Calculate the bias-corrected first and second moment estimate
to correct for their initialization bias:
m̂_t=m_t/1-(β_1)^t, V̂_t=V_t/1-(β_2)^t.
(5) Update the parameters ξ by moving in the direction of the first moment estimate, where the learning rate is τ divided by the square root of the second moment estimate:
ξ_t=ξ_t-1-τm̂_t/√(V̂_t)+ϵ.
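The five steps above translate directly into code; the following NumPy sketch of a single Adam update is given for illustration (the function name is ours).

```python
import numpy as np

def adam_step(xi, g, m, V, t, tau=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update at timestep t >= 1 for parameters xi with gradient g."""
    m = beta1 * m + (1.0 - beta1) * g          # first-moment estimate
    V = beta2 * V + (1.0 - beta2) * g**2       # second-moment estimate
    m_hat = m / (1.0 - beta1**t)               # bias corrections
    V_hat = V / (1.0 - beta2**t)
    xi = xi - tau * m_hat / (np.sqrt(V_hat) + eps)
    return xi, m, V
```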
The hyper-parameters in Adam include the learning rate and the decay rates for the moving averages. These hyper-parameters need to be tuned based on the specific problem and dataset to achieve optimal performance. Adam has several advantages that make it popular in deep learning:
(a) Adaptive learning rate: Adam automatically adapts the learning rate for each parameter based on the estimated first and second moments. This adaptive behavior helps in effectively navigating the optimization landscape and can lead to faster convergence.
(b) Efficiency: Adam uses the moving averages to maintain a history of gradients, which eliminates the need to store and compute gradients for each iteration separately. This makes Adam memory-efficient and allows for efficient parallelization during training.
(c) Robustness: Adam performs well across a wide range of optimization problems and is less sensitive to hyper-parameter tuning compared to some other optimizers. It can handle sparse gradients and noisy data effectively.
The proposed algorithm, which utilizes (<ref>) as the loss function, exhibits superior performance in recovering smooth solutions due to the higher-order regularity penalties on the PDE residual term. This regularity term promotes smoother solutions and is an important factor in achieving higher accuracy.
Furthermore, the use of automatic differentiation (AD) implementations enables the efficient calculation of the necessary derivatives. This feature is a significant advantage of our approach as it allows for the accurate optimization of the objective function, which is crucial for effective solution of inverse problems. To validate the effectiveness of the proposed algorithm and to substantiate our claims, we conduct a series of numerical experiments.
§.§ Numerical experiments.
In this subsection, we will present several numerical examples for the spatial domain Ω⊂ℝ^d with d=2,3.
We define the following relative errors for exact solutions (u,q) and numerical approximations (u^*,q^*) as
Re_u:=u-u^*_L^2(Ω_T)/u_L^2(Ω_T), Re_q:=q-q^*_L^2(Ω)/q_L^2(Ω), Re_Δ u:=Δ u-Δ u^*_L^2(Ω_T)/Δ u_L^2(Ω_T).
Example 1 (two-dimensional experiment):
For equation (<ref>), we set the exact solution u and the domain Ω_T as
u(x,y,t)=(x^2+y^2+1)exp(t), (t,x,y)∈Ω_T=[0,1]^3.
The exact potential q is given as
q(x,y)=sin(π x)sin(π y).
The initial and boundary conditions can be calculated from the representation of u straightforwardly. The exact measurement will be
u(x,y,1)=φ(x,y)=(x^2+y^2+1)exp(1),
and in our experiments the noisy data is set as
φ^δ(x,y):=φ(x,y)+δ· (2 rand(shape(φ(x,y)))-1),
where rand(shape(φ)) is a random variable generated by uniform distribution in [0,1].
For the implementation details, we use a fully connected neural network for u_θ and q_η with 3 hidden layers, each with a width of 20.
We take
N=N_int+N_sb+N_tb+N_d=256+256×4+256+256=1792
as the number of collocation points, which are randomly sampled in four different domains, i.e., the interior spatio-temporal domain, the spatial and temporal boundary domains, and the additional measurement domain. The activation function is tanh(x) and the hyper-parameter is λ=0.01. The number of training epochs is set to be 5×10^4, and the initial learning rates τ_θ, τ_η both start with 0.001 and shrink 10 times every 2×10^4 iterations.
The test sets are chosen by a uniform mesh
𝒯:={(t_k,x_i,y_j): k,i,j=0,1,⋯,49}⊂Ω_T.
Since the noisy level of the measurement data affects the reconstruction accuracy, in this simulation, we test the training performance for various noisy levels. Figure <ref> records the training process, i.e., the training loss, the relative error for the reconstruction of q and the relative error for the recovery of u with respect to the iterations for different noise levels δ=0.1%, 1%, 5%, 10% by the proposed scheme.
After training, we test the reconstruction result on test sets 𝒯.
The distribution of the temperature field u(x,t) also depends on the time t; Figure <ref> shows the time-series relative error of the recovered u with various noise levels after logarithmic re-scaling. As shown in these figures, the training performance deteriorates as the noise level of the measurement data increases.
Figure <ref> shows the exact solution of the potential term. Figure <ref> shows the reconstruction results for q(x) obtained by optimizing the proposed loss function (first line) and the corresponding absolute pointwise errors for various noise levels δ=0.1%,5%,10% (second line).
Meanwhile, Figure <ref> presents the reconstructed solution u (first line) and the corresponding absolute pointwise errors (second line) for measurement data with various noise levels at t=1/7. We can see that the reconstruction accuracy for q deteriorates as the noise level of the measurement data increases, but the performance for u is still satisfactory.
Table <ref> presents the recovery results solved by two schemes:
(I) the proposed frameworks with the loss function (<ref>),
(II) the DGM frameworks with the loss function (<ref>).
We record the generalization error of q, u and Δ u in L^2-error from the noisy input data with δ=0.01. Due to the random sampling of the training data points, the inversion results have some stochasticity.
Thus, we perform Algorithm <ref> with the loss function in the formulation (<ref>) and formulation (<ref>) five times, respectively.
The relative errors (mean and standard deviation) for the recovery of q, u and Δ u are shown in Table <ref>. As observed, optimizing the loss function proposed in this paper leads to more accurate recovery results, especially for the reconstruction of q compared with DGM frameworks. Moreover, although the reconstruction accuracy of u in L^2-error for both two frameworks are relatively close, the accuracy of Δ u in L^2-error for proposed scheme in this paper performs better. This suggests that the proposed frameworks are better able to capture smooth solutions.
Example 2 (two-dimensional experiment):
For equation (<ref>), we set the exact solution u and the domain Ω_T as
u(x,y,t)=texp(x+y), (t,x,y)∈Ω_T=[0,1]×[0,2]^2.
The exact potential q is given as
q(r) =
15(cos r-√(3)/2)+2, 0≤ r ≤π/6,
2, otherwise ,
r(x,y) =√((x-1)^2+(y-1)^2).
The exact measurement will be
u(x,y,1)=φ(x,y)=exp(x+y),
and the noisy data φ^δ is generated by (<ref>).
The network architectures and hyper-parameters such as activation function, balance hyper-parameter λ are all the same as Example 1.
The number of training epochs is set to be 1×10^5, and the initial learning rates τ_θ, τ_η both start with 0.001 and shrink 10 times every 2×10^4 iterations. The test sets are chosen by a uniform mesh as in (<ref>).
In this simulation, we evaluate the training performance under various levels of measurement noise. The training process under Algorithm <ref> is recorded in Figure <ref>, which includes the training loss and the relative errors for the reconstruction of q and u during the training process for different noise levels (δ=0, 1%, 5%, 10%). Figure <ref> displays the exact potential function q, while the approximations q^* obtained under different noise levels (δ=1%, 5%, 10%) are shown in Figure <ref>. We can see that the numerical reconstructions still match the theoretical results even with zero initial condition and nonsmooth exact potential. This means that in numerical reconstructions we may relax the conditions of Assumption <ref> to some extent.
Now, we start to verify the convergence of the iteration in Theorem <ref> with different neural network architectures. In the experiments, for a fixed number of per-layer neurons NN=20, we compute the reconstruction errors for q versus the noise level δ using logarithmic re-scaling with various hidden layers NL=3, 4, 6. The results of these experiments are presented in the left of Figure <ref>. The theoretical estimate
O(δ^1/3) is shown by the black line. Similarly, fixing hidden layer NL=6, the reconstruction errors for q under various per-layer neurons NN=10,15,20 are given in the right of Figure <ref>. From this figure, we see that the error could be bounded by the rate δ^1/3 to some extent, which supports the theoretical analysis in Theorem <ref>.
In order to evaluate the effectiveness of the proposed scheme in terms of hyper-parameters and network structure, a series of experiments are conducted. Specifically, we examine the impact of the balance hyper-parameter λ in (<ref>) and the network structure, including the number of hidden layers and neurons. For a fixed number of hidden layers (NL=3) and a fixed number of neurons per-layer (NN=20), we compute the reconstruction errors (mean and standard deviation) for q and u using various values of λ, such as
λ=10^j, -4≤ j≤ 1. The results of these experiments are presented in Table <ref>, which indicates that the performance of the inverse problem is highly dependent on the balance hyper-parameter λ. Specifically, we find that the relative reconstruction errors are optimized when λ is set to 10^-2. Furthermore, we observe that the reconstruction errors increase significantly as λ exceeds this optimal value. These results suggest that the selection of the balance hyper-parameter is critical to achieving good performance in this inverse problem.
Next we experiment with various combinations of hidden layers and neuron numbers for the inverse problem using Algorithm <ref>. We set the dimension to d=2 and try a total of 16 combinations of hidden layers (NL) and per-layer neuron numbers (NN), with
NL=3,6,9,14, NN=10,15,20,25.
For each combination, we run Algorithm <ref> for 1× 10^5 iterations and record the relative errors (mean and standard deviation) for (q,u) in Table <ref>.
It indicates that deeper (larger NL) and/or wider (larger NN) neural networks tend to yield lower reconstruction errors, although at a higher computational cost. However, we also observe that for a fixed neuron number NN, increasing the number of hidden layers NL too much, for example NL≥ 15, causes the algorithm to fail to converge as the number of iterations increases. This suggests that increasing the number of layers and/or neurons can enhance the representation capacity of neural networks. But it also introduces more parameters to train, and may lead to longer training times and potential overfitting of the representation.
Example 3 (three-dimensional experiment):
We also take the following 3-dimensional experiment. We set the exact solution and the domain of equation (<ref>) as
u(x,y,t)=texp(x+y+z), (t,x,y,z)∈Ω_T=[0,1]^4.
The exact potential q is given as
q(x,y,z)=x+y+z.
We also employ fully-connected neural networks with NL=4, NN=20 for both u_θ and q_η. The number of training points are
N=N_int+N_sb+N_tb+N_d=256+256×6+256+256=2304,
which are randomly sampled from four different training sets.
The other network architectures and hyper-parameters such as activation function, balance hyper-parameter λ, number of training epochs, and initial learning rate are all the same as Example 1.
The test sets are chosen by a uniform mesh
𝒯:={(t_k,x_i,y_j,z_l): k,i,j,l=0,1,⋯,49}⊂Ω_T.
Figure <ref> shows the exact potential function q and the relative errors versus iterations during training process for different noise scales δ=0, 1%, 5%, 10%. Figure <ref> presents the potential functions q_η recovered from different noise levels and the corresponding point by point absolute errors on test sets. The inversion results are satisfactory and reasonable overall.
Finally, we conduct experiments to evaluate the robustness of the proposed scheme in terms of network structure (number of hidden layers and per-layer neurons).
More specifically, we run Algorithm <ref> with per-layer neuron numbers NN=5,10,20,25 for fixed hidden layers NL=6, and with hidden layers NL=4,6,8,10 for fixed per-layer neuron number NN=25, respectively. The reconstruction errors are presented in Figure <ref>. It also appears that larger NL and/or larger NN yield lower reconstruction errors. In this example, for fixed hidden layers NL=6, we test per-layer neuron numbers NN≥ 30 and find that the reconstruction deteriorates: the relative error for q with NL=6, NN=30 is larger than in the cases NN=10,20,25 as the iterations increase. Therefore, more layers and/or neurons, with many more parameters to train, yield longer training times and may result in overfitting of the reconstruction.
§ CONCLUDING REMARKS.
In this work, a deep neural network-based reconstruction scheme has been proposed to solve an inverse potential problem in the parabolic equation. The proposed method has shown superior performance in high-dimensional space.
We prove the uniqueness of the inverse potential problem. A new loss function has been introduced, which includes regularization terms that depend on the derivatives of the residuals for both the partial differential equation and the measurement data. These regularization terms aim to address the ill-posedness of the inverse problem and enhance the regularity of the solution. Additionally, the mollification method has been employed to improve the regularity of the noisy data, as it reduces the perturbation errors caused by numerical differentiation of the noisy data. Generalization estimates based on the conditional stability of linear inverse source problems and the mollification error estimate on noisy data have been established, which provide a measure of the stability and accuracy of the proposed method in solving the inverse potential problem. Numerical experiments have been conducted to evaluate the performance of the proposed method, and they indicate the efficiency of the approach developed in this work.
§ ACKNOWLEDGMENTS
Mengmeng Zhang is supported by Foundation of Hebei University of Technology (Grant No.282022550) and Foundation of Tianjin Education Commission Research Program(Grant No.2022KJ102). Zhidong Zhang is supported by National Natural Science Foundation of China (Grant No. 12101627).
CIP-stabilized Virtual Elements for diffusion-convection-reaction problems

L. Beirão da Veiga^1,2 ([email protected]), C. Lovadina^3 ([email protected]), M. Trezzi^4 ([email protected])

^1 Dipartimento di Matematica e Applicazioni, Università degli Studi di Milano Bicocca, Via Roberto Cozzi 55 - 20125 Milano, Italy
^2 IMATI-CNR, Via Adolfo Ferrata 5 - 27100 Pavia, Italy
^3 Dipartimento di Matematica “F. Enriques”, Università degli Studi di Milano, Via Cesare Saldini 50 - 20133 Milano, Italy
^4 Dipartimento di Matematica “F. Casorati”, Università di Pavia, Via Adolfo Ferrata 5 - 27100 Pavia, Italy

August 12, 2023
The Virtual Element Method for diffusion-convection-reaction problems is considered. In order to design a quasi-robust scheme also in the convection-dominated regime, a Continuous Interior Penalty approach is employed. Due to the presence of polynomial projection operators, typical of the Virtual Element Method, the stability and the error analysis requires particular care, especially in treating the advective term. Some numerical tests are presented to support the theoretical results.
§ INTRODUCTION
The Virtual Element Method (VEM) is a fairly recent methodology for the discretization of problems in partial differential equations <cit.>, which can be interpreted as a generalization of classical Finite Elements (FEM) to meshes of much more general shape. Since its birth, the VEM has enjoyed a large success and been applied to a very wide range of problems; we here limit ourselves in mentioning the recent special issue <cit.> and the review paper <cit.>.
The focus of the present article is on the classical diffusion-reaction-advection scalar problem.
Under suitable assumptions on the data, this is a standard “textbook” elliptic problem without any particular difficulty. On the other hand, it is well known that, whenever the advective term dominates (in particular over the diffusive one) a classical FEM approach will lead to very large errors and oscillations in the discrete solution, unless an extremely fine mesh is adopted. There is a large FEM literature on the subject, offering a list of possible stabilized methods which are robust in this respect. From the theoretical standpoint, a method is typically called quasi-robust if, assuming sufficiently regular solution and data, it yields error estimates which are uniform with respect to the diffusion parameter in a norm including also some direct control on the convective term. Some well known approaches are upwind Discontinuous Galerkin schemes <cit.>, Streamline Upwind Petrov-Galerkin and variants <cit.>, Continuous Interior Penalty (CIP) <cit.>, Local Projection Stabilization <cit.>. Finally, one must note that the diffusion-reaction-advection problem serves also as a model for more complex problems in fluid mechanics, such as the Navier-Stokes equation.
The Virtual Element Method is particularly suitable in the context of advection dominated problems due to the flexibility of the mesh construction and its handling. For instance, VEM allows more local refinement procedures and an easy gluing of fine meshes with coarser ones (this latter feature is very useful in the presence of layers, for example). In addition, VEM offers a more efficient discretization of complex domains, which is greatly useful in applications such as reservoir <cit.> and fracture-network simulations <cit.>, where diffusion-reaction-advection equations play a crucial role. Unfortunately, due to the presence of projection operators which may alter the structure of the convective term, it is not easy to devise and analyze quasi-robust VEM schemes. Exceptions are the SUPG and LPS approaches detailed in <cit.> and <cit.>, respectively (regarding other polygonal technologies, see for instance <cit.>).
Since three of the most popular stabilization techniques, namely SUPG, LPS and CIP, have their own strongly defined set of assets/drawbacks, broadening the available approaches with CIP schemes is important for the VEM technology.
The purpose of the present contribution is exactly to fill this gap and develop a CIP (Continuous Interior Penalty) stabilized VEM method, providing also a theoretical error analysis. Of course, our method combines VEM stabilization terms (to deal with polygonal meshes) and CIP-like terms (to deal with the advection-dominated regime).
Furthermore, it is worth noticing that the subtle nature of CIP, which is a “minimal stabilization” in that it adds the minimal positive term guaranteeing control on piecewise polynomial convection, makes the analysis in the VEM setting particularly interesting and challenging. Assuming, as happens in most publications on the subject, a uniformly positive reaction term, we are able to develop quasi-robust error estimates for our method. In the absence of reaction, we are able to show some improved error estimates (over a non-stabilized scheme), but only under a piecewise polynomial convection data assumption. The paper ends with a set of numerical tests showing the actual robustness of the method and comparing it with the non-stabilized approach.
The paper is organized as follows. After presenting the continuous and discrete problems in Section <ref>, we develop the stability and convergence analysis in Section <ref>. Finally, numerical tests are shown in Section <ref>.
Throughout the paper, we use standard notations for Sobolev norms and semi-norms. Moreover, C and C_i will denote quantities, independent of the meshsize h, which may vary at each occurrence. We will make extensive use of the notation a ≲ b (a and b being non-negative quantities) to mean a≤ C b.
§ THE CONTINUOUS AND THE DISCRETE PROBLEMS
In this Section we deal with the continuous problem and its discretization by means of the Virtual Element Method.
§.§ Continuous Problem
We consider the following steady advection-diffusion-reaction problem:
{
- ϵΔ u + β·∇ u + σ u = f in Ω ,
u_|Γ = 0 ,
.
where Ω⊂ℝ^2 is a polygonal domain with boundary Γ. Above, ϵ > 0 is the diffusion coefficient (assumed to be constant), while
β∈ [W^1,∞(Ω)]^2 is the advection field, such that div β = 0. Moreover,
σ>0 is the reaction constant (except for Section <ref>, where σ=0); we remark that we assume σ to be a positive constant since the extension to the case 0<σ∈ L^∞(Ω), with σ^-1∈ L^∞(Ω), is trivial.
Finally, f ∈ L^2(Ω) is the source term.
The domain boundary will be split into two non-overlapping regions:
Γ_in{∈Γ | (() ·) < 0 } and Γ_out{∈Γ | (() ·) ≥ 0 } ,
where is the outward unit normal vector to the boundary.
A variational formulation of problem (<ref>) reads as follows:
{ find u ∈ V := H^1_0(Ω) such that:
ϵ a(u,v) + b^skew(u,v) + σ c(u,v) = ∫_Ω f v dΩ for all v ∈ V.
.
The bilinear forms
a(·, ·) : V × V →ℝ,
b^skew(·, ·) : V × V →ℝ
and
c(·, ·) : V × V →ℝ
are defined as
a(u, v) := ∫_Ω∇ u ·∇ v dΩ for all u, v ∈ V,
b^skew(u,v) := 1/2(b(u, v) - b(v, u)) with b(u, v) := ∫_Ω (β·∇ u) v dΩ for all u, v ∈ V,
c(u, v) := ∫_Ω u v dΩ for all u, v ∈ V.
It is well known that when ϵ is small with respect to β and/or σ, standard discretizations of (<ref>) typically return unsatisfactory numerical solutions with spurious oscillations.
To overcome these difficulties, several strategies are available in the literature.
In this paper we take advantage of the so-called Continuous Interior Penalty (CIP) strategy, introduced in
<cit.> in a Finite Element framework.
From now on, we assume that the material parameters are scaled so that we have
β_[L^∞(Ω)]^2 = 1 .
§.§ Preliminary notations and results
We start considering a sequence Ω_h_h of tessellations of Ω into non-overlapping polygons E.
We denote with e a general edge of E, while
|E| and h_E are the area and the diameter of E, respectively.
Furthermore, ^E is the unit outward normal vector to the boundary ∂ E.
As usual, h := sup_E∈Ω_h h_E denotes the mesh parameter.
We suppose that Ω_h_h fulfils the following assumption:
(A1) Mesh assumption.
There exists a positive constant ρ such that for any E ∈Ω_h_h
* E is star-shaped with respect to a ball B_E of radius ≥ ρ h_E;
* any edge e of E has length ≥ ρ h_E;
* the mesh is quasi-uniform, any polygon has diameter h_E ≥ρ h.
We now introduce some basic tools and notations useful in the construction and the theoretical analysis of Virtual Element Methods.
Using standard VEM notations, for n ∈, m∈ and p∈ [1, +∞], and for any E ∈Ω_h,
let us introduce the spaces:
* _n(E): the set of polynomials on E of degree ≤ n (with _-1(E)={ 0 }),
* _n(Ω_h) := {q ∈ L^2(Ω) s.t q|_E ∈_n(E) for all E ∈Ω_h},
* W^m_p(Ω_h) := {v ∈ L^2(Ω) s.t v|_E ∈ W^m_p(E) for all E ∈Ω_h} equipped with the broken norm and seminorm
v^p_W^m_p(Ω_h) := ∑_E ∈Ω_hv^p_W^m_p(E) ,
|v|^p_W^m_p(Ω_h) := ∑_E ∈Ω_h |v|^p_W^m_p(E) ,
if 1 ≤ p < ∞,
v_W^m_p(Ω_h) := max_E ∈Ω_hv_W^m_p(E) ,
|v|_W^m_p(Ω_h) := max_E ∈Ω_h |v|_W^m_p(E) ,
if p = ∞,
and the following polynomial projections:
* the L^2-projection Π_n^0, E L^2(E) →_n(E), given by
∫_Eq_n (v - Π_n^0, E v) d E = 0 for all v ∈ L^2(E) and q_n ∈_n(E),
with obvious extension for vector functions Π^0, E_n [L^2(E)]^2 → [_n(E)]^2;
* the H^1-seminorm projection Π_n^∇,E H^1(E) →_n(E), defined by
{ ∫_E q_n · ( v - Π_n^∇,E v) d E = 0 for all v ∈ H^1(E) and q_n ∈_n(E),
∫_∂ E(v - Π_n^∇, E v) ds= 0 ,
.
with global counterparts
Π_n^0 L^2(Ω) →_n(Ω_h) and
Π_n^∇ H^1(Ω_h) →_n(Ω_h)
defined by
(Π_n^0 v)|_E = Π_n^0,E v ,
(Π_n^∇ v)|_E = Π_n^∇,E v ,
for all E ∈Ω_h.
We finally mention one classical result for polynomials on star-shaped domains (see for instance <cit.>).
Under the assumption (A1), for any E ∈Ω_h and for any smooth enough function ϕ defined on E, it holds
ϕ - Π^0,E_n ϕ_W^m_p(E)≲ h_E^s-m |ϕ|_W^s_p(E) s,m ∈, m ≤ s ≤ n+1, p=1, …, ∞,
ϕ - Π^∇,E_n ϕ_m,E≲ h_E^s-m |ϕ|_s,E s,m ∈, m ≤ s ≤ n+1, s ≥ 1,
∇ϕ - Π^0,E_n∇ϕ_m,E≲ h_E^s-1-m |ϕ|_s,E s,m ∈, m+1 ≤ s ≤ n+1, s ≥ 1.
§.§ Virtual Element spaces
Given a polygon E and a positive integer k, we define the local “enhanced” virtual element space as
V_h(E) =
{
v_h ∈ H^1(E) ∩ C^0(∂ E) s.t.
v_h|_e ∈_k(e) for all e ∈∂ E,
Δ v_h ∈_k(E) ,
(v_h - Π^∇,E_k v_h, p_k )_E = 0 for all p_k ∈_k(E) / _k-2(E) } .
For the finite dimensional space V_h(E), one can check that the following linear operators are a set of DoFs:
* 𝐃_V: the pointwise values of v_h at the vertexes of the polygon E,
* 𝐃_E: the values of v_h at k-1 internal points of a Gauss-Lobatto quadrature for every edge e ∈∂ E,
* 𝐃_P: the moments 1/| E |∫_E v_h m_αβ d E, ∀ m_αβ∈ℳ_k-2(E), where ℳ_k-2(E) is the set of monomials defined as
ℳ_k-2(E) := {
m_αβ := ( (x - x_E)/h_E)^α( (y - y_E)/h_E)^β, α,β∈ℕ , α + β≤ k - 2
}.
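For reference, the scaled monomial basis can be generated mechanically; the small sketch below (function name ours) simply enumerates the exponents and evaluates the monomials.

```python
import numpy as np

def monomial_basis(k, xE, yE, hE):
    """Scaled monomials m_ab(x, y) = ((x-xE)/hE)**a * ((y-yE)/hE)**b, a+b <= k."""
    exps = [(a, s - a) for s in range(k + 1) for a in range(s + 1)]
    def evaluate(x, y):
        X = (np.asarray(x) - xE) / hE
        Y = (np.asarray(y) - yE) / hE
        return np.stack([X**a * Y**b for a, b in exps], axis=-1)
    return exps, evaluate
```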
Thanks to these DoFs, it is possible to compute the following projections:
Π^∇,E_k : V_h(E) →_k(E), Π^0,E_k : V_h(E) →_k(E), Π^0,E_k∇ : V_h(E) → [_k(E)]^2 .
Gluing together the local spaces, we define the global virtual element space as
V_h(Ω_h) = {v_h ∈ V s.t. v_h|_E ∈ V_h(E) for all E ∈Ω_h} ,
with the associated set of degrees of freedom:
* 𝐃_V: the values of v_h at the vertices;
* 𝐃_E: the values of v_h at k-1 points on each edge e;
* 𝐃_P: the moments up to order k-2 for each element E∈Ω_h.
We finally recall from <cit.> the optimal approximation property for the space V_h(Ω_h).
Under the assumption (A1), for any v ∈ V ∩ H^s+1(Ω_h) there exists v_ℐ∈ V_h(Ω_h) such that for all E ∈Ω_h it holds
v - v_ℐ_0,E + h_E ∇ (v - v_ℐ)_0,E≲ h_E^s+1 |v|_s+1,E ,
where 0 < s ≤ k.
§.§ Virtual Element Forms and the Discrete Problem
We start by observing that the bilinear forms
a(·,·), b^skew(·,·) and c(·,·), see (<ref>), (<ref>) and (<ref>),
can be obviously decomposed into local contributions
a(u, v) := ∑_E ∈Ω_h a^E(u, v) ,
b^skew(u, v) := ∑_E ∈Ω_h b^skew,E(u, v) ,
c(u, v) := ∑_E ∈Ω_h c^E(u, v) .
Using the DoFs introduced in Section <ref>, we construct a computable counterpart of the above-mentioned forms, following the standard VEM procedure.
Hence, we define the bilinear form a_h^E(·, ·) : V_h(E) × V_h(E) →ℝ as follows:
a_h^E(u_h, v_h) :=
∫_E Π^0,E_k∇ u_h ·Π^0,E_k∇ v_h dE +
𝒮^E((I - Π^∇,E_k) u_h, (I - Π^∇,E_k) v_h) .
Above, the stabilizing bilinear form 𝒮^E : V_h(E) × V_h(E) →ℝ is required to be computable and to satisfy
α_*|v_h|_1,E^2 ≤𝒮^E(v_h, v_h) ≤α^* |v_h|_1,E^2 ,
for all v_h ∈ Ker(Π^∇,E_k) ,
for two positive uniform constants α_* and α^*.
In what follows, we choose the classical dofi-dofi stabilization (cf. <cit.>, for instance), which is a common choice for the VEM approach.
Following <cit.>, we replace the bilinear form b^E(·, ·) : V_h(E) × V_h(E) →ℝ
with b_h^E(·, ·), defined as
b_h^E(u_h, v_h) := ∫_E β·∇Π^0,E_k u_h Π^0,E_k v_h dE +
∫_∂ E (β· n^E) (I - Π^0,E_k) u_h v_h ds .
In the numerical scheme, we will employ the skew-symmetrized form (cf. (<ref>)):
b^skew,E_h(u_h , v_h) = 1/2( b_h^E(u_h, v_h) - b_h^E(v_h, u_h) ) .
The reaction term is locally replaced by c_h(· , ·) : V_h(E) × V_h(E) →ℝ, defined as
c_h^E(u_h, v_h) := ∫_E Π^0,E_k u_h Π^0,E_k v_h dE +
|E| 𝒮^E((I - Π^0,E_k) u_h, (I - Π^0,E_k) v_h) .
Following <cit.>, we now introduce a VEM version of the local CIP-stabilization form,
defined as
J_h^E(u_h,v_h) :=
∑_e ⊂∂ Eγ_E/2∫_e h_e^2 [∇ u_h] · [∇ v_h] ds
+
γ_E h_E 𝒮^E_J((I - Π^∇,E_k) u_h, (I - Π^∇,E_k) v_h) ,
where [∇·] denotes the gradient jump across e (if e is a boundary edge we set [∇·] = 0), and 𝒮^E_J denotes a stabilizing form as above. The parameter γ_E is defined as
γ_E := β· n^e _L^∞(∂ E) ,
where n^e is one of the two normal vectors to e. Since we will work with the assumption β_[L^∞(Ω)]^2 = 1, we will treat γ_E as a constant.
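To see what the jump term measures in the simplest situation, consider the FEM-like case k=1 on two triangles sharing an edge e, where gradients are elementwise constant. The following small sketch (purely illustrative, with our own function name) evaluates the contribution γ_E/2 ∫_e h_e^2 [∇ u_h]·[∇ v_h] ds, which in this case reduces to a product of constant jumps; for genuine virtual element functions the non-polynomial part of the gradient is handled through the projections and the stabilization term 𝒮^E_J.

```python
import numpy as np

def cip_edge_term(grad_u_E, grad_u_Ep, grad_v_E, grad_v_Ep, h_e, gamma_E=1.0):
    """gamma_E/2 * h_e^2 * int_e [grad u].[grad v] ds for elementwise constant
    gradients: the integrand is constant on e, so int_e ds = h_e."""
    jump_u = np.asarray(grad_u_E) - np.asarray(grad_u_Ep)
    jump_v = np.asarray(grad_v_E) - np.asarray(grad_v_Ep)
    return 0.5 * gamma_E * h_e**2 * h_e * float(jump_u @ jump_v)
```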
Moreover, we impose the Dirichlet boundary conditions by using a Nitsche-type technique. To this aim, we define the local forms:
𝒩_h^E(u_h,v_h) :=
- ϵ⟨∇ u_h · n^E, v_h ⟩_Γ_E
+ ϵ/(δ h_E)⟨ u_h,v_h ⟩_Γ_E
+ 1/2⟨|β· n| u_h, v_h ⟩_Γ_E ,
where Γ_E = ∂ E ∩Γ, δ is a positive parameter to be chosen and ⟨· , ·⟩ is the L^2(Γ_E)-scalar product.
The standard definition of Nitsche's method also considers a term
- ϵ⟨ u_h, ∇ v_h · n^E⟩_Γ_E .
Since we are not interested in achieving symmetry, and in order to simplify the analysis of the method, we drop this term. Another difference with the standard formulation of Nitsche's method is the convective term. Usually, it is locally defined as
- ⟨ (β· n^E) u_h, v_h ⟩_Γ_in∩Γ_E .
By integration by parts, in the definition of
b^skew,E_h(·,·), we should consider also
1/2⟨ (β· n^E) u_h,v_h ⟩_Γ_E .
Summing the last two terms, we recover our definition of 𝒩_h^E(·, ·).
Summing all of these contributions, we construct the discrete bilinear form 𝒜_cip^E : V_h(E) × V_h(E) →ℝ as
𝒜_cip^E(u_h, v_h) = ϵ a_h^E(u_h , v_h) + b^skew,E_h(u_h , v_h) + σ c_h^E(u_h , v_h) + 𝒩_h^E(u_h,v_h) + J_h^E(u_h , v_h) ,
and summing over all the polygons we obtain the global versions of the bilinear forms
a_h(u_h, v_h) := ∑_E ∈Ω_h a_h^E(u_h, v_h) ,
b^skew_h(u_h, v_h) := ∑_E ∈Ω_h b^skew,E_h(u_h, v_h) ,
c_h(u_h, v_h) := ∑_E ∈Ω_h c_h^E(u_h, v_h) ,
J_h(u_h, v_h) := ∑_E ∈Ω_h J_h^E(u_h, v_h) ,
𝒩_h(u_h, v_h) := ∑_E ∈Ω_h𝒩_h^E(u_h, v_h) ,
and
𝒜_cip(u_h, v_h) := ∑_E ∈Ω_h𝒜_cip^E(u_h, v_h) .
The discrete local and global load terms (here ℱ_h^E : V_h(E) →ℝ) are
ℱ_h^E(v_h) := ∫_E f Π^0,E_k v_h dE ,
ℱ_h(v_h) := ∑_E ∈Ω_hℱ^E_h(v_h) .
Finally, the discrete problem reads as:
{ find u_h ∈ V_h(Ω_h) s.t.
𝒜_cip(u_h, v_h) = ℱ_h(v_h) for all v_h ∈ V_h(Ω_h). .
§.§ Consistency of the method
Due to the polynomial projections entering in (<ref>), it is easily seen that, as usual for the VEMs, the solution u of the continuous problem (<ref>) does not solve the discrete scheme (<ref>) (thus, strong consistency does not hold). However, if u is more regular, say u∈ H^2(Ω)∩ H^1_0(Ω), then it holds:
𝒜̃_cip(u , v_h) = ℱ̃(v_h) for all v_h ∈ V_h(Ω_h) .
where
𝒜̃_cip(u,v_h) :=
∑_E ∈Ω_h𝒜̃_cip^E(u, v_h) , ℱ̃(v_h) := ∑_E ∈Ω_hℱ̃^E(v_h) ,
and the local forms are defined as follows.
∙ 𝒜̃_cip^E(u, v_h) :=
ϵ a^E(u, v_h)
+ b^skew,E(u, v_h)
+ σ c^E(u, v_h)
+ 𝒩̃_h^E (u, v_h)
+ J̃_h^E(u, v_h) ,
with
𝒩̃_h^E(u,v_h) :=
- ϵ⟨∇ u · n^E, v_h ⟩_Γ_E
+ ϵ/(δ h_E)⟨ u,v_h ⟩_Γ_E
+ 1/2⟨|β· n| u, v_h ⟩_Γ_E ,
where Γ_E = ∂ E ∩Γ, and
J̃_h^E(u, v_h) :=
1/2∑_e ⊂∂ Eγ_E∫_e h_e^2 [∇ u] · [∇ v_h] ds
=
1/2∑_e ⊂∂ Eγ_E∫_e h_e^2 [∇ u · n^e] [∇ v_h · n^e] ds ;
∙ ℱ̃^E(v_h) := ∫_E f v_h dE .
§ STABILITY AND CONVERGENCE ANALYSIS
§.§ Preliminary results
Before proving the stability of the discrete problem, we mention some preliminary results that are useful for our purposes. The first one is a standard inverse estimate for the virtual element functions.
Under the assumption (A1), for any E ∈Ω_h, there exists a uniform positive constant such that
| v_h |_1,E≲ h_E^-1 v_h _0,E for all v_h ∈ V_h(Ω_h) .
We also recall, see <cit.>, the following inverse trace inequality.
Under the assumption (A1), for any E ∈Ω_h and for every v_h ∈ V_h(E) such that
Π^0,E_k-2 v_h ≡ 0,
it holds that
v_h _0,E≲
h_E^1/2 v_h _0, ∂ E .
We now construct a VEM version of the Oswald interpolation operator, see for instance <cit.> for the FEM framework.
We consider a point ν associated to a DoF in 𝐃_V or 𝐃_E and
we define E_ν := ⋃{ E ∈Ω_h s.t. ν∈ E}, i.e. the union of all the elements that contain the point ν.
The quasi-interpolation operator π for a sufficiently regular function v is defined as
π v = ∑_ν∈𝐃_V∪𝐃_Eλ_ν (v) φ_ν + ∑_E ∈Ω_h∑_α +β≤ k-2μ_αβ^E(v) φ_αβ^E ,
where {φ_ν}_ν∈𝐃_V∪𝐃_E are the canonical basis functions associated to the degrees of freedom located at the points {ν}_ν∈𝐃_V∪𝐃_E and the coefficients {λ_ν (v)} are defined as
λ_ν (v) := 1/| E_ν|∑_ E ⊆ E_ν v^E(ν) |E| .
Above, and from now on in this section, a superscript E for a function denotes the restriction of that function to the element E.
Similarly, above {φ_αβ^E} denote the basis functions associated to the degrees of freedom 𝐃_P, and {μ_αβ^E(v)} are the associated coefficients corresponding to v, defined as (cf. (<ref>)):
μ_αβ^E(v) := 1/| E |∫_E v m_αβ d E .
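In plain terms, λ_ν(v) is an area-weighted average of the elemental traces of v at the point ν; a minimal sketch (function name ours) reads:

```python
def oswald_lambda(elem_values, elem_areas):
    """lambda_nu(v) = (1/|E_nu|) * sum_{E in E_nu} v^E(nu) * |E|, given the
    traces v^E(nu) and the areas |E| of the elements meeting the point nu."""
    return sum(v * a for v, a in zip(elem_values, elem_areas)) / sum(elem_areas)
```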
We are ready to prove the following estimate concerning the interpolation error for piecewise polynomial functions. A FEM version of this result can be found in <cit.>.
Under assumption (A1), for every E ∈Ω_h it holds
(I - π) p _0,E≲ h^1/2∑_e ∈ℱ_E [ p ] _0,e for all p ∈_k(Ω_h) ,
where ℱ_E := { e ∈ℰ s.t. e ∩∂ E ≠∅} is the set of the edges with at least one endpoint which is a vertex of E.
We introduce the difference
δ := (I - π ) p .
We restrict our attention to an element E∈Ω_h, and consider δ^E.
Since the DoFs in 𝐃_P belong to one element only, we observe that for δ^E only the DoFs arising from 𝐃_V and 𝐃_E (i.e. the ones on the mesh skeleton) are involved.
Hence, noting that δ^E ∈ V_h(E) and Π^0,E_k-2δ^E =0, we can apply Lemma <ref>:
δ^E _0,E≲ h^1/2δ^E _0,∂ E≲ h δ^E _∞,∂ E .
Since the basis functions associated to 𝐃_V and 𝐃_E are scaled in such a way that their L^∞-norm is equal to 1, we have that
h δ^E _∞,∂ E≲ h
max_ν∈𝐃_V∪𝐃_E | δ^E (ν) | .
Exploiting the definition of the Oswald interpolant, we observe that if ν∈𝐃_V is not on the boundary, we have that
δ^E (ν) =
p^E (ν) - (π p)^E(ν) =
1/| E ∪ E' |(
| E ∪ E' | p^E(ν) - | E | p^E(ν) - |E'| p^E'(ν)
)
=
c (p^E(ν) - p^E'(ν)) = c [p](ν) ,
where E' is the second element that shares the node ν.
Thanks to the mesh assumptions (A1), all the values
c
=
(| E ∪ E' | - | E |)/| E ∪ E' |
=
| E' |/| E ∪ E' |≈ 1/2 >0
are uniformly bounded from below and they do not depend on h; hence it holds
max_ν∈𝐃_V | δ^E (ν) | ≲max_ν∈𝐃_V | [p](ν) | .
If ν∈𝐃_E, a similar computation allows to bound | δ^E (ν) | by means of the jumps of p at the nodes on the edges containing ν (this set is denoted by 𝒩_ν here below):
| δ^E (ν) | ≲max_ν' ∈𝒩_ν | [p](ν') | .
Combining (<ref>) and (<ref>), we get
h max_ν∈𝐃_V∪𝐃_E | δ^E(ν) | ≲ h max_ν∈ e , e ∈ℱ_E | [p](ν) | ≲ h || [p] ||_∞,ℰ(E) ,
where ℰ(E) ⋃_e∈ℱ_E e. Since an inverse estimate gives
h || [p] ||_∞,ℰ(E)≲ h^1/2 || [p] ||_0,ℰ(E)≲
h^1/2∑_e ∈ℱ_E [p] _0,e ,
from (<ref>), (<ref>), (<ref>) and (<ref>) we obtain
(I - π) p _0,E = δ^E _0,E≲ h^1/2∑_e ∈ℱ_E [p] _0,e .
Under assumption (A1), for every E ∈Ω_h it holds
π p _0,E≲ p _0,𝒟(E) for all p ∈_k(Ω_h) ,
where 𝒟(E) := ⋃{ K ∈Ω_h s.t. E̅∩K̅≠∅}.
Using the triangle inequality, we obtain
π p _0,E≤ p _0,E + (I-π) p _0,E .
Thanks to Proposition <ref>, we control the second term with the jumps
(I-π) p _0,E≲
h^1/2∑_e ∈ℱ_E [ p ] _0,e
.
Thanks to the polynomial trace inequality, we conclude
(I-π) p _0,E≲ p _0,𝒟 (E) ,
hence
π p _0,E≤ p _0,E + (I-π) p _0,E≲ p _0,𝒟(E) .
§.§ Stability of the discrete problem
We start the theoretical analysis of the proposed method by introducing the local VEM-CIP norm
v_h^2_cip , E
:=
ϵ∇ v_h ^2_0,E +
h β·∇Π^0,E_k v_h ^2_0,E +
σ v_h ^2_0,E +
‖ξ (ϵ, β) v_h ‖^2_0,Γ_E +
J_h^E(v_h,v_h) ,
where
ξ (ϵ, β) :=
(
ϵ/(δ h) +
1/2|β· n|)^1/2 ,
with global counterpart
v_h^2_cip := ∑_E ∈Ω_hv_h^2_cip, E .
The following two lemmas will be useful to prove the stability of the method.
Under assumptions (A1), given v_h∈ V_h(Ω_h), it holds
𝒜_cip(v_h, v_h) ≳ϵ∇ v_h _0^2
+
J_h(v_h, v_h)
+
σ v_h _0^2
+
‖ξ (ϵ, β) v_h ‖^2_0,Γ .
We proceed locally, on each E∈Ω_h.
Thanks to the skew-symmetry property of
b_h^skew,E(·,·), testing the quadratic form
𝒜_cip^E(·,·)
with v_h, we obtain
-ϵ⟨∇ v_h · n^E, v_h ⟩_Γ_E
+
ϵ∇ v_h _0,E^2
+
J^E_h(v_h, v_h)
+
σ v_h _0,E^2
+
‖ξ (ϵ, β) v_h ‖^2_0,Γ_E≲𝒜_cip^E(v_h, v_h) .
We now handle the non-symmetric first term in (<ref>). Thanks to the Cauchy-Schwarz inequality and Young's inequality with a positive constant α to be chosen, we have that
ϵ‖∇ v_h ‖^2_0,E
-
ϵ⟨∇ v_h · n^E, v_h ⟩_Γ_E
+
ϵ/(δ h)‖ v_h ‖_0,Γ_E^2
≥ϵ‖∇ v_h ‖^2_0,E
-
(h ϵ/(2 α))‖∇ v_h · n^E‖_0,Γ_E^2
+
( 1/δ - α/2) (ϵ/h)‖ v_h ‖^2_0,Γ_E .
Using the polynomial trace inequality, under the assumptions (A1), we have that
h ‖∇ v_h · n^E‖_0,Γ_E^2
≤
C_t ‖∇ v_h ‖^2_0,E ,
for a uniform positive constant C_t. Hence, if we set α = C_t and 0 < δ < 2/C_t, we obtain
ϵ‖∇ v_h ‖^2_0,E
-
ϵ⟨∇ v_h · n^E, v_h ⟩_Γ_E
+
ϵ/(δ h)‖ v_h ‖_0,Γ_E^2
≳ϵ‖∇ v_h ‖^2_0,E +
ϵ/(δ h)‖ v_h ‖^2_0,Γ_E .
Inserting this in (<ref>), we obtain
ϵ∇ v_h _0,E^2
+
J^E_h(v_h, v_h)
+
σ v_h _0,E^2
+
‖ξ (ϵ, β) v_h ‖^2_0,Γ_E≲𝒜_cip^E(v_h, v_h) .
Summing over all the elements E ∈Ω_h, we get the control of the symmetric terms in
·_cip:
ϵ∇ v_h _0^2
+
J_h(v_h, v_h)
+
σ v_h _0^2
+
‖ξ (ϵ, β) v_h ‖^2_0,Γ≲𝒜_cip(v_h, v_h) .
Given v_h∈ V_h(Ω_h), let us set
w_h := h π( β_h ·∇Π^0_k v_h ) ,
where β_h is the L^2-projection of β onto the space of piecewise linear functions [_1(Ω_h)]^2. Then, under assumptions (A1), if ϵ < h it holds
𝒜_cip(v_h, w_h)
≥
C_1 h β·∇Π^0_k v_h ^2_0,Ω
-C_2 𝒜_cip(v_h, v_h) .
Thanks to Lemma <ref>, we first notice that
π (β_h ·∇Π^0_k v_h) _0,E≲β_h ·∇Π^0_k v_h _0,𝒟(E) ,
an estimate which will be frequently used in the sequel.
Recalling (<ref>), we locally have
𝒜_cip^E(v_h, w_h)
=
ϵ a_h^E (v_h, w_h)
+ J_h^E (v_h, w_h)
+
σ c_h^E(v_h, w_h)
+
𝒩^E_h(v_h, w_h)
+
b^skew,E_h(v_h, w_h)
= T_1 + T_2 + T_3 + T_4 + T_5 .
We consider each of the five terms in this equation.
Estimate for (𝐓_1).
Using the Cauchy-Schwarz inequality, Lemma <ref>, estimate (<ref>) and recalling that ϵ < h, we get
ϵ a_h^E (v_h, w_h)
≥
- ϵ a_h^E(v_h, v_h)^1/2 a_h^E(w_h, w_h)^1/2
≳
-ϵ^1/2∇ v_h _0,E ϵ^1/2 |w_h|_1,E
≳
-ϵ^1/2∇ v_h _0,E ϵ^1/2 h^-1w_h_0,E
≳
-ϵ^1/2∇ v_h _0,E h^1/2β_h ·∇Π^0_k v_h _0,𝒟(E) .
Estimate for (𝐓_2).
For the jump operator
J_h^E(·, ·),
we use again the Cauchy-Schwarz inequality:
J_h^E(v_h, w_h)
≥
-J_h^E(v_h, v_h)^1/2 J_h^E(w_h, w_h)^1/2 .
Thanks to the trace inequality for polynomials, Lemma <ref> and estimate (<ref>), we obtain (recall w_h = h π( β_h ·∇Π^0_k v_h )):
J_h^E(w_h, w_h)
=
γ_E/2∑_e ⊂∂ E∫_e h_e^2 [∇ w_h]^2 ds
+
γ_E h_E 𝒮^E_J ( ( I - Π^∇,E_k) w_h, ( I - Π^∇,E_k) w_h )
≲
h ∇Π^0_k w_h _0,𝒟(E)^2
+ h_E |w_h|_1,E^2
≲
h^-1Π^0_k w_h _0,𝒟(E)^2 + h^-1w_h_0,E^2
≲
h^-1w_h_0,𝒟(E)^2
≲
h β_h ·∇Π^0_k v_h _0,𝒟(𝒟(E))^2 ,
where 𝒟(𝒟(E)) := ∪_E'⊆𝒟(E)𝒟(E').
Therefore, it holds
J_h^E(v_h, w_h)
≳
-J_h^E(v_h, v_h)^1/2 h^1/2 β_h ·∇Π^0_k v_h _0,𝒟(𝒟(E)) .
Estimate for (𝐓_3).
Using a similar procedure, we control the bilinear form
c_h(·,·)
in this way:
σ c_h^E(v_h, w_h)
≳
- σ v_h _0,E w_h _0,E
≳
- σ v_h _0,E h^1/2β_h ·∇Π^0_k v_h _0,𝒟(E) ,
where we used h^1/2≲ 1 to simplify later developments.
Estimate for (𝐓_4).
For the Nitsche term, we have that
𝒩^E_h(v_h,w_h) =
- ϵ⟨∇ v_h · n^E, w_h ⟩_Γ_E + ϵ/(δ h_E)⟨ v_h, w_h ⟩_Γ_E
+ 1/2⟨|β· n| v_h, w_h ⟩_Γ_E .
We consider each of the three terms above. Using the Cauchy-Schwarz inequality, the trace inequality, ϵ<h and inverse estimates, the first term is estimated by
ϵ⟨∇ v_h · n^E, w_h ⟩_Γ_E ≳
- ϵ h^-1/2‖∇ v_h ‖_0,E
h^-1/2‖w_h‖_0,E
≳
-ϵ^1/2∇ v_h _0,E h^1/2β_h ·∇Π^0_k v_h _0,𝒟(E) .
For the second term we have
ϵ/(δ h_E)⟨ v_h, w_h ⟩_Γ_E ≳
-ϵ/(δ h)‖ v_h ‖_0,Γ_E h^-1/2‖w_h‖_0,E
≳
-(ϵ/(δ h))^1/2‖ v_h ‖_0,Γ_E h^1/2‖π ( β_h ·∇Π^0_k v_h) ‖_0,E
≳
- ‖ξ (ϵ, β) v_h ‖_0,Γ_E h^1/2β_h ·∇Π^0_k v_h _0,𝒟(E) .
For the last one, using the same estimates, we get
1/2⟨|β· n| v_h, w_h ⟩_Γ_E ≳
- ‖ξ (ϵ, β) v_h ‖_0,Γ_E h^1/2β_h ·∇Π^0_k v_h _0,𝒟(E) .
Hence it holds
𝒩^E_h(v_h, w_h) ≳ - ( ϵ^1/2‖∇ v_h ‖_0,E + ‖ξ (ϵ, β) v_h ‖_0,Γ_E) h^1/2‖β_h ·∇Π^0_k v_h ‖_0,𝒟(E) .
Estimate for (𝐓_5). It is the most involved term.
The skew term b_h^skew,E(v_h, w_h) consists of two parts,
b_h^skew,E(v_h, w_h)
=
1/2 ( b_h^E(v_h, w_h) - b_h^E(w_h, v_h)) ,
and we consider each of these two terms separately.
The first term is defined as
b_h^E(v_h, w_h)
=
(·∇0 v_h, 0 w_h )_0,E +
((·^E) ( I - 0 ) v_h, 0 w_h)_0,∂ E .
We split the first term of (<ref>) as
( ·∇0 v_h, 0 w_h )_0,E =
( ·∇0 v_h, w_h )_0,E
+
( ·∇0 v_h, ( 0 - I ) w_h )_0,E
=
( ·∇0 v_h, h _h ·∇0 v_h )_0,E
+
( ·∇0 v_h, w_h - h _h ·∇0 v_h )_0,E
+
(·∇0 v_h, (0 - I) w_h)_0,E
η__1 + η__2 + η__3 .
We estimate each of these three quantities.
For the first term we have
η__1 =
(·∇0 v_h, )_0,E
=
h ·∇0 v_h ^2_0,E
+
(·∇0 v_h, h (_h - )·∇0 v_h)_0,E
≥
h ·∇0 v_h ^2_0,E
- C h^1/2·∇0 v_h _0,E h^1/2 | |_W^1,∞(E) h ∇0 v_h _0,E
≥
h ·∇0 v_h ^2_0,E
- C h^1/2·∇0 v_h _0,E h^1/2 | |_W^1,∞(E) v_h _0,E
Recalling (<ref>) and by Young's inequality we get:
η__2 = h ( ·∇0 v_h, (π - I) (_h ·∇0 v_h) )_0,E
≥
- h2·∇0 v_h ^2_0,E
- h2 (π - I) (_h ·∇ v_h) ^2_0,E .
Since _h is piecewise linear, for the second term we can use Proposition <ref> and obtain
h (π - I) (_h ·∇ v_h) _0,E^2
≲
h^2 ∑_e ⊂ℱ_E [_h ·∇ v_h] _0,e^2
≲
h^2 ∑_e ⊂ℱ_E [( _h - ) ·∇ v_h] _0,e^2
+
h^2 ∑_e ⊂ℱ_E [·∇ v_h] _0,e^2
≲
h^2 ∑_e ⊂ℱ_E [( _h - ) ·∇ v_h] _0,e^2
+
J_h^𝒟(E) (v_h, v_h)
On each e, we control the first term in the previous inequality as
h^2 ‖[(β_h - β)·∇ v_h] ‖_0,e^2
≲
h^4 | β|^2_W^1,∞(E∪ E') h^-1‖∇ v_h ‖_0,E∪ E'^2
≲
h | β|^2_W^1,∞(E∪ E')‖ v_h ‖_0,E∪ E'^2 ,
where E and E' are the two elements sharing the edge e.
Combining (<ref>) with (<ref>) and (<ref>), we obtain
η__2 ≥ -
h2·∇0 v_h ^2_0,E
- C(
h | |^2_W^1,∞(𝒟(E)) v_h _0,𝒟(E)^2
+J_h^𝒟(E)(v_h, v_h)
) .
It remains to control η__3. Since β_h ∈ ℙ_1(E), it holds ( β_h ·∇ Π^0_k v_h, (Π^0_k - I) w_h )_0,E = 0.
Hence we have
η__3 = ( ( - _h) ·∇0 v_h, (0 - I) w_h )_0,E
≳
- ( - _h) ·∇0 v_h _0,E _0,E
≳
- | β|_W^1,∞(E) h ‖∇ Π^0_k v_h ‖_0,E h ‖β_h ·∇ v_h ‖_0,𝒟(E)
≳
- | β|_W^1,∞(E)‖ v_h ‖_0,𝒟(E)^2 .
Collecting (<ref>), (<ref>) and (<ref>), from (<ref>) we get
( β·∇ Π^0_k v_h, Π^0_k w_h )_0,E ≥ h/2 ‖β·∇ Π^0_k v_h ‖^2_0,E
- C(
J_h^𝒟(E)(v_h, v_h)
+ h^1/2‖β·∇ Π^0_k v_h ‖_0,E h^1/2 | β|_W^1,∞(E)‖ v_h ‖_0,E
+ h | β|^2_W^1,∞(𝒟(E))‖ v_h ‖_0,𝒟(E)^2
+ | β|_W^1,∞(E)‖ v_h ‖_0,𝒟(E)^2
) .
Returning to (<ref>), we have to control the boundary term.
We first notice that, due to Agmon's inequality and Poincaré's inequality, it holds
( I - 0) v_h _0,∂ E≲ h^1/2| ( I - 0) v_h |_1,E .
Together with an inverse inequality for the polynomial 0 w_h, the definition of w_h (cf. (<ref>)), and Lemma <ref>, we thus get:
( (·^E) (I - 0) v_h, 0 w_h )_0, ∂ E ≳
- ( I - 0) v_h _0,∂ E 0 w_h _0,∂ E
≳
- h^1/2 | ( I - 0) v_h |_1,E h^-1/20 w_h _0,E
≳
- | ( I - 0) v_h |_1,E w_h _0,E
≳
- J_h^E(v_h, v_h)^1/2 h^1/2_h ·∇ v_h _0,𝒟(E) .
Above, we have also used the estimate, see (<ref>):
h | ( I - 0) v_h |_1,E^2 ≲ J_h^E(v_h, v_h) .
From (<ref>), (<ref>) and (<ref>) we get
b_h^E(v_h, w_h) ≥ h/2 ‖β·∇ Π^0_k v_h ‖^2_0,E
- C(
J_h^𝒟(E)(v_h, v_h)
+ h^1/2‖β·∇ Π^0_k v_h ‖_0,E h^1/2 | β|_W^1,∞(E)‖ v_h ‖_0,E
+ h | β|^2_W^1,∞(𝒟(E))‖ v_h ‖_0,𝒟(E)^2
+ | β|_W^1,∞(E)‖ v_h ‖_0,𝒟(E)^2
+ J_h^E(v_h, v_h)^1/2 h^1/2‖β_h ·∇ v_h ‖_0,𝒟(E)) .
Finally, we need to control
- b_h^E(w_h, v_h), see (<ref>).
Integrating by parts, we obtain
- b_h^E(w_h, v_h)
= - ( ·∇0 w_h, 0 v_h )_0,E
-
( (·^E) (I - 0) w_h, 0 v_h )_0,∂ E
=
( ·∇0 v_h, 0 w_h )_0,E
-
( (·^E) w_h, 0 v_h )_0,∂ E
=
( ·∇0 v_h, 0 w_h )_0,E
-
( (·^E) w_h,
(0 - I) v_h )_0,∂ E
-
( (·^E) w_h,
v_h )_0,∂ E .
The first two terms are similar to the case
b_h(v_h, w_h).
The last one vanishes on the interior edges when we sum over all E∈Ω_h. Hence, we need to consider the elements E sharing with ∂Ω at least an edge. Using Cauchy-Schwarz inequality, trace inequality, inverse estimates and the continuity of π, we obtain on these boundary edges
- ( (·^E) w_h,
v_h )_0,∂ E ≥
- ‖ξ (ϵ, ) v_h ‖_0,Γ_E‖ξ (ϵ, ) w_h ‖_0,Γ_E
≳
-
‖ξ (ϵ, ) v_h ‖_0,Γ_E
h^-1/2‖ w_h ‖_0,E
≳
-
‖ξ (ϵ, ) v_h ‖_0,Γ_E
h^1/2_h ·∇ v_h _0,𝒟(E) .
Therefore, from (<ref>), (<ref>), (<ref>) and (<ref>) we get
b_h^skew,E(v_h, w_h)
≥ h/2 ‖β·∇ Π^0_k v_h ‖^2_0,E - C(
J_h^𝒟(E)(v_h, v_h)
+ h^1/2‖β·∇ Π^0_k v_h ‖_0,E h^1/2 | β|_W^1,∞(E)‖ v_h ‖_0,E
+ h | β|^2_W^1,∞(𝒟(E))‖ v_h ‖_0,𝒟(E)^2
+ | β|_W^1,∞(E)‖ v_h ‖_0,𝒟(E)^2
+ ( J_h^E(v_h, v_h)^1/2 +
‖ξ (ϵ, β) v_h ‖_0,Γ_E) h^1/2‖β_h ·∇ v_h ‖_0,𝒟(E)) .
We now consider the five local estimates (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). From (<ref>), summing over all the elements E∈Ω_h, we obtain
𝒜_h(v_h, w_h)
≥ h/2 ‖β·∇ v_h ‖_0,Ω^2
- C (
∑_E ∈Ω_h( ϵ^1/2‖∇ v_h ‖_0,E
+
J_h^E(v_h, v_h)^1/2 + ‖v_h ‖_0,E + ‖ξ (ϵ, β) v_h ‖_0,Γ_E) h^1/2‖β_h ·∇ Π^0_k v_h ‖_0,E
+ J_h(v_h, v_h) +
∑_E ∈Ω_h(h | β|^2_W^1,∞(E)+ | β|_W^1,∞(E)) ‖v_h ‖_0,E^2
+
∑_E ∈Ω_h h^1/2‖β_h ·∇ Π^0_k v_h ‖_0,E h^1/2 | β|_W^1,∞(E)‖ v_h ‖_0,E) .
Above, we have also used the property that, due to assumption (A1), summing over the elements each polygon is counted only a uniformly bounded number of times, even when the terms involve norms on 𝒟(E) or 𝒟(𝒟(E)).
We now notice that the triangle inequality, standard approximation results and an inverse estimate give
h^1/2‖β_h ·∇ Π^0_k v_h ‖_0,E ≲
h^1/2( ‖β·∇ Π^0_k v_h ‖_0,E
+
| β|_W^1,∞(E)‖ v_h ‖_0,E).
Hence, from (<ref>), using also Young's inequality (with suitable constants) for the first and the last summations in the right-hand side, we get
𝒜_h(v_h, w_h)
≥ C_1 h ‖β·∇ v_h ‖_0,Ω^2 - C_2 ( ϵ‖∇ v_h ‖_0^2 + J_h(v_h, v_h) + ‖v_h ‖_0^2
+ ‖ξ (ϵ, β) v_h ‖^2_0,Γ) .
From Lemma <ref>, we now obtain
𝒜_h(v_h, w_h)
≥
C_1 h ‖β·∇ v_h ‖^2_0,Ω
- C_2 𝒜_h(v_h, v_h) .
With Lemmas <ref> and <ref> at our disposal, the inf-sup condition easily follows.
Under assumptions (A1), it holds:
‖v_h ‖_cip ≲ sup_z_h ∈ V_h(Ω_h) 𝒜_h(v_h, z_h) / ‖z_h ‖_cip for all v_h ∈ V_h(Ω_h).
We split the proof into two cases.
First case. We first consider ε < h. Given v_h ∈ V_h(Ω_h), we take z_h = w_h + κ v_h, where w_h is defined as in Lemma <ref>. From Lemmas <ref> and <ref>, for κ sufficiently large we have
𝒜_h(v_h, z_h) =
𝒜_h(v_h, w_h + κ v_h)
≳ ‖v_h ‖_cip^2 .
In order to conclude the proof of the inf-sup condition, we have to prove the estimate
‖w_h ‖_cip ≲ ‖v_h ‖_cip ,
which obviously implies ‖z_h ‖_cip ≲ ‖v_h ‖_cip. Recalling the norm definition (<ref>)-(<ref>) and that w_h := π( h β_h ·∇ Π^0_k v_h ), the above continuity estimate follows from Lemma <ref>, estimate (<ref>) and observing that
h ‖β·∇ Π^0_k w_h ‖_0,E^2
≲
h^-1‖Π^0_k w_h ‖_0,E^2
≲
h ‖β_h ·∇ Π^0_k v_h ‖_0,𝒟(E)^2 ,
and
‖w_h ‖_0,Γ_E^2 = h^2 ‖π( β_h·∇Π^0_k v_h ) ‖_0,Γ_E^2
≲ h ‖π( β_h·∇Π^0_k v_h ) ‖_0,E^2 +
h^3 | π(β_h·∇Π^0_k v_h) |_1,E^2
≲
h ‖π( β_h·∇Π^0_k v_h) ‖_0,E^2
≲ h ‖β_h·∇Π^0_k v_h ‖_0,𝒟(E)^2 .
The above bounds (<ref>) and (<ref>) are to be combined with (<ref>).
Second case.
We now consider the case ε ≥ h.
In this case the proof simply follows from Lemma <ref> and the observation that
h ‖β·∇ Π^0_k v_h ‖^2_0,E ≲ ε‖∇ Π^0_k v_h ‖^2_0,E ≲ ε‖∇ v_h ‖^2_0,E ,
which allows to control also the convective term with 𝒜_h(v_h, v_h).
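As a practical side remark (not part of the proof), the discrete inf-sup constant can be monitored numerically: if A denotes the matrix of the bilinear form 𝒜_h(·,·) and M the symmetric positive definite Gram matrix of the ‖·‖_cip norm on a basis of V_h, then the inf-sup constant equals the smallest singular value of M^{-1/2} A M^{-1/2}. A minimal sketch, where the matrix names A and M are our own assumptions and not taken from the text:

```python
import numpy as np
from scipy.linalg import sqrtm, svdvals

def discrete_inf_sup(A, M):
    """Discrete inf-sup constant: smallest singular value of M^{-1/2} A M^{-1/2}.

    A : matrix of the bilinear form on a basis of V_h (assumed name)
    M : SPD Gram matrix of the ||.||_cip norm on the same basis
    """
    Mih = np.real(np.linalg.inv(sqrtm(M)))   # M^{-1/2}; dense version for small tests
    return svdvals(Mih @ A @ Mih)[-1]        # svdvals sorts decreasingly -> take last
```

Tracking this quantity over a sequence of meshes and of values of ϵ gives an empirical check of the uniformity claimed by the proposition.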
§.§ Error estimates
We begin our error analysis, which follows the steps of <cit.>, with the following result.
Let u ∈ V and u_h ∈ V_h(Ω_h) be the solutions of problem (<ref>) and problem (<ref>), respectively. Furthermore, let us define
e_ℐ := u - u_ℐ ,
where u_ℐ ∈ V_h(Ω_h)
is the interpolant function of u defined in Lemma <ref>.
Then under assumption (A1), it holds that
‖u - u_h ‖_cip ≲ ‖e_ℐ‖_cip
+
∑_E ∈Ω_h(
η_ℱ^E + η_a^E + η_b^E + η_c^E + η_𝒩^E + η_J^E ) ,
where (cf. Section <ref>)
η_ℱ^E := ‖ℱ̃^E - ℱ_h^E ‖_⋆ ,
η_a^E := ‖ϵ a^E(u, ·) - ϵ a_h^E(u_ℐ, ·) ‖_⋆ ,
η_b^E := ‖b^skew,E(u, ·) - b_h^skew,E(u_ℐ, ·) ‖_⋆ ,
η_c^E := ‖c^E(u, ·) - c_h^E(u_ℐ, ·) ‖_⋆ ,
η_𝒩^E := ‖𝒩_h^E(u, ·) - 𝒩_h^E(u_ℐ, ·) ‖_⋆ ,
η_J^E := ‖J̃_h^E(u , ·) - J_h^E(u_ℐ, ·)‖_⋆ = ‖J_h^E(u_ℐ, ·)‖_⋆ ,
where ‖·‖_⋆ is the dual norm of ‖·‖_cip.
We first introduce the following quantities
e_π := u - Π^∇_k u ,
e_h := u_h - u_ℐ .
Using the triangle inequality, we have that
‖u - u_h ‖_cip ≤ ‖u - u_ℐ‖_cip
+ ‖u_ℐ - u_h ‖_cip
=
‖e_ℐ‖_cip
+ ‖e_h ‖_cip .
Thanks to the inf-sup condition, and recalling that u satisfies (<ref>), we have that
e_h _ ≲sup_v_h ∈ V_h(Ω_h)(e_h, v_h) v_h _
=
sup_v_h ∈ V_h(Ω_h)(u_h - u_ℐ, v_h) v_h _
=
sup_v_h ∈ V_h(Ω_h)ℱ_h(v_h) - (u_ℐ, v_h) v_h _
=
sup_v_h ∈ V_h(Ω_h)ℱ_h(v_h) - ℱ̃(v_h) + (u, v_h) - 𝒜_(u_ℐ, v_h) v_h _
=
sup_v_h ∈ V_h(Ω_h)∑_E ∈Ω_h( ℱ_h^E(v_h) - ℱ̃^E(v_h) + (u, v_h) - (u_ℐ, v_h) ) v_h _ .
Estimate (<ref>) easily follows by recalling the definitions of and , see (<ref>) and (<ref>)-(<ref>).
To properly bound all the terms in Proposition <ref> we make the following assumptions:
(A2) Data assumption. The solution u, the advective field β and the load f in (<ref>) satisfy:
u ∈ H^s+1(Ω_h) , f ∈ H^s+1(Ω_h) , β ∈ [W^s+1_∞(Ω_h)]^2 ,
for some 0 < s ≤ k.
Under assumptions (A1) and (A2),
the term ‖e_ℐ‖^2_cip can be bounded as follows (for 0 < s ≤ k)
‖e_ℐ‖^2_cip,E ≲ ϵ h^2s | u |^2_s+1,E + h^2s+1 | u |^2_s+1,E .
By the definition of ‖·‖_cip, we have that
‖e_ℐ‖^2_cip,E
=
ϵ‖∇ e_ℐ‖^2_0,E
+ h ‖β·∇ Π^0_k e_ℐ‖_0,E^2
+ σ‖e_ℐ‖_0,E^2
+ ‖ξ ( ϵ, β) e_ℐ‖^2_Γ_E
+ J_h^E(e_ℐ, e_ℐ) .
Using Lemma <ref>, we have that
ϵ‖∇ e_ℐ‖^2_0,E
+ h ‖β·∇ Π^0_k e_ℐ‖_0,E^2
≲
(ϵ + h) ‖∇ e_ℐ‖^2_0,E ≲
(ϵ + h) h^2s | u |^2_s+1,E ,
and
‖e_ℐ‖^2_0,E ≲ h^2s+2 | u |_s+1,E^2 .
For the Nitsche term we have that
‖ξ ( ϵ, β) e_ℐ‖^2_Γ_E = ϵ/(δ h)⟨ e_ℐ, e_ℐ⟩_Γ_E + ⟨ | β· n^E | e_ℐ, e_ℐ⟩_Γ_E .
Using trace inequality and interpolation estimate, we obtain
ϵ/(δ h)⟨ e_ℐ, e_ℐ⟩_Γ_E ≲ ϵ/(δ h^2) ‖e_ℐ‖_0,E^2
+
ϵ/δ | e_ℐ |_1,E^2
≲ ϵ h^2s |u|_s+1,E^2 ,
and
⟨ | β· n^E | e_ℐ, e_ℐ⟩_Γ_E ≲ h^-1‖e_ℐ‖_0,E^2
≲ h^2s+1 |u|_s+1,E^2 .
It remains to control the jump operator. We have
J_h^E(e_ℐ,e_ℐ)
=
γ2∑_e ⊂∂ E∫_e h_e^2 [∇ e_ℐ] · [∇ e_ℐ] ds
+
γ h_E _J ( (I - ) e_ℐ, (I - ) e_ℐ)
≲
h^2 ( h^-1∇ e_ℐ_0,𝒟(E)^2
+
h |∇ e_ℐ|_1,𝒟(E)^2)
+
h | (I - ) |^2_1,E
≲
h^2 (h^-1∇ e_ℐ_0,𝒟(E)^2 )
+
h | (I - ) |^2_1,E
≲
h ||^2_1,𝒟(E)≲ h^2+1 | u |_+1,𝒟(E)^2 .
Under the assumptions (A1) and (A2),
the term η_ℱ^E can be bounded as follows (for 0 < s ≤ k)
η_ℱ^E
≲
h^s+1 | f |_s+1,E .
It is sufficient to follow the same procedure as in <cit.>. Using the orthogonality of Π^0_k, the Cauchy-Schwarz inequality, the Poincaré inequality and Lemma <ref>, we obtain
η_ℱ^E
=
ℱ̃^E(v_h) - ℱ^E_h(v_h)
=
(f, v_h - 0 v_h )_0, E
=
( (I-0) f, (I - 0) v_h )_0, E
≤ (I-0) f _0,E (I - 0) v_h _0, E
≲ (I-0) f _0,E v_h _0, E
≲
h^+1 | f |_+1,E v_h _,E .
Under the assumptions (A1) and (A2),
the term η_a^E can be bounded as follows (for 0 < s ≤ k)
η_a^E ≲ ϵ^1/2 h^s | u |_s+1,𝒟(E) .
This result is proved following the line of Lemma 5.3 of <cit.>.
Adding and subtracting u, using Cauchy-Schwarz inequality, we obtain
=
ϵ ã^E_h(u, v_h) - ϵ a_h^E(u_ℐ, v_h)
=
ϵ ã^E_h(u - u,v_h) + ϵ a_h^E( u - u_ℐ,v_h)
≤ϵ (∇ e_π_0,E + (1 + α^*) ∇ ( u - u_ℐ) _0,E) ∇ v_h _0,E
≲ϵ (∇ e_π_0,E + ∇ e_ℐ_0,E) ∇ v_h _0,E≲ϵ^1/2 h ^ | u |_+1,E v_h _,E .
Under the assumptions (A1) and (A2),
the term η_b^E can be bounded as follows (for 0 < s ≤ k)
η_b^E ≲
h^s + 1/2 ‖u ‖_s+1
+
‖β‖_[W^s+1,∞]^2 h^s+1 ‖u ‖_s+2,E
+
∫_∂ E (β· n^E) e_ℐ v_h ds .
Recalling the definition, we need to estimate
( ·∇ u, v_h )_0,E
-
( ·∇0 u_ℐ, 0 v_h )_0,E
-
∫_∂ E (· ^E) (I - 0) u_ℐ 0 v_h ds ,
( 0 u_ℐ, ·∇0 v_h )_0, E
-
( u, ·∇ v_h )_0,E
+
∫_∂ E (· ^E) (I - 0) v_h 0 u_ℐ ds .
By integration by parts, we have
η_b,A^E
=
( ·∇ u, (I - 0) v_h )_0,E
+
( ·∇ (u - 0 u_ℐ), 0 v_h )_0,E
-
∫_∂ E (·^E) (I - 0) u_ℐ 0 v_h ds
=
( ·∇ u, (I - 0) v_h )_0,E
-
( u - 0 u_ℐ, ·∇0 v_h )_0,E
+
∫_∂ E (·^E) (u - u_ℐ) 0 v_h ds
=
( (I - 0) ·∇ u, (I - 0) v_h )_0,E
+
( 0 u_ℐ - u, ·∇0 v_h )_0,E
+
∫_∂ E (·^E) e_ℐ 0 v_h ds
η_b,1^E + η_b,2^E + η_b,3^E ,
and
η_b,B^E
=
( 0 u_ℐ - u, ·∇0 v_h )_0,E
-
( u, ·∇ (I - 0) v_h )_0,E
+
∫_∂ E (·^E) (I - 0) v_h 0 u_ℐ ds
=
( 0 u_ℐ - u, ·∇0 v_h )_0,E
+
( ·∇ u, (I - 0) v_h )_0,E
+
∫_∂ E (·^E) (I - 0) v_h (0 u_ℐ - u) d s
=
(0 u_ℐ - u, ·∇0 v_h)_0,E
+
((I - 0)·∇ u,(I - 0)v_h)_0,E
+
∫_∂ E (·^E) (I - 0) v_h (0 u_ℐ - u) d s
η_b,2^E + η_b,1^E + η_b,4^E .
yielding the following expression for
η_b^E
2 η_b^E = 2η_b,1^E + 2η_b,2^E + η_b,3^E + η_b,4^E .
We now analyze each term in the sum above.
∙ η_b,1^E: using Cauchy-Schwarz, the continuity in 0 in L^2 and standard estimates, we obtain
η_b,1^E
=
( (I - 0) ·∇ u, (I - 0) v_h )_0,E
≤ (I-0) ·∇ u_0,E v_h _0,E
≤ (I - 0) ·∇ u _0,E v_h _,E
≲
h^+1|·∇ u |_+1,E v_h _,E
≲
h^+1 u _+1,E β_[W^+1_∞(E)]^2 v_h _,E .
∙ η_b,2^E: we have that
η_b,2^E
=
( 0 u_ℐ - u, ·∇0 v_h )_0,E
≤0 u_ℐ - u _0,E ·∇0 v_h _0,E
≤( (I - 0)u _0,E + e_ℐ_0,E)
·∇0 v_h _0,E
≲ h^ + 1/2 u _+1 v_h _,E .
∙ η^E_b,3 + η^E_b,4: we use a scaled trace inequality making use of the scaled norm
∀ v ∈ H^1(E), v _1,E^2 v _L^2(E)^2 + h^2_E | v |^2_H^1(E) .
We obtain
η^E_b,3 + η^E_b,4 =
∫_∂ E (·^E) e_ℐ 0 v_h d s
+
∫_∂ E (·^E) (I - 0) v_h (0 u_ℐ - u) d s
=
∫_∂ E (·^E) (0 - I) v_h (e_ℐ + u - 0 u_ℐ) d s
+
∫_∂ E (·^E) e_ℐ v_h d s
≲ (
e_ℐ_L^2(∂ E)
+
u - 0 u_ℐ_L^2(∂ E)
) (I - 0)v_h_L^2(∂ E)
+
∫_∂ E (·^E) e_ℐ v_h d s
≲ h_E^-1 (
e_ℐ_1,E
+
u - 0 u_ℐ_1,E
) (I - 0)v_h_0,E
+
∫_∂ E (·^E) e_ℐ v_h d s
≲
h^-1/2 (
e_ℐ_1,E
+
u - 0 u_ℐ_1,E
)
h^1/2∇(I - )v_h_0,E
+
∫_∂ E (·^E) e_ℐ v_h d s
≲ h^+1/2| u |_+1,E v_h _,E + ∫_∂ E (·^E) e_ℐ v_h d s ,
where in the last step we used the J_h(v_h,v_h) term in the definition of ‖v_h ‖_cip,E.
The thesis now follows by gathering the last three inequalities in (<ref>).
Under the assumptions (A1) and (A2),
the term η_c^E can be bounded as follows (for 0 < s ≤ k)
η_c^E ≲
h^s + 1 | u |_s+1,E .
Similarly to Lemma <ref>, we have that
=
c̃^E_h(u,v_h) - c_h^E(u_ℐ,v_h)
=
c̃^E_h(u - 0 u,v_h) + c_h^E(0 u - u_ℐ,v_h)
≤
( e_π_0,E + (1 + α^*) 0 u - u_ℐ_0,E) v_h _0,E
≲
( e_π_0,E + e_ℐ_0,E) v_h _0,E≲
h^ + 1 | u |_+1,E v_h _,E .
Under the assumptions (A1) and (A2),
the term η_𝒩^E can be bounded as follows (for 0 < s ≤ k)
η_𝒩^E ≲
(ϵ^1/2 h^s + h^s+1/2 )| u |_s+1,E .
By definition of the two bilinear forms, we have that
=
- ϵ⟨∇ u · n^E , v_h ⟩_Γ_E + ϵ⟨∇ u_ℐ· n^E , v_h ⟩_Γ_E
+
ϵδ h_E⟨ u, v_h ⟩_Γ_E
- ϵδ h_E⟨ u_ℐ, v_h ⟩_Γ_E
- 12⟨ |·| u, v_h ⟩_Γ_E
+ 12⟨ |·| u_ℐ, v_h ⟩_Γ_E
+ + .
Now, we estimate each of the three terms. Using trace inequality, the first returns
=
- ϵ⟨∇ u · , v_h ⟩_Γ_E + ϵ⟨∇ u_ℐ· , v_h ⟩_Γ_E
=
- ϵ⟨∇ (u - ∇ u_ℐ) · , v_h ⟩_Γ_E
≲ϵ ( h^-1/2‖∇ u - ∇ u_ℐ‖_0,E + h^1/2 | ∇ u - ∇ u_ℐ |_1,E ) ‖ v_h ‖_Γ_E
≲ϵ^1/2 (‖∇ u - ∇ u_ℐ‖_0,E + h | ∇ u - ∇ u_ℐ |_1,E ) ‖ v_h ‖_cip,E .
Adding and subtracting u, using triangular inequality and Lemma <ref>, we obtain
≲ϵ^1/2 h^s | u |_s+1,E‖ v_h ‖_cip,E .
For the second term, using trace inequality and interpolation estimate, we have that
=
ϵδ h⟨ u, v_h ⟩_Γ_E - ϵδ h_E⟨ u_ℐ, v_h ⟩_Γ_E
≲ϵδ h‖ u - u_ℐ‖_Γ_E‖‖ v_h ‖_Γ_E≲( ϵδ h)^1/2‖ u - u_ℐ‖_Γ_E‖‖ v_h ‖_,E
≲ϵ^1/2 h^s | u |_s+1,E‖ v_h ‖_cip,E .
Finally, the last one is treated in a very similar way with respect to the previous one, it gives
=
-12⟨ |·| u, v_h ⟩_Γ_E - 12⟨ |·| u_ℐ, v_h ⟩_Γ_E
≲
h^-1/2‖ u - u_ℐ‖_0,E‖ v_h ‖_,E
≲ h^s+1/2 | u |_s+1,E‖ v_h ‖_,E .
Under the assumptions (A1) and (A2),
the term η_J^E can be bounded as follows (for 0 < s ≤ k)
η_J^E ≲
h^s + 1/2 | u |_s+1,E .
Using Cauchy-Schwarz inequality, we have that
J^E_h(u_ℐ, v_h)
≤
J^E_h(u_ℐ, u_ℐ)^1/2 J^E_h(v_h, v_h)^1/2
≤
J^E_h(u_ℐ, u_ℐ)^1/2 v_h _,E .
Since the solution u is sufficiently smooth, we have that
J^E_h(u_ℐ, v_h)
=
∑_e ⊂∂ E∫_eγ2 h^2_e [∇ u_ℐ]·[∇ u_ℐ]
+
h_E γ 𝒮_j^E( ( I - ) u_ℐ,( I - ) u_ℐ)
=
∑_e ⊂∂ E∫_eγ2 h^2_e [∇ ( u_ℐ - u)]^2
+
h_E γ 𝒮_j^E( ( I - ) u_ℐ,( I - ) v_h)
≲∑_E' ∈𝒟(E) h^2 ∇ ( u_ℐ - u) _0,∂ E'^2
+ h | ( I - ) u_ℐ |_1,E^2 .
Using trace inequality, we obtain for the first term
∇ u_ℐ - ∇ u _0,∂ E' ≲( h^-1 ∇ u_ℐ - ∇ u ^2_0,E' + h |∇ u_ℐ - ∇ u |^2_1,E')^1/2 .
Adding and subtracting ∇0 u, using Lemma <ref> and interpolation estimate, we obtain
h^-1/2∇0 u_ℐ - ∇ u_0,E' ≲
h^-1/2∇0 u - ∇ u _0,E' + h^-1/2∇0 u_ℐ - ∇0 u _0,E'
≲
h^ - 1/2| u |_ + 1 ,E' ,
and similarly, we have that
h^1/2|∇0 u_ℐ - ∇ u|_1,E' ≲
h^ - 1/2| u |_ + 1 ,E' .
Using Lemma <ref>, we have that
h^1/2 | ( I - ) u_ℐ |_1,E ≲
h^1/2 ( ∇_0,E + ∇_0,E )
≲
h^ + 1/2 | u |_ + 1, E .
We conclude
J^E_h(u_ℐ, v_h)
≲
h^s + 1/2 | u |_s+1,𝒟(E) v_h _, 𝒟(E) .
We thus have the following proposition.
Under the assumptions (A1) and (A2), let u ∈ V be the
solution of equation (<ref>) and u_h ∈ V_h(Ω_h) be the solution of equation (<ref>).
Then it holds that
‖u - u_h‖^2_cip ≲ ∑_E ∈Ω_h Θ^E
(
ϵ h^2s
+
h^2s+1
) ,
where the constant Θ^E depends on
‖u‖_s+2,E, ‖f‖_s+1,E, ‖β‖_[W^s+1_∞(E)]^2.
It is sufficient to use Proposition <ref> combined with Lemmas <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, noting that
∑_E ∈Ω_h∫_∂ E∖∂Ω (β· n^E) e_ℐ v_h d s = 0 ,
and the contributions stemming from ∂Ω are controlled as in (<ref>).
§.§ A special case: advection-diffusion problem with β ∈ [ℙ_1(Ω)]^2
We consider problem (<ref>) in a particular situation: we assume an advection term β ∈ [ℙ_1(Ω)]^2, i.e. globally linear, and we allow the reaction coefficient σ = 0. We do not make further assumptions on the diffusion coefficient ϵ and on the load term f. Thus, the advection-diffusion problem reads as (cf. (<ref>))
{ find u ∈ V such that:
ϵ a(u, v) + b(u, v) = ℱ̃(v) for all v ∈ V .
In this case, even without the reaction term, we are able to prove robust estimates for the approximation of problem (<ref>).
Using the same approach as before, the discrete version of problem (<ref>) reads as
{ find u_h ∈ V_h(Ω_h) such that:
^ ad(u_h, v_h) = ℱ_h(v_h) for all v_h ∈ V_h(Ω_h),.
where
𝒜_h^ad(u_h, v_h) := ∑_E ∈Ω_h 𝒜_h^ad,E (u_h, v_h) ,
and
𝒜_h^ad,E (u_h, v_h) := ϵ a_h^E(u_h , v_h) + b_h^skew,E(u_h , v_h) + 𝒩_h^E(u_h, v_h) + J_h^E(u_h , v_h) .
The key observation is that a suitable inf-sup condition still holds true without the help of the L^2-type norm stemming from the reaction term. In fact, introducing the local norm
‖v_h‖^2_cip, ad, E
:=
ϵ ‖∇ v_h ‖^2_0,E +
h ‖β·∇ Π^0_k v_h ‖^2_0,E +
‖ξ (ϵ, β) v_h ‖^2_0,Γ_E +
J_h^E(v_h,v_h) ,
and its global counterpart
‖v_h‖^2_cip, ad := ∑_E ∈Ω_h ‖v_h‖^2_cip, ad, E ,
similarly to Proposition <ref>, we have the following result.
Under assumptions (A1), it holds that
‖v_h ‖_cip, ad ≲ sup_z_h ∈ V_h(Ω_h) 𝒜_h^ad (v_h, z_h) / ‖z_h ‖_cip, ad for all v_h ∈ V_h(Ω_h),
for a constant that does not depend on h and ϵ.
The proof of (<ref>) is analogous to the one of Proposition <ref>, with the simplification that in Lemma <ref> it holds β_h = β. The main difference lies in the treatment of η__1, η__2 and η__3 in (𝐓_5), see (<ref>). These terms are the only ones requiring the help of the L^2 norm in the general case (apart, of course, from the reaction term itself).
Regarding the term η__1, in our present case we immediately obtain the advective norm:
η__1 =
h ‖β·∇ Π^0_k v_h ‖^2_0,E .
Since β ∈ [ℙ_1(Ω)]^2, it follows that β·∇ Π^0_k v_h ∈ ℙ_k(E), so that we can directly bound η__2 using Young's inequality and Proposition <ref>:
η__2 = h ( β·∇ Π^0_k v_h, ( π - I ) (β·∇ Π^0_k v_h) )_0,E
≥ - h/2 ‖β·∇ Π^0_k v_h ‖^2_0,E -
h/2 ‖(π - I) (β·∇ Π^0_k v_h) ‖^2_0,E
≥ - h/2 ‖β·∇ Π^0_k v_h ‖^2_0,E - C J_h^𝒟(E)(v_h,v_h) ,
which is the counterpart of (<ref>).
Furthermore, again since β·∇ Π^0_k v_h ∈ ℙ_k(E), it follows that
η__3 := (β·∇ Π^0_k v_h, (Π^0_k - I) w_h)_0,E = 0 .
Once the above stability result has been established, the next error estimate can be proved using the same arguments of Proposition <ref>.
The only difference is handling the terms η_ℱ^E and η_b,1^E which now must be bounded using diffusion (since reaction is not available) and therefore paying a price in terms of ε but with a better rate in terms of h. For the sake of conciseness we here omit the simple alternative derivations for such terms.
Under the assumptions (A1) and (A2), let u ∈ V be the
solution of equation (<ref>) and u_h ∈ V_h(Ω_h) be the solution of equation (<ref>).
Then it holds that
‖u - u_h‖^2_cip, ad ≲ ∑_E ∈Ω_h Θ^E
(
ϵ h^2s
+
h^2s+1
+
h^2(s+2)/ϵ) ,
where the constant Θ^E depends on
‖u‖_s+1,E, ‖f‖_s+1,E, ‖β‖_[W^s+1_∞(E)]^2/β_E.
§ NUMERICAL EXPERIMENT
In this section, we investigate the actual computational behavior of the proposed method.
Model problem.
We consider a family of problems in the unit square Ω=(0, 1)^2. We choose the boundary conditions and the source term (which turns out to depend on ϵ, σ and β) in such a way that the analytical solution is always the function
u(x, y) := sin(π x)sin(π y) .
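For reproducibility, the ϵ-, σ- and β-dependent load can be generated symbolically. A small sketch, under the assumption that the strong form associated with (<ref>) is -ϵΔu + β·∇u + σu = f, and using the constant field β = (1, 0.5)^T employed below (both are assumptions of this snippet, not statements from the text):

```python
import sympy as sp

x, y, eps, sigma = sp.symbols('x y epsilon sigma', positive=True)
b1, b2 = 1, sp.Rational(1, 2)                  # assumed advection beta = (1, 0.5)
u = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)      # manufactured solution

# load for  -eps*Laplace(u) + beta.grad(u) + sigma*u = f
f = (-eps * (sp.diff(u, x, 2) + sp.diff(u, y, 2))
     + b1 * sp.diff(u, x) + b2 * sp.diff(u, y)
     + sigma * u)
print(sp.simplify(f))
```

Note that u vanishes on ∂Ω, so homogeneous Dirichlet boundary conditions are recovered automatically.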
Different choices of the parameters σ, ϵ and of the advective term β(x,y) will be selected. Since the pointwise values of the numerical solution u_h are not directly available, the following error quantities will be considered:
* H^1-seminorm error
e_H^1 := √(∑_E∈𝒯_h ‖∇(u-Π_k^∇ u_h)‖^2_0,E) ;
* L^2-norm error
e_L^2 := √(∑_E∈𝒯_h ‖u-Π^0_k u_h‖^2_0,E) .
We will consider two different families of mesh:
* a mesh composed by highly distorted quadrilaterals obtained perturbing a mesh composed of structured squares;
* a centroidal Voronoi tessellation of the unit square.
These two families are represented in Figure <ref>.
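A quick way to produce an approximate centroidal Voronoi tessellation of the unit square is Lloyd's algorithm; the sketch below is our own illustration (not the mesh generator actually used for the experiments) and relaxes random seeds on a sampling grid, after which scipy.spatial.Voronoi can extract the polygonal cells:

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(n_cells=256, iters=60, m=256, seed=0):
    """Approximate centroidal Voronoi seeds in (0,1)^2 via Lloyd iterations
    on an m x m sampling grid (a rough sketch, not a production mesher)."""
    rng = np.random.default_rng(seed)
    seeds = rng.random((n_cells, 2))
    g = (np.arange(m) + 0.5) / m
    P = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    for _ in range(iters):
        owner = cKDTree(seeds).query(P)[1]     # nearest seed for every grid point
        for k in range(n_cells):
            pts = P[owner == k]
            if len(pts):
                seeds[k] = pts.mean(axis=0)    # move each seed to its cell centroid
    return seeds  # pass to scipy.spatial.Voronoi to recover the polygonal cells
```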
Effects of the CIP stabilization. The first aspect we investigate is the benefit of inserting the CIP term in the variational formulation of the problem. We thus consider an advection-dominated regime and choose the parameters ϵ = 1e-9, σ = 0, along with the constant advection term
β(x, y) := (1, 0.5)^T .
We consider a centroidal Voronoi tessellation of the domain Ω into 256 polygons.
The degree of the method is set to k=1.
In Figure <ref> we observe that by inserting the bilinear form J_h(·,·) in the variational formulation, we are able to accurately approximate the analytic solution u(x,y) of the model problem.
If we omit the CIP term, we obtain (as expected) a definitely unsatisfactory numerical solution, which exhibits nonphysical oscillations all over the computational domain. We also remark that these instabilities reach peaks of the order of 10^2, despite the fact that the analytic solution satisfies ‖u‖_L^∞(Ω)=1.
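To make the role of the jump term concrete, the following sketch assembles the gradient-jump part of J_h(u_h,u_h) for piecewise linear elements on a triangulation. The restriction to P1 triangles is an assumption made here for simplicity: the virtual element case on polygons additionally involves the stabilization S_J, which this sketch omits, and γ is taken as a given scalar rather than the edgewise ‖β·n^e‖_∞.

```python
import numpy as np

def grad_p1(tri_pts):
    """Constant gradients of the three P1 hat functions on one triangle."""
    B = np.hstack([np.ones((3, 1)), tri_pts])   # rows [1, x_i, y_i]
    C = np.linalg.inv(B)                        # column i: coeffs (a, b, c) of hat i
    return C[1:, :].T                           # shape (3, 2): row i = grad(hat_i)

def cip_energy(pts, tris, uh, gamma=0.5):
    """J_h(u_h, u_h) ~ (gamma/2) * sum_e h_e^3 * ([grad u_h] . n_e)^2, P1 version."""
    tris = np.asarray(tris)
    edge2tri = {}
    for t, tri in enumerate(tris):
        for a, b in ((0, 1), (1, 2), (2, 0)):
            edge2tri.setdefault(tuple(sorted((int(tri[a]), int(tri[b])))), []).append(t)
    J = 0.0
    for (i, j), ts in edge2tri.items():
        if len(ts) != 2:
            continue                            # boundary edge: jump set to zero
        e = pts[j] - pts[i]
        he = np.linalg.norm(e)
        n = np.array([e[1], -e[0]]) / he        # unit normal of the edge
        g = [grad_p1(pts[tris[t]]).T @ uh[tris[t]] for t in ts]
        J += 0.5 * gamma * he**3 * float((g[0] - g[1]) @ n) ** 2
    return J
```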
Convergence analysis We now investigate the convergence of the numerical method by means of the above-introduced norms, and choosing different consistency order, i.e. k=1,2,3.
The convective term is
β(x, y) := (1, 0.5)^T .
We consider a diffusion-dominated case (ϵ = 1), and an advection-dominated one (ϵ = 1e-9).
Thus, we are in the framework of Section <ref>. Accordingly, we neglect the reaction term (hence, σ = 0) and the theoretical error bound of Proposition <ref> holds.
We compare the method with and without the jump term J_h(·,·). The results are obtained using the Voronoi mesh family.
In Figure <ref>, we observe that in the case ϵ = 1 the two methods behave in the same way. Instead, in the advection-dominated regime we observe that the optimal convergences are attained when inserting the stabilising jump term; without it, as expected, the method displays unsatisfactory results, especially for the low-order case.
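The observed orders in such plots can be extracted from pairs of mesh sizes and errors; a small utility we find convenient is sketched below (the error values in the example call are hypothetical placeholders, not the ones of the figure):

```python
import numpy as np

def observed_rates(h, err):
    """Observed orders  log(e_i / e_{i+1}) / log(h_i / h_{i+1})  between meshes."""
    h, err = np.asarray(h, float), np.asarray(err, float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# hypothetical error sequence for k = 1 in the advection-dominated regime;
# the cip-norm bound predicts a rate close to k + 1/2 from the h^{2s+1} term
print(observed_rates([1/8, 1/16, 1/32, 1/64], [2.1e-2, 7.6e-3, 2.7e-3, 9.5e-4]))
```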
Effect of the reaction term. We now consider an advection-dominated problem with a variable advection term not in _1(Ω).
In particular, we select
β(x, y) := (-2 π sin(π (x+2 y)), π sin(π (x+2 y)))^T .
We recall that for this case we are able to prove robust error bounds only with the aid of the reaction term, see Proposition <ref>.
The diffusive coefficient is set to ϵ = 1e-9. We consider two different families of mesh. The first one is made by the usual Voronoi polygons and the second one is composed of distorted squares.
We select two different values for the reaction term: σ = 1 and σ = 0.
Figure <ref> shows that there is no significant difference between the cases σ=1 and σ=0. As already mentioned, for this latter case Proposition <ref> does not apply, and no satisfactory theoretical analysis is available yet. However, the numerical outcomes seem to suggest that it could be possible to drop the reaction term even if the advection term is not globally linear. We also note that good convergence is achieved even when the mesh is composed of unstructured quadrilaterals.
§.§ CIP stabilizing form
We start observing that the bilinear forms
a(·,·) ,
(·,·)
and
c(·,·)
can be decomposed into local contributions
a(u, v) ∑_E ∈Ω_h a^E(u, v) ,
(u, v) ∑_E ∈Ω_h(u, v) ,
c(u, v) ∑_E ∈Ω_h c^E(u, v) .
It is also possible to consider a decomposition of the bilinear form 𝒩(·,·)
𝒩(u, v) ∑𝒩^E(u, v) ,
where the sum is taken over all the elements E ∈Ω_h such that ∂ E∩Γ has positive measure.
Following <cit.>, we introduce the local CIP-stabilization form
J̃_h^E H^q(E) × H^q(E)→, with q > 3/2,
defined as
J̃_h^E(u, v)
12∑_e ⊂∂ E∫_e γ h_e^2 [∇ u] · [∇ v] ds
=
12∑_e ⊂∂ E∫_e γ h_e^2 [∇ u ·^e] [∇ v ·^e] ds ,
where [∇ u] denotes the jump of ∇ u across e, γ is a parameter that on each edge is defined as
γ (e)·^e _L^∞(e) ,
and ^e is one of the two outward normal vectors to e.
If e is a boundary edge we set [∇ u ] = 0. Since we are in the setting ‖β‖_[L^∞(Ω)]^2 = 1, the parameter γ is uniformly bounded by one.
We define the local bilinear form
(·,·) for sufficiently smooth functions
as
(u, v)
ϵ a^E(u, v)
+ (u, v)
+ σ c^E(u, v)
+𝒩^E (u,v)
+ J̃_h^E(u, v)
for all u, v ∈ V.
The global bilinear form
(·, ·)
is obtained by summing the contributions from all the polygons
(u,v)
∑_E ∈Ω_h (u, v) .
Then, the problem we consider is the following:
{ find u ∈ V s.t.
(u, v) = ℱ̃(v) for all v ∈ V..
If
u ∈ H^2(Ω),
the trace of ∇ u is well defined on every edge e.
Then, for every polygon E and for all v ∈ V,
J̃_h^E(u,v) = 0 . ] |
http://arxiv.org/abs/2307.06053v1 | 20230712101418 | A hybrid Krasnosel'skiĭ-Schauder fixed point theorem for systems | [
"Gennaro Infante",
"Giovanni Mascali",
"Jorge Rodríguez-López"
] | math.CA | [
"math.CA",
"math.FA",
"Primary 47H10, secondary 45G15, 34B18"
] |
Gennaro Infante, Dipartimento di Matematica e Informatica, Università della
Calabria, 87036 Arcavacata di Rende, Cosenza, Italy
[email protected]
Giovanni Mascali, Dipartimento di Matematica e Informatica, Università della
Calabria, 87036 Arcavacata di Rende, Cosenza, Italy
[email protected]
Jorge Rodríguez–López, CITMAga & Departamento de Estatística, Análise Matemática e Optimización, Universidade de Santiago de Compostela, 15782, Facultade de Matemáticas, Campus Vida, Santiago, Spain
[email protected]
We provide new results regarding the localization of the solutions of nonlinear operator systems. We make use of a combination of Krasnosel'skiĭ cone compression-expansion type methodologies and Schauder-type ones. In particular we establish a localization of the solution of the system within the product of a conical shell and of a closed convex set. By iterating this procedure we prove the existence of multiple solutions. We illustrate our theoretical results by applying them to the solvability of systems of Hammerstein integral equations. In the case of two specific boundary value problems and with given nonlinearities, we are also able to obtain a numerical solution, consistent with our theoretical results.
MSC 2020: Primary 47H10; secondary 45G15, 34B18.
A hybrid Krasnosel'skiĭ-Schauder fixed point theorem for systems
================================================================
§ INTRODUCTION
The Krasnosel'skiĭ compression-expansion fixed point theorem in cones and the Schauder fixed point theorem in normed spaces are among the most well-known and widely employed topological fixed point theorems in the literature. Both results have been extensively applied in order to obtain existence and localization of solutions for a huge variety of boundary value problems and integral equations.
Our purpose is to present a novel fixed point theorem for operator systems of the form
{[ u_1=T_1(u_1,u_2),; u_2=T_2(u_1,u_2), ].
which combines, in a component-wise manner, the assumptions of the Krasnosel'skiĭ cone fixed point theorem with those in the Schauder theorem. Roughly speaking, we assume that for each fixed u_2, the operator T_1(·,u_2) satisfies compression-expansion type conditions in the line of Krasnosel'skiĭ's result and, moreover, for each fixed u_1, the operator T_2(u_1,·) fulfills the hypotheses of the Schauder fixed point theorem (see Theorem <ref> below). As a result, we obtain a solution (u̅_1,u̅_2) of the system (<ref>) with the first component, u̅_1, localized in a conical shell and its second one, u̅_2, in a certain closed convex set. By an iteration of this methodology we show that it is also possible to prove the existence of multiple solutions.
We emphasize that the localization of solutions of differential systems (or more in general of nonlinear operator equations) plays a key role in various settings, for example when modelling biological or medical phenomena <cit.>.
The proof relies on an application of the classical Leray-Schauder fixed point index, see <cit.>.
Our abstract theory was inspired by previous results for operator systems due to Precup <cit.>, where the author established the vector version of Krasnosel'skiĭ fixed point theorem, so-called because compression-expansion type conditions are imposed independently in each component of the system (<ref>). An alternative approach to this fixed point theorem based on fixed point index techniques has been recently presented in <cit.> and complementary results in this line can also be found in <cit.>.
Finally, to show that this hybrid fixed point theorem is a useful tool for the study of systems of nonlinear differential and integral equations, we apply it to systems of Hammerstein integral equations. As a consequence, we obtain existence and localization results for second-order systems subject to Sturm-Liouville type boundary conditions. We illustrate, in the case of two specific examples, the constants that occur in our theoretical results and, furthermore, we provide numerical approximations of the solutions that are consistent with our theory.
§ FIXED POINT THEOREM FOR OPERATOR SYSTEMS
In the sequel, we need the following notions. A closed convex subset K of a normed linear space (X,·) is a cone if λ u∈ K for every u∈ K and for all λ≥ 0, and K∩ (-K)={0}.
A cone K induces the partial order in X given by u≼ v if and only if v-u∈ K. Moreover, we shall say that u≺ v if v-u∈ K∖{0}. The cone K is called normal if there exists d>0 such that u≤ dv for all u,v∈ X with 0≼ u≼ v.
The following notation will be useful: for given r,R∈ℝ_+:=[0,∞), 0<r<R, we define
K_r,R:={u∈ K : r<‖u‖<R } and K̅_r,R:={u∈ K : r≤‖u‖≤ R }.
Now, we recall Krasnosel'skiĭ fixed point theorem <cit.> (see also <cit.>).
Let (X,‖·‖) be a Banach space, K a cone in X and r,R∈ℝ_+, 0<r<R.
Consider a compact map T:K̅_r,R→ K (i.e., T is continuous and T(K̅_r,R) is relatively compact) satisfying one of the following conditions:
(a) T(u)⊀ u if ‖u‖=r and T(u)⊁ u if ‖u‖=R;
(b) T(u)⊁ u if ‖u‖=r and T(u)⊀ u if ‖u‖=R.
Then T has at least a fixed point u∈ K with r≤‖u‖≤ R.
It is commonly said that the operator T is compressive if condition (a) in Krasnosel'skiĭ fixed point theorem holds and, expansive in case it satisfies (b).
It is also well-known that conditions (a) and (b) can be weakened as homotopy type conditions and the result can be stated for general open subsets of the cone (not necessarily balls). In this way, we have the homotopy version of Krasnosel'skiĭ theorem or Krasnosel'skiĭ-Benjamin theorem, see for instance <cit.> and cf. <cit.>.
Let (X,‖·‖) be a normed linear space, K a cone in X and U and V bounded and relatively open subsets of K such that 0∈ V⊂V⊂ U.
Assume that T:U∖ V→ K is a compact map and there exists h∈ K∖{0} such that one of the following conditions is satisfied:
(a) T(u)+μ h≠ u if u∈∂ V and μ> 0, and T(u)≠λ u if u∈∂ U and λ> 1;
(b) T(u)≠λ u if u∈∂ V and λ> 1, and T(u)+μ h≠ u if u∈∂ U and μ> 0.
Then T has at least a fixed point u∈U∖ V.
For the sake of completeness, let us recall also the Schauder fixed point theorem, see for example <cit.>.
Let D be a non-empty, closed and convex subset of a normed space (X,·) and T:D→ D be a compact map. Then T has at least one fixed point in D.
Let (X,‖·‖_X) and (Y,‖·‖_Y) be normed linear spaces and consider the cartesian product X× Y. When no confusion may occur, both norms ‖·‖_X and ‖·‖_Y will be simply denoted by ‖·‖.
Our aim is to present a fixed point theorem for systems with Krasnosel'skiĭ-type conditions in one component and Schauder-type conditions in the other one.
Let U and V be bounded and relatively open subsets of a cone K_1 of the normed space X such that 0∈ V⊂V⊂ U and D be a closed convex subset of the normed space Y.
Assume that T=(T_1,T_2):(U∖ V)× D→ K_1× D is a compact map and there exists h∈ K_1∖{0} such that either of the following conditions holds in (U∖ V)× D:
(a) T_1(u)+μ h≠ u_1 if u_1∈∂ V and μ> 0, and T_1(u)≠λ u_1 if u_1∈∂ U and λ>1; or
(b) T_1(u)≠λ u_1 if u_1∈∂ V and λ> 1, and T_1(u)+μ h≠ u_1 if u_1∈∂ U and μ> 0.
Then T has at least a fixed point u=(u_1,u_2)∈ K_1× D with u_1∈U∖ V.
Proof.
First of all, by the Dugundji extension Theorem (see <cit.>), there exists a continuous map N=(N_1,N_2):K_1× D→ K_1× D such that N(u)=T(u) for all u∈(U∖ V)× D and, moreover, N(K_1× D )⊂ co T( (U∖ V)× D ), where co A denotes the convex hull of A, i.e., the smallest convex set containing A. According to Mazur's theorem, the set N(K_1× D ) is relatively compact since it is a subset of the closed convex hull of the relatively compact set T( (U∖ V)× D ).
Since the set C:=K_1× D is a closed convex subset of the normed space X× Y and N is a compact map, the fixed point index of N with respect to C is well-defined for any (relative) open subset 𝒪⊂ C such that N is fixed point free on the boundary of 𝒪, see <cit.>. It will be denoted by i_C(N,𝒪).
Let us suppose that the operator T is fixed point free on the boundary of the set (U∖ V)× D (otherwise the proof is finished), which implies that so is the operator N.
Assume now that T satisfies condition (a). Let us consider the relative open sets
𝒰:=U× D and 𝒱:=V× D.
We shall prove that i_C(N,𝒰)=1 and i_C(N,𝒱)=0. First, observe that ∂ 𝒱={(u_1,u_2)∈ C : u_1∈∂ V} and thus condition (a) implies that
N_1(u)+μ h≠ u_1 if u∈∂ 𝒱 and μ> 0.
Hence, i_C(N,𝒱)=0.
On the other hand, let us fix ω∈ D and consider the homotopy H:[0,1]×𝒰→ C defined as
H(t,u)=(t N_1(u),t N_2(u)+(1-t) ω).
Note that u≠ H(t,u) for all u∈∂ 𝒰 and all t∈[0,1]. Otherwise, since T is fixed point free on ∂ 𝒰, there would exist u=(u_1,u_2)∈ C with u_1∈∂ U and t∈(0,1) such that u_1=t N_1(u), that is, (1/t)u_1=T_1(u), a contradiction with condition (a). Then, by the homotopy property of the fixed point index, we obtain that
i_C(N,𝒰)=i_C(H(1,·),𝒰)=i_C(H(0,·),𝒰)=1,
since (0,ω)∈𝒰.
Finally, by the additivity property of the index,
i_C(N,𝒰∖𝒱)=i_C(N,𝒰)-i_C(N,𝒱)=1,
and, by the definition of N we have that N=T on 𝒰∖𝒱, so we deduce that T has at least one fixed point in 𝒰∖𝒱.
The reasoning is analogous if condition (b) is satisfied. In that case we obtain i_C(N,𝒰)=0 and i_C(N,𝒱)=1. Therefore, it follows from the additivity property of the index that i_C(N,𝒰∖𝒱)=-1 and thus T has at least one fixed point in 𝒰∖𝒱.
From the proof of Theorem <ref>, one has that if T has no fixed points on ∂ (U∖V)× D, then i_C(T,(U∖V)× D )=1 provided that T_1 is a compressive operator (i.e., if T satisfies condition (a)) and that i_C(T,(U∖V)× D )=-1 if T is expansive (i.e., if condition (b) is fulfilled). These computations of the fixed point index are useful in order to obtain multiple fixed points of the operator T.
Clearly, in the particular case in which U and V are the intersection of two open balls with the cone K_1, we obtain the following result.
Let r,R∈ℝ_+ be positive numbers with r<R and D be a closed convex subset of the normed space Y.
Assume that T=(T_1,T_2):(K_1)_r,R× D→ K_1× D is a compact map and there exists h∈ K_1∖{0} such that either of the following conditions holds in (K_1)_r,R× D:
(a) T_1(u)+μ h≠ u_1 if ‖u_1‖=r and μ> 0, and T_1(u)≠λ u_1 if ‖u_1‖=R and λ> 1; or
(b) T_1(u)≠λ u_1 if ‖u_1‖=r and λ> 1, and T_1(u)+μ h≠ u_1 if ‖u_1‖=R and μ> 0.
Then T has at least a fixed point u=(u_1,u_2)∈ K_1× D with r≤‖u_1‖≤ R.
As a consequence of Theorem <ref>, we obtain the following criteria when the closed convex set D is an ordered interval [α,β] with the order, ≼, given by a cone K_2 in the normed space Y, that is,
[α,β]:={y∈ Y :α≼ y≼β}.
Let U and V be bounded and relatively open subsets of a cone K_1 of the normed space X such that 0∈ V⊂V⊂ U and α,β∈ Y such that α≼β.
Assume that T=(T_1,T_2):(U∖ V)×[α,β]→ K_1×[α,β] is a compact map and there exists h∈ K_1∖{0} such that either of the following conditions holds in (U∖ V)×[α,β]:
(a) T_1(u)+μ h≠ u_1 if u_1∈∂ V and μ> 0, and T_1(u)≠λ u_1 if u_1∈∂ U and λ> 1; or
(b) T_1(u)≠λ u_1 if u_1∈∂ V and λ> 1, and T_1(u)+μ h≠ u_1 if u_1∈∂ U and μ> 0.
Then T has at least a fixed point u=(u_1,u_2)∈ K_1× [α,β] with u_1∈U∖ V.
If the cone K_2 is normal, then the interval [α,β] is bounded in Y and thus (U∖ V)×[α,β] is a bounded subset of X× Y. Hence, the conclusion of Corollary <ref> remains valid if we assume that T=(T_1,T_2):K_1×[α,β]→ K_1×[α,β] is a completely continuous map (i.e., T is continuous and maps bounded sets into relatively compact ones) satisfying either condition (a) or (b).
If the operator T_2 is increasing with respect to the second variable, the assumption
T_2((K_1)_r,R×[α,β] )⊂ [α,β]
can be clearly deduced from the following one: for all u_1∈ (K_1)_r,R, the order relations α≼ T_2(u_1,α) and T_2(u_1,β)≼β hold.
Let r,R∈ℝ_+ be positive numbers with r<R and α,β∈ Y such that α≼β.
Assume that T=(T_1,T_2):(K_1)_r,R×[α,β]→ K_1× Y is a compact map and satisfies the following conditions:
* One of the following two conditions holds in (K_1)_r,R×[α,β]:
* T_1(u)⊀ u_1 if ‖u_1‖=r and T_1(u)⊁ u_1 if ‖u_1‖=R; or
* T_1(u)⊁ u_1 if ‖u_1‖=r and T_1(u)⊀ u_1 if ‖u_1‖=R.
* for all u_1∈ (K_1)_r,R, the map T_2(u_1,·) is increasing, α≼ T_2(u_1,α) and T_2(u_1,β)≼ β.
Then T has at least a fixed point u=(u_1,u_2)∈ K_1× Y with r≤‖u_1‖≤ R and α≼ u_2≼β.
Obviously, two distinct fixed points of the operator T can be obtained if the hypotheses of Theorem <ref> hold in two disjoint domains of the form (U_1∖ V_1)× D_1 and (U_2∖ V_2)× D_2. In case of operator systems (of two equations), where by a fixed point we mean a pair (u_1, u_2), two fixed points are different if they differ at least on one of their components, not necessarily on both. Hence, we can look for different fixed points in two disjoint regions of the form (U_j∖ V_j)× D, j=1,2, in such a way that the second component of both fixed points will be localized in the same set D.
Furthermore, if we can guarantee that these fixed points are not located on the boundary of the sets (U_j∖ V_j)× D, then the computation of the fixed point index ensures the existence of a third fixed point.
Let U_1, U_2, V_1 and V_2 be bounded and relatively open subsets of a cone K_1 of the normed space X such that 0∈ V_1, V_j⊂ U_j (j=1,2), U_1⊂ V_2 and D be a closed convex subset of the normed space Y.
Assume that T=(T_1,T_2):(U_2∖ V_1)× D→ K_1× D is a compact map and for each j∈{1,2} there exist h_j∈ K_1∖{0} such that the following conditions hold:
(i) T_1(u)+μ h_j≠ u_1 if u_1∈∂ V_j, u_2∈ D and μ≥ 0;
(ii) T_1(u)≠λ u_1 if u_1∈∂ U_j, u_2∈ D and λ≥ 1.
Then T has at least three distinct fixed points (u_1,u_2),(v_1,v_2),(w_1,w_2)∈ K_1× D with u_1∈ U_1∖V_1, v_1∈ U_2∖V_2 and w_1∈ V_2∖U_1.
Proof. By the computations of the fixed point index, we have
i_C(T,(U_1∖V_1)× D )=i_C(T,(U_2∖V_2)× D )=1, i_C(T,(V_2∖U_1)× D )=-1.
Therefore, the conclusion follows from the existence property of the fixed point index.
§ APPLICATION TO HAMMERSTEIN SYSTEMS
Consider the following system of Hammerstein type equations
[ u(t)=∫_0^1 k_1(t,s)f(s,u(s),v(s)) ds:=T_1(u,v)(t),; v(t)=∫_0^1 k_2(t,s)g(s,u(s),v(s)) ds:=T_2(u,v)(t), ]
where I:=[0,1] and the following assumptions are satisfied:
(H_1) the kernels k_1,k_2:I^2→ℝ_+ are continuous;
(H_2) there exist an interval [a,b]⊂ I, a function Φ_1:I→ℝ_+, Φ_1∈ L^1(I),
and a constant c_1∈ (0,1] satisfying
[ k_1(t,s) ≤Φ_1(s) for all t,s∈ I,; c_1 Φ_1(s) ≤ k_1(t,s) for all t∈[a,b], s∈ I; ]
(H_3) there exist a function Φ_2:I→ℝ_+, Φ_2∈ L^1(I),
and a constant c_2∈ (0,1] such that
c_2 Φ_2(s)≤ k_2(t,s)≤Φ_2(s) for all t,s∈ I;
(H_4) the functions f,g:I×ℝ_+^2→ℝ_+ are continuous.
In order to apply the theory in Section <ref>, let us consider the Banach space of continuous functions X=Y=𝒞(I) endowed with the usual maximum norm ‖w‖_∞ = max_t∈ I|w(t)| and the cones
K_1 ={w∈𝒞(I) : w(t)≥ 0 for all t∈ I and min_t∈[a,b]w(t)≥ c_1‖w‖_∞},
K_2 ={w∈𝒞(I) : min_t∈[0,1]w(t)≥ c_2‖w‖_∞}.
Under assumptions (H_1)–(H_4), it can be proven by means of standard arguments (see, for instance, <cit.>) that T:=(T_1,T_2) maps the cone K:=K_1× K_2 into itself and it is completely continuous.
For ρ>0, we shall make use of the following open set
V_ρ={w∈ K_1 : min_t∈[a,b]w(t)<ρ}.
Observe that (K_1)_ρ⊂ V_ρ⊂ (K_1)_ρ/c_1, where (K_1)_ρ:={w∈ K_1 : ‖w‖_∞<ρ}. The sets of type V_ρ were introduced by Lan in <cit.> and later employed by several authors, see <cit.> and the references therein.
We are in a position to establish an existence result for the system of Hammerstein type equations (<ref>) as a consequence of Theorem <ref>.
Assume that conditions (H_1)-(H_4) are fulfilled. Moreover, suppose that there exist positive numbers ρ_1,ρ_2>0, with ρ_1/c_1<ρ_2 (resp., ρ_2<ρ_1), and 0<α<β such that
(H_5) there exists a continuous function f:I→ℝ_+ such that
f(t)≤ f(t,u,v) on [a,b]× [ρ_1,ρ_1/c_1]× [α,β]
and
min_t∈[a,b]∫_a^bk_1(t,s)f(s) ds≥ρ_1;
(H_6) there exists a continuous function f:I→ℝ_+ such that
f(t,u,v)≤f(t) on [0,1]×[0,ρ_2]×[α,β]
and
max_t∈ [0,1]∫_0^1k_1(t,s)f(s) ds≤ρ_2;
(H_7) there exists a continuous function g:I→ℝ_+ such that
g(t)≤ g(t,u,v) on [a,b]×[ρ_1,ρ_2]×[α,β] (resp., on [a,b]×[c_1 ρ_2,ρ_1/c_1]×[α,β])
and
min_t∈[0,1]∫_a^bk_2(t,s)g(s) ds≥α;
(H_8) there exists a continuous function g:I→ℝ_+ such that
g(t,u,v)≤g(t) on [0,1]×[0,ρ_2]×[α,β] (resp., on [0,1]×[0,ρ_1/c_1]×[α,β])
and
max_t∈[0,1]∫_0^1k_2(t,s)g(s) ds≤β.
Then the system (<ref>) has at least one positive solution (u,v)∈ K such that ρ_1≤min_t∈[a,b]u(t), ‖u‖_∞≤ρ_2 (resp., ρ_2≤‖u‖_∞, min_t∈[a,b]u(t)≤ρ_1) and α≤ v(t)≤β for all t∈ I.
Proof. Suppose that ρ_1/c_1<ρ_2 and consider the integral operator T=(T_1,T_2):((K_1)_ρ_2∖ V_ρ_1)×[α̂,β̂]→ K defined as above, where α̂ and β̂ denote the constant functions α̂(t)=α and β̂(t)=β for all t∈ I. In the sequel, with abuse of notation, α̂ and β̂ will be simply denoted as α and β, respectively. Let us apply Corollary <ref> in order to obtain a positive solution for the system (<ref>).
First, let us check that for every (u,v)∈((K_1)_ρ_2∖ V_ρ_1)×[α,β] the following conditions are satisfied:
1) T_1(u,v)+μ 1̂≠ u if u∈∂ V_ρ_1 and μ>0 (where 1̂ denotes the constant function equal to one);
2) T_1(u,v)≠λ u if u_∞=ρ_2 and λ> 1.
To verify that 1) holds, assume on the contrary that there exist (u,v) belonging to ((K_1)_ρ_2∖ V_ρ_1)×[α,β] with min_t∈[a,b]u(t)=ρ_1 and μ>0 such that T_1(u,v)+μ 1̂= u. Then we have
u(t)=∫_0^1 k_1(t,s)f(s,u(s),v(s)) ds+μ.
Since u∈ K_1 with min_t∈[a,b]u(t)=ρ_1, it follows from the definition of the cone K_1 that ρ_1≤ u(t)≤ρ_1/c_1 for all t∈[a,b]. Hence, for any t∈[a,b], we deduce from hypothesis (H_5) that
u(t)> ∫_a^b k_1(t,s)f(s,u(s),v(s)) ds≥∫_a^b k_1(t,s)f(s) ds≥ρ_1,
a contradiction.
Now, let us show that ‖T_1(u,v)‖_∞≤ρ_2 for all (u,v)∈ K_1×[α,β] with ‖u‖_∞=ρ_2, which clearly implies property 2). Note that, for such (u,v), we have 0≤ u(t)≤ρ_2 and α≤ v(t)≤β for all t∈ I. Thus, we get, for t∈ I,
T_1(u,v)(t)=∫_0^1 k_1(t,s)f(s,u(s),v(s)) ds≤∫_0^1 k_1(t,s)f(s) ds.
By condition (H_6), we deduce ‖T_1(u,v)‖_∞≤max_t∈[0,1]∫_0^1 k_1(t,s)f(s) ds≤ρ_2.
Finally, it remains to show that T_2(((K_1)_ρ_2∖ V_ρ_1)×[α,β] )⊂[α,β]. Take (u,v)∈((K_1)_ρ_2∖ V_ρ_1)×[α,β] and let us check that α≤ T_2(u,v)(t)≤β for all t∈ I. On the one hand, being k_2 and g non-negative functions, we have that for t∈[0,1],
T_2(u,v)(t)=∫_0^1 k_2(t,s)g(s,u(s),v(s)) ds≥∫_a^b k_2(t,s)g(s,u(s),v(s)) ds
and thus, since ρ_1≤ u(t)≤ρ_2 for all t∈[a,b], condition (H_7) implies that
T_2(u,v)(t)≥min_t∈[0,1]∫_a^b k_2(t,s)g(s) ds≥α.
On the other hand, by condition (H_8), we have that for t∈[0,1],
T_2(u,v)(t)≤max_t∈[0,1]∫_0^1 k_2(t,s)g(s,u(s),v(s)) ds≤max_t∈[0,1]∫_0^1 k_2(t,s)g(s) ds≤β.
In conclusion, Corollary <ref> ensures that the operator T has at least one fixed point localized in the set ((K_1)_ρ_2∖V_ρ_1)×[α,β]. Finally, note that if ρ_2<ρ_1, the proof is analogous, but now considering the operator T defined in the set ( V_ρ_1∖ (K_1)_ρ_2)×[α,β].
As an illustrative example, we deal with the existence of positive solutions for the following system of second-order equations
[ u”(t)+f(t,u(t),v(t))=0, t∈[0,1],; v”(t)+g(t,u(t),v(t))=0, t∈[0,1], ]
coupled with the following boundary conditions
u(0)=u(1)=0=v'(0)=v(1)+v'(1).
The system (<ref>)–(<ref>) can be studied by means of a system of integral equations of the form (<ref>), where the kernels k_1 and k_2 are the corresponding Green's functions, which are given by
k_1(t,s)={[ (1-t) s, s≤ t,; t (1-s), s>t, ].
and
k_2(t,s)={[ 2-t, s≤ t,; 2-s, s>t. ].
It is well–known (see, for example, <cit.>) that the Green's function k_1 satisfies condition (H_2) if we take
Φ_1(s)=s(1-s), s∈ I, [a,b]=[1/4,3/4], c_1=1/4.
On the other hand, the Green's function k_2 was studied in <cit.> (see also <cit.>) and it was shown that it satisfies condition (H_3) with
Φ_2(s)=2-s, s∈ I, c_2=1/2.
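These kernel bounds are elementary to verify; as a sanity check, the following sketch tests (H_2) and (H_3) on a grid (the function and variable names are ours):

```python
import numpy as np

def k1(t, s): return np.where(s <= t, (1 - t) * s, t * (1 - s))
def k2(t, s): return np.where(s <= t, 2 - t, 2 - s)

t = np.linspace(0, 1, 401); s = np.linspace(0, 1, 401)
T, S = np.meshgrid(t, s, indexing='ij')
Phi1, Phi2 = S * (1 - S), 2 - S
ab = (T >= 0.25) & (T <= 0.75)                              # the interval [a,b]

assert np.all(k1(T, S) <= Phi1 + 1e-12)                     # (H2), upper bound
assert np.all(k1(T, S)[ab] >= 0.25 * Phi1[ab] - 1e-12)      # (H2) on [a,b], c1 = 1/4
assert np.all((0.5 * Phi2 - 1e-12 <= k2(T, S)) & (k2(T, S) <= Phi2 + 1e-12))  # (H3)
print("kernel bounds verified on a grid")
```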
Consider the system (<ref>)–(<ref>) with the continuous nonlinearities
f(t,u,v)=t u^2(1+sin^2 v), g(t,u,v)=t(2+sin u)(6+cos v).
Take ρ_1=64, ρ_2=4, α=1 and β=14, then it is easy to check that hypotheses (H_5)-(H_8) hold with f(t)=1024, f(t)=32, g(t)=5 t and g(t)=21 t, t∈ I.
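The four inequalities in (H_5)-(H_8) reduce to one-dimensional integrals of the Green's functions, which can be checked numerically; a sketch of the check (the near-equalities at 64, 4 and 14 indicate that these constants are sharp):

```python
import numpy as np
from scipy.integrate import quad

k1 = lambda t, s: (1 - t) * s if s <= t else t * (1 - s)
k2 = lambda t, s: (2 - t) if s <= t else (2 - s)
tt = np.linspace(0, 1, 201)

# (H5): min over t in [1/4,3/4] of int_{1/4}^{3/4} k1(t,s)*1024 ds >= rho1 = 64
print(min(quad(lambda s: 1024 * k1(t, s), 0.25, 0.75)[0]
          for t in tt if 0.25 <= t <= 0.75))
# (H6): max over t in [0,1] of int_0^1 k1(t,s)*32 ds <= rho2 = 4
print(max(quad(lambda s: 32 * k1(t, s), 0, 1)[0] for t in tt))
# (H7): min over t in [0,1] of int_{1/4}^{3/4} k2(t,s)*5s ds >= alpha = 1
print(min(quad(lambda s: 5 * s * k2(t, s), 0.25, 0.75)[0] for t in tt))
# (H8): max over t in [0,1] of int_0^1 k2(t,s)*21s ds <= beta = 14
print(max(quad(lambda s: 21 * s * k2(t, s), 0, 1)[0] for t in tt))
```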
Therefore, Theorem <ref> guarantees that the system (<ref>)–(<ref>) has a positive solution. We emphasize that the theory developed in <cit.> cannot be applied to this example since the function g(t,·,·) is not monotone in any rectangle of the form [0,B_1]×[0,B_2].
We computed a numerical approximation of a solution (u,v) of the system with the aid of the MATLAB function bvp4c; the result is illustrated in Figure <ref>. We started with the initial guess
u_0(t)≈ 50.667 t -99.333 t^2 + 85.333 t^3 -42.667 t^4, v_0 (t)≈ 4 -0.2 t -1.6 t^2,
based on the properties of the desired solution.
The numerically obtained solution is consistent with the theoretical result.
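For readers without MATLAB, an analogous computation can be set up with scipy's solve_bvp applied to the first-order reformulation of (<ref>)-(<ref>). The sketch below is our own translation, not the script actually used, and starts from the same initial guess:

```python
import numpy as np
from scipy.integrate import solve_bvp

# first-order form of (u'' = -f, v'' = -g) with y = (u, u', v, v')
def rhs(t, y):
    u, up, v, vp = y
    f = t * u**2 * (1 + np.sin(v)**2)
    g = t * (2 + np.sin(u)) * (6 + np.cos(v))
    return np.vstack([up, -f, vp, -g])

def bc(ya, yb):
    # u(0) = u(1) = 0,  v'(0) = 0,  v(1) + v'(1) = 0
    return np.array([ya[0], yb[0], ya[3], yb[2] + yb[3]])

t = np.linspace(0, 1, 101)
y0 = np.zeros((4, t.size))
y0[0] = 50.667*t - 99.333*t**2 + 85.333*t**3 - 42.667*t**4   # guess for u
y0[2] = 4 - 0.2*t - 1.6*t**2                                  # guess for v
sol = solve_bvp(rhs, bc, t, y0, tol=1e-8)
print(sol.status, sol.y[0].max(), sol.y[2].min(), sol.y[2].max())
```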
It is worth mentioning that Theorem <ref> allows us to obtain other kind of localization of the solutions of the system (<ref>) depending on the closed convex set D we choose. Of course, this will be related to the behavior of the nonlinearity g. In order to illustrate this fact, let us choose D as a ball in the following existence result concerning system (<ref>).
Under hypotheses (H_1)-(H_3), let us assume that the functions f:I×ℝ_+×ℝ→ℝ_+ and g:I×ℝ_+×ℝ→ℝ are continuous and there exist ρ_1,ρ_2>0, with ρ_1/c_1<ρ_2 (resp., ρ_2<ρ_1), and R_2>0 such that the following conditions are satisfied:
(H_5^*) there exists a continuous function f_*:I→ℝ_+ such that
f_*(t)≤ f(t,u,v) on [a,b]× [ρ_1,ρ_1/c_1]× [-R_2,R_2]
and
min_t∈[a,b]∫_a^bk_1(t,s)f_*(s) ds≥ρ_1;
(H_6^*) there exists a continuous function f^*:I→ℝ_+ such that
f(t,u,v)≤f^*(t) on [0,1]×[0,ρ_2]×[-R_2,R_2]
and
max_t∈ [0,1]∫_0^1k_1(t,s)f^*(s) ds≤ρ_2;
(H_7^*) there exists a continuous function g^*:I→ℝ_+ such that
|g(t,u,v) |≤ g^*(t) on [0,1]×[0,R_1]×[-R_2,R_2] (where R_1:=max{ρ_1/c_1,ρ_2 })
and
max_t∈ [0,1]∫_0^1k_2(t,s)g^*(s) ds≤ R_2.
Then the system (<ref>) has at least one non-trivial solution (u,v)∈ K_1× X such that ρ_1≤min_t∈[a,b]u(t), ‖u‖_∞≤ρ_2 (resp., ρ_2≤‖u‖_∞, min_t∈[a,b]u(t)≤ρ_1) and ‖v‖_∞≤ R_2.
Proof. It follows as an application of Theorem <ref> to the operator T=(T_1,T_2) defined above and the choice of the set D as the closed ball of radius R_2, that is,
D={w∈𝒞(I) : ‖w‖_∞≤ R_2 }.
The details are similar to those in the proof of Theorem <ref>.
Consider the system (<ref>)–(<ref>) with the nonlinearities
f(t,u,v)=t u^2(1+sin^2 v), g(t,u,v)=t e^(v^2-2) sin u.
A simple computation shows that max_t∈ I∫_0^1k_2(t,s) ds=3/2 in this case, so if we take ρ_1=64, ρ_2=4 and R_2=1, then it is easy to check that hypotheses (H_5^*)-(H_7^*) are satisfied.
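Again the relevant constants are one-dimensional integrals; a sketch of the check, using the crude bound |g(t,u,v)| ≤ e^{-1} on the relevant set as our choice of g^*:

```python
import numpy as np
from scipy.integrate import quad

k2 = lambda t, s: (2 - t) if s <= t else (2 - s)
tt = np.linspace(0, 1, 201)

# max over t of int_0^1 k2(t,s) ds: attained at t = 0 with value 3/2
print(max(quad(lambda s: k2(t, s), 0, 1)[0] for t in tt))        # -> 1.5
# (H7*): with u in [0, R1] and |v| <= R2 = 1,  |g(t,u,v)| <= 1/e =: g*(t), and
#        max over t of int_0^1 k2(t,s) g*(s) ds = (3/2)/e ~ 0.55 <= R2 = 1
print(max(quad(lambda s: k2(t, s) / np.e, 0, 1)[0] for t in tt))
```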
Therefore, Theorem <ref> guarantees that the system (<ref>)–(<ref>) has at least one solution (u,v) such that u is positive (nontrivial) and |v(t)|≤ 1 for all t∈ I. Since g is a sign-changing nonlinearity, v is not necessarily positive, but clearly it cannot be the identically zero function.
Also in this case, we found a numerical approximation of a solution (u,v) of the system with the aid of the same MATLAB functions; the results are reported in Figure <ref>. As the initial guess we used the same u_0 as in the previous example, and as v_0 we chose a function proportional, by a factor 10^3, to the solution
of the second differential equation with g=g(t,u_0,v). After that, we iterated the solution starting from the previous one, until a relative tolerance not greater than 10^-7 was reached.
The obtained solution is consistent with the theoretical result.
§ ACKNOWLEDGEMENTS
G. Infante is a member of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM) and of the UMI Group TAA “Approximation Theory and Applications”. G. Mascali is a member of the Gruppo Nazionale per la Fisica Matematica (GNFM) of INdAM. G. Infante and G. Mascali are supported by the project POS-CAL.HUB.RIA.
J. Rodríguez–López has been partially supported by the VIS Program of the University of Calabria, and by Ministerio de Ciencia y Tecnología (Spain), AIE and Feder, grant PID2020-113275GB-I00.
This paper was written during a visit of J. Rodríguez-López to the Dipartimento di Matematica e Informatica of the Università della Calabria. J. Rodríguez-López is grateful to the people of the aforementioned Dipartimento for their kind and warm hospitality.
99
amann H. Amann, Fixed point equations and nonlinear eigenvalue problems in ordered Banach spaces, SIAM Rev., 18 4 (1976), 620–709.
CCI14 A. Cabada, J. Á. Cid and G. Infante, A positive fixed point theorem with applications to systems of Hammerstein integral equations, Bound. Value Probl. (2014) 2014:254.
CPR T. Cardinali, R. Precup and P. Rubbioni, Heterogeneous Vectorial Fixed Point Theorems. Mediterr. J. Math., 14 83 (2017), 1–12.
dug J. Dugundji, An extension of Tietze's theorem, Pacific J. Math., 1 (1951), 353–367.
fig_tojo R. Figueroa and F. A. F. Tojo, Fixed points of Hammerstein-type equations on general cones, Fixed Point Theory, 19 2 (2018), 571–586.
GraDug A. Granas and J. Dugundji, Fixed Point Theory, Springer, New York, 2003.
guolak D. Guo and V. Lakshmikantham,
Nonlinear problems in abstract cones, Academic Press, Boston,
1988.
INOP V. Ilea, A. Novac, D. Otrocol and R. Precup, Nonlinear alternatives of hybrid type for nonself vector-valued maps and application, Fixed Point Theory, 24 1 (2023), 221–232.
Inf G. Infante, A short course on positive solutions of systems of ODEs via fixed point index, Lecture Notes in Nonlinear Analysis (LNNA), 16 (2017), 93–140.
ima G. Infante and M. Maciejewski, Multiple positive solutions of parabolic systems with nonlinear, nonlocal initial conditions, J. London Math. Soc., 94 (2016), 859–882.
imap G. Infante, M. Maciejewski and R. Precup, A topological approach to the existence
and multiplicity of positive solutions of (p, q)-Laplacian systems, Dynamics of PDE, 12 3 (2015), 193–215.
kras M. A. Krasnosel'skiĭ, Positive Solutions of Operator Equations. Noordhoff, Groningen, 1964.
Lan K. Q. Lan, Multiple positive solutions of semilinear differential equations with singularities, J. London Math. Soc., 63 (2001), 690–704.
Lan1 K. Q. Lan, Coexistence fixed point theorems in product Banach spaces and applications, Math. Meth. Appl. Sci., 44 (2021), 3960–3984.
Murray
J. D. Murray, Mathematical biology. II: Spatial models and biomedical applications, Third edition. Interdisciplinary Applied Mathematics, 18. Springer–Verlag, New York, (2003).
PrecupFPT R. Precup, A vector version of Krasnosel'skiĭ's fixed point theorem in cones and positive periodic solutions of nonlinear systems, J. Fixed Point Theory Appl., 2 (2007), 141–151.
PrecupSDC R. Precup, Componentwise compression-expansion conditions for systems of nonlinear operator equations and applications, Mathematical models in engineering, biology and medicine, 284–293, AIP Conf. Proc., 1124, Amer. Inst. Phys., Melville, NY (2009).
PreRod R. Precup and J. Rodríguez-López, Multiplicity results for operator systems via fixed point index, Results Math., 74:15 (2019), 1–14.
JRL J. Rodríguez-López, A fixed point index approach to Krasnosel'skiĭ-Precup fixed point theorem in cones and applications, Nonlinear Anal., 226 (2023), No. 113138, 1–19.
Webb J. R. L. Webb, A class of positive linear operators and applications to nonlinear boundary value
problems, Topol. Methods Nonlinear Anal., 39 (2012), 221–242.
|
http://arxiv.org/abs/2307.04407v1 | 20230710081327 | Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution | [
"Hossein Rastgoftar"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Deep and Decentralized Multi-Agent Coverage of a Target with Unknown Distribution
Hossein Rastgoftar
H. Rastgoftar is with the Department
of Aerospace and Mechanical Engineering, University of Arizona, Tucson,
AZ, 85721 USA e-mail: [email protected].
August 12, 2023
=====================================================================================================================================================================================
This paper proposes a new architecture for multi-agent systems to cover an unknowingly distributed target fast, safely, and in a decentralized fashion. The inter-agent communication is organized by a directed graph with fixed topology, and we model agent coordination as a decentralized leader-follower problem with time-varying communication weights. Given this problem setting, we first present a method for converting the communication graph into a neural network, where an agent can be represented by a unique node of the communication graph but by multiple neurons of the corresponding neural network. We then apply a mass-centric strategy to train the time-varying communication weights of the neural network in a decentralized fashion, which in turn implies that the observation zone of every follower agent is independently assigned by the follower based on the positions of its in-neighbors. By training the neural network, we can ensure safe and decentralized multi-agent coordination for coverage control. Although the target is unknown to the agent team, we provide a proof of convergence for the proposed multi-agent coverage method.
The functionality of the proposed method will be validated by a large-scale multi-copter team covering distributed targets on the ground.
Large-Scale Coordination, Multi-Agent Coverage, and Decentralized Control.
§ INTRODUCTION
Multi-agent coverage has received a lot of attention from the control community over recent years.
It has many applications, such as wildfire management <cit.>, border security <cit.>, agriculture <cit.>, and wildlife monitoring <cit.>. A variety of coverage approaches have been proposed by researchers; these are reviewed in Section <ref>.
§.§ Related Work
Sweep <cit.> and Spiral <cit.> are two available methods for single-vehicle coverage path planning, while the Vehicle Routing Problem <cit.> is widely used for multi-agent coverage path planning. Diffusion-based multi-agent coverage convergence and stability are proposed in Ref. <cit.>. Decentralized multi-agent coverage using local density feedback is achieved by applying a discrete-time mean-field model in Ref. <cit.>. Multi-agent coverage conducted by unicycle robots guided by a single leader is investigated in Ref. <cit.>, where the authors propose to decouple coordination and coverage modes. Adaptive decentralized multi-agent coverage is studied in <cit.>. Ref. <cit.> offers a multiscale analysis of multi-agent coverage control that provides convergence properties in continuous time. Human-centered active sensing of wildfire by unmanned aerial vehicles is studied in Ref. <cit.>. Ref. <cit.> suggests applying the k-means algorithm for planning of zone coverage by multiple agents. Reinforcement-learning- (RL-) based multi-agent coverage control is investigated in Refs. <cit.>. The authors in <cit.> used a Voronoi-based approach for covering a distributed target. Voronoi-based coverage in the presence of obstacles and failures is presented as a leader-follower problem in Ref. <cit.>. Ref. <cit.> experimentally evaluates the functionality of Voronoi-based and other multi-agent coverage approaches in an urban environment.
§.§ Contributions
This paper develops a method for decentralized multi-agent coverage of a distributed target with an unknown distribution. We propose to define the inter-agent communications by a deep neural network, called the coverage neural network, with time-varying weights that are obtained such that coverage convergence is ensured.
To this end, the paper establishes specific rules for structuring the coverage neural network and proposes a mass-centric approach to train the network weights, at any time t, that specify the inter-agent communication among the agent team. Although the target is unknown to the agent team, we prove that the weights ultimately converge to the unique values that quantify the target distribution in the motion space. The functionality of the proposed coverage method will be validated by simulating aerial coverage conducted by a team of quadcopter agents.
Compared to the existing work, this paper offers the following novel contributions:
* The proposed multi-agent coverage approach learns the inter-agent communication weights in a forward manner, as opposed to existing neural learning approaches, where weights are trained by combining forward and backward iterations. More specifically, the weights input to a hidden layer are assigned based on (i) the outputs of the previous layer and (ii) target data information independently measured by observing the neighboring environment. We provide a proof of convergence for the proposed learning approach.
* The paper proposes a method for converting inter-agent communication graph into a neural network that will be used for organizing the agents, structuring the inter-agent communications, and partitioning the coverage domain.
* The paper develops a method for decentralized partitioning and coverage of an unknowingly distributed target. This method is indeed more computationally-efficient than the the available Voronoi-based partitioning methods that require all agents' positions to determine the search subdomain allocated to each individual agent.
§.§ Outline
The remainder of the paper is organized as follows: The Problem Statement and Formulation are given in Section <ref>. The methodology is presented in Section <ref>. Assuming every agent is a quadcopter, the multi-agent network dynamics are obtained in Section <ref>, followed by Simulation Results in Section <ref> and the Conclusion in Section <ref>.
§ PROBLEM STATEMENT AND FORMULATION
We consider a team of N agents identified by set 𝒱={1,⋯,N} and classify them into the following three groups:
* “boundary” agents identified by 𝒱_B={1,⋯,N_B} are distributed along the boundary of the agent team configuration;
* a single “core” agent identified by singleton 𝒱_C={N_B+1} is an interior agent whose global position represents the global position of the agent configuration; and
* follower agents defined by 𝒱_I={N_B+2,⋯,N} are all located inside the agent team configuration.
Note that 𝒱_B, 𝒱_C, and 𝒱_I are disjoint subsets of 𝒱, i.e. 𝒱=𝒱_B⋃𝒱_C⋃𝒱_I.
Inter-agent communication among the agents is defined by graph 𝒢(𝒱,ℰ), where ℰ⊂𝒱×𝒱 defines the edges of graph 𝒢 and each edge represents a unique communication link (if (j,i)∈ℰ, then i accesses the position of j∈𝒱).
We define
𝒩_i={j∈𝒱:(j,i)∈ℰ}, ∀ i∈𝒱.
as the set of in-neighbors of every agent i∈𝒱.
§.§ Neural Network Representation of Inter-Agent Communication
Graph 𝒢 is defined such that it can be represented by a deep neural network with M+1 layers, where we use set ℳ={0,⋯,M} to define the layer identification numbers. Set 𝒱 can be expressed as
𝒱=⋃_l∈ℳ𝒱_l
where 𝒱_0 through 𝒱_M are disjoint subsets of 𝒱. We use 𝒲_0, 𝒲_1, ⋯, 𝒲_M to identify the neurons of layers 0 through M of the coverage neural network, where 𝒲_l and 𝒱_l are related by
𝒲_l=
𝒱_l if l∈{0,M}
𝒲_l-1⋃𝒱_l if l∈ℳ∖{0,M}
,
where 𝒲_0=𝒱_0=𝒱_B⋃𝒱_C defines neurons that uniquely represent boundary and core agents.
For every neuron i∈𝒲_l at layer l∈ℳ∖{0}, ℐ_i,l⊂𝒲_l-1 defines those neurons of 𝒲_l-1 that are connected to i∈𝒲_l.
Assuming the agent team forms an n-dimensional configuration in a three-dimensional motion space (n=2,3), we use the following key rules to define ℐ_i,l for every i∈𝒲_l and l∈ℳ∖{0}:
|ℐ_i,l|=
1 if i∈𝒲_l-1⋂𝒲_l and l∈ℳ∖{0}
n+1 if i∈𝒲_l-𝒲_l-1 and l∈ℳ∖{0}
n+1 if i∈𝒲_M
0 if i∈𝒲_0
.
We note that 𝒩_i and ℐ_i,l can be related by
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(ℐ_i,l=𝒩_i).
For better clarification, we consider an agent team with N=26 agents identified by set 𝒱={1,⋯,26} forming a two-dimensional configuration (n=2), shown in Fig. <ref> (a). The inter-agent communications shown in Fig. <ref> (a) can be represented by the neural network of Fig. <ref> (b) with three layers ℳ={0,1,2}, where 𝒲_0={1,⋯,6}, defining the boundary and core leaders, has no in-neighbors, and 𝒲_2={8,9,10,12,13,14,16,17,18,20,21,22,24,25,26}, defining the followers, each of which has three in-neighbors. Also, each of {7,11,15,19,23}⊂𝒲_1 has three in-neighbors, while the remaining neurons of {1,⋯,6}, which are repeated from layer 0, each have one in-neighbor.
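To make these layering rules concrete, the following minimal Python sketch illustrates one way the graph-to-network conversion could be implemented. The helper name build_layers, the dictionary-based graph representation, and the error handling are our own illustrative choices rather than part of the paper's algorithm.

def build_layers(nodes, in_neighbors, W0, n):
    """Layer a communication graph into neuron sets W_0, ..., W_M.

    nodes        : iterable of agent ids (the set V)
    in_neighbors : dict mapping agent i -> set of in-neighbors N_i
    W0           : boundary + core agents (layer-0 neurons, no in-neighbors)
    n            : configuration dimension, so each follower has n+1 in-neighbors
    """
    layers, assigned = [set(W0)], set(W0)
    remaining = set(nodes) - assigned
    V_l = set()
    while remaining:
        # V_l: followers whose n+1 in-neighbors all lie in W_{l-1} (= assigned).
        V_l = {i for i in remaining
               if len(in_neighbors[i]) == n + 1 and in_neighbors[i] <= assigned}
        if not V_l:
            raise ValueError("graph cannot be layered under the |I_{i,l}| rules")
        # Interior layers carry pass-through copies (|I_{i,l}| = 1) of earlier
        # neurons: W_l = W_{l-1} U V_l.
        layers.append(layers[-1] | V_l)
        assigned |= V_l
        remaining -= V_l
    layers[-1] = V_l  # paper convention for the final layer: W_M = V_M
    return layers

For the 26-agent example above, this procedure recovers 𝒲_0={1,⋯,6}, an interior layer containing {7,11,15,19,23} together with pass-through copies of layer 0, and a final layer consisting of the followers only.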
§.§ Differential Activation Function
Unlike available neural networks, the activations of the coverage network's neurons are governed by differential activation functions given by the nonlinear dynamics
𝐱̇_i=𝐟_i(𝐱_i,𝐮_i)
𝐫_i=𝐡_i(𝐱_i)
, i∈𝒲_l, l∈ℳ,
which is used to model agent i∈𝒱_l (see Fig. <ref>), where 𝐱_i∈ℝ^n_x,i and 𝐮_i∈ℝ^n_u,i denote the state vector and the control of neuron i, respectively, and 𝐡_i:ℝ^n_x,i→ℝ^3 and 𝐟_i:ℝ^n_x,i×ℝ^n_u,i→ℝ^n_x,i are smooth functions.
The output of neuron i denoted by 𝐫_i∈ℝ^3× 1 is the position of agent i. The input of neuron i is defined by
𝐫_i,d(t)=
𝐩_i (given) if i∈𝒲_0
∑_j∈ℐ_i,lw_i,j(t)𝐫_j(t) if i∈𝒲_l-𝒲_l-1, l∈ℳ∖{0}
,
where 𝐩_i is a desired constant position for leader agent i∈𝒲_0.
Also, w_i,j(t) > 0 is the time-varying communication weight between i∈𝒲_l and j ∈ℐ_i,l, and satisfies the following constraint:
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l- 𝒲_l-1(∑_j∈ℐ_i,lw_i,j(t)=1), ∀ t.
§.§ Objectives
Given the above problem setting, this paper offers a neural-network-based method for optimal coverage of target set 𝒟 with an unknown distribution in a three-dimensional motion space. To achieve this objective, we assume that the positions of the boundary leader agents, defined by 𝒲_0∖𝒱_C, are known, and solve the following two main problems:
* Problem 1–Abstract Representation of the Target: We develop a mass-centric approach in Section <ref> to abstractly represent the target by N−N_B−1 position vectors 𝐩_N_B+2 through 𝐩_N that are considered as the followers' desired positions.
* Problem 2–Decentralized Target Acquisition: We propose a forward method to train the communication weights w_i,j(t) and assign the control input 𝐮_i, for every agent i∈𝒱 and in-neighbor agent j∈ℐ_i,l, such that the actual position 𝐫_i converges to the desired position 𝐩_i in a decentralized fashion for every i∈𝒱∖𝒲_0, where no follower i∈𝒱∖𝒲_0 knows the desired position 𝐩_j of any in-neighbor agent j∈𝒱.
Without loss of generality, n is either 2 or 3 because the motion space is three-dimensional. More specifically, for ground coverage n=2, and 𝒟 specifies a finite number of targets on the ground.
§ METHODOLOGY
The agent team aims to cover a zone that is specified by 𝒟={1,⋯,n_d}, where 𝐝_i∈ℝ^3×1 is the position of target i∈𝒟.
We also define the intensity function 𝒯:𝒟→(0,1] to quantify the intensity of data point i∈𝒟 positioned at 𝐝_i.
For the development of the neural-network-based coverage model, we apply the following Definitions and Assumptions:
Boundary leader agents form an n-D polytope in ℝ^n; thus, the boundary agents' desired positions must satisfy the following rank condition:
rank([ 𝐩_2-𝐩_1 ⋯ 𝐩_N_B-𝐩_1 ])=n.
The polytope defined by the boundary agents is called leading polytope.
The leading polytope, defined by the boundary agents, can be decomposed into N_L disjoint n-dimensional simplexes all sharing the core node N_B+1∈𝒲_0.
We let ℒ={1,⋯,N_L} define all simplex cells of the leading polytope, where 𝒮_i={h_i,1,⋯,h_i,n,N_B+1} defines the vertices of simplex cell i∈ℒ, i.e., h_i,1,⋯,h_i,n∈𝒮_i∖{N_B+1}⊂𝒲_0 are the boundary nodes of simplex i∈ℒ. Per Assumption <ref>, we can write
𝒲_0=⋃_i∈ℒ𝒮_i,
⋀_i∈ℒ(rank([ 𝐩_h_i,1-𝐩_N_B+1 ⋯ 𝐩_h_i,n-𝐩_N_B+1 ])=n).
Every agent i∈𝒱∖𝒲_0 has n+1 in-neighbors, therefore,
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(|ℐ_i,l|=n+1).
The in-neighbors of every agent i∈𝒱∖𝒲_0, defined by 𝒩_i={j_1,⋯,j_n+1}, form an n-D simplex. This condition can be formally specified as follows:
⋀_l∈ℳ∖{0}⋀_i∈𝒲_l-𝒲_l-1(rank([ 𝐩_j_2-𝐩_j_1 ⋯ 𝐩_j_n+1-𝐩_j_1 ])=n).
For every agent i∈𝒱∖𝒲_0,
𝒞̅_i={∑_j∈ℐ_i,lσ_j𝐩_j:σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ,
𝒞_i(t)={∑_j∈ℐ_i,lσ_j𝐫_j(t):σ_j≥0 and ∑_j∈ℐ_i,lσ_j=1}, i∈𝒲_l-𝒲_l-1, l∈ℳ,
define the convex hulls specified by “desired” and “actual” positions of agent i's in-neighbors, respectively.
We define
𝒞̅=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞̅_i and
𝒞(t)=⋃_l∈ℳ∖{0}⋃_i∈𝒲_l-𝒲_l-1𝒞_i(t)
specify the coverage zones that enclose all data points defined by set 𝒟.
By considering Definition <ref>, we can express set 𝒟 as
𝒟=⋃_i∈𝒱∖𝒲_0𝒟̅_i or 𝒟=⋃_i∈𝒱∖𝒲_0𝒟_i(t),
where
𝒟̅_i={j∈𝒟:𝐝_j∈𝒞̅_i},
is the target set that is “desired” to be searched by follower agent i∈𝒱∖𝒲_0, whereas
𝒟_i(t)={j∈𝒟:𝐝_j∈𝒞_i(t)},
is the subset of 𝒟 that is “actually” searched by follower agent i∈𝒱∖𝒲_0 at time t. Note that 𝒟̅_i and 𝒟_i(t) are enclosed by the convex hulls 𝒞̅_i and 𝒞_i(t), respectively, which are determined by the “desired” and “actual” positions of agent i's in-neighbors, respectively.
We assume that 𝒟̅_i≠∅ and 𝒟_i(t)≠∅, at any time t, for every i∈𝒱∖𝒲_0.
In order to ensure that Assumption <ref> is satisfied, we may need to regenerate target set 𝒟 when the target data are sparsely distributed. When this regeneration is needed, we first convert the discrete set 𝒟 into the continuous density
𝒟'={d(𝐫)=∑_i=1^n_d𝒩(𝐫; 𝐝_i,Σ_i):𝐝_i∈𝒟, 𝐫∈𝒞̅},
where 𝒩(𝐫; 𝐝_i,Σ_i) is a multi-variate normal distribution specified by mean vector 𝐝_i and covariance matrix Σ_i. Then, we regenerate 𝒟 by uniform discretization of 𝒟'.
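As an illustration, the regeneration step could be sketched as below: the Gaussian mixture is evaluated on a uniform grid over the coverage zone, and the densest cells are kept as the regenerated target set. The shared isotropic variance cov, the grid construction, and the keep parameter are illustrative assumptions, and the normalization constant of 𝒩 is dropped since only relative density matters here.

import numpy as np

def regenerate_targets(D, grid, cov, keep=500):
    """Evaluate the (unnormalized) Gaussian mixture on a uniform grid over
    the coverage zone and keep the densest cells as the regenerated set D.

    D    : (n_d, dim) original sparse target positions
    grid : (G, dim) uniform discretization of the coverage zone
    cov  : shared isotropic variance assumed for every Sigma_i
    """
    diff = grid[:, None, :] - D[None, :, :]        # (G, n_d, dim) offsets
    dens = np.exp(-np.sum(diff**2, axis=-1) / (2.0 * cov)).sum(axis=1)
    idx = np.argsort(dens)[-keep:]                 # densest grid cells
    return grid[idx], dens[idx] / dens.max()       # new positions + intensities

Conveniently, the returned normalized densities lie in (0,1] and can serve directly as the intensities 𝒯 of the regenerated targets.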
§.§ Abstract Representation of Target Locations
We use the approach presented in Algorithm <ref> to abstractly represent target set 𝒟 by position vectors 𝐩_N_B+2, ⋯, 𝐩_N, given (i) the desired positions of the leader agents, denoted 𝐩_1 through 𝐩_N_B+1, (ii) the edge set ℰ, and (iii) the target set 𝒟, as the input. Note that 𝐩_i is considered the global desired position of follower i∈𝒱_I={N_B+2,⋯,N}, but no follower i∈𝒱∖𝒲_0 knows 𝐩_i.
The desired position of every follower agent i∈𝒱_I=𝒱∖𝒲_0 is obtained by
𝐩_i=(∑_h∈𝒟̅_i𝒯(h)𝐝_h)/|𝒟̅_i|, ∀ i∈𝒱∖𝒲_0,
where 𝒟̅_i, defined by Eq. (<ref>), is the target data subset enclosed by 𝒞̅_i, which is defined by Eq. (<ref>). We note that the desired position of every follower agent i∈𝒱∖𝒲_0 is assigned in a “forward” manner, which in turn implies that 𝒲_l's desired positions are assigned after 𝒲_l-1's desired positions have been determined, for every l∈ℳ∖{0}.
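For illustration, this centroid computation can be sketched as follows, using a simplex-membership test to recover 𝒟̅_i from the in-neighbors' desired positions; the function name and the use of scipy's Delaunay triangulation are our own choices.

import numpy as np
from scipy.spatial import Delaunay

def desired_position(P_neighbors, D, T):
    """Intensity-weighted centroid of the targets enclosed by the simplex
    spanned by the n+1 in-neighbor desired positions.

    P_neighbors : (n+1, n) vertices of the simplex C-bar_i
    D           : (n_d, n) target positions
    T           : (n_d,) intensities in (0, 1]
    """
    # find_simplex returns -1 for points outside the (single) simplex.
    inside = Delaunay(P_neighbors).find_simplex(D) >= 0
    if not inside.any():
        raise ValueError("Assumption <ref> requires a nonempty D-bar_i")
    return (T[inside, None] * D[inside]).sum(axis=0) / inside.sum()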
Given the desired position of every follower agent i∈𝒱∖𝒲_0 and every in-neighbor agent j∈𝒩_i, ϖ_i,j>0 defines the desired communication weight between i∈𝒱∖𝒲_0 and j∈𝒩_i, and is obtained by solving the n+1 linear algebraic equations provided by
𝐩_i=∑_j∈ℐ_i,lϖ_i,j𝐩_j,
∑_j∈ℐ_i,lϖ_i,j=1.
Algorithm <ref> also presents our proposed hierarchical approach for assignment of followers' desired communication weights.
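A minimal sketch of this weight-assignment step: the n scalar position equations and the unit-sum constraint form an (n+1)×(n+1) linear system whose solution is the vector of barycentric coordinates of 𝐩_i with respect to its in-neighbors' desired positions. The helper below is our own illustration of that computation.

import numpy as np

def desired_weights(p_i, P_neighbors):
    """Solve for varpi_{i,j}: n position equations plus the sum-to-one row.

    p_i         : (n,) follower's desired position
    P_neighbors : (n+1, n) in-neighbors' desired positions
    """
    n = p_i.size
    A = np.vstack([P_neighbors.T, np.ones(n + 1)])   # (n+1) x (n+1) system
    b = np.append(p_i, 1.0)
    # The solution is positive whenever p_i lies strictly inside the simplex.
    return np.linalg.solve(A, b)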
We define desired weight matrix 𝐋̅=[L̅_ij]∈ℝ^N× N with (i,j) entry
L̅_ij=
ϖ_i,j if i∈𝒱∖𝒲_0, j∈𝒩_i
-1 if i=j
0 otherwise
.
§.§ Decentralized Target Acquisition
For decentralized coverage, it is necessary that every follower agent i∈𝒱_l=𝒲_l-𝒲_l-1, represented by a neuron in layer l∈ℳ∖{0}, chooses control 𝐮_i∈ℝ^n_u×1, based on the actual positions of the in-neighbor agents ℐ_i,l, such that 𝐫_i(t) stably tracks 𝐫_i,d(t), defined by Eq. (<ref>). Note that 𝐫_i,d(t) is a linear combination of the in-neighbors' actual positions, for i∈𝒱∖𝒲_0, with (communication) weights that are time-varying and constrained to satisfy equality constraint (<ref>).
We use forward training to learn the coverage neural network. This means that the communication weights of layer l∈ℳ∖{0} neurons are assigned before the communication weights of layer l+1∈ℳ∖{0,M} neurons, where the communication weights of neuron i∈𝒱_l=𝒲_l-𝒲_l-1 are learned by solving a quadratic program. Let
𝐫̅_i(t)=
(∑_h∈𝒟_i(t)𝒯(h)𝐝_h)/|𝒟_i(t)|, i∈𝒱_l, l∈ℳ∖{0},
denote the centroid of the subset 𝒟_i(t)⊂𝒟, where 𝒟_i(t) is defined by Eq. (<ref>). Then, the followers' communication weights are determined by minimizing
min ‖∑_h∈ℐ_i,lw_i,h(t)𝐫_h(t)-𝐫̅_i(t)‖^2
subject to equality constraint (<ref>).
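Since the objective is quadratic and the only constraint is the affine unit-sum condition, the minimizer at each time t has a closed form via the KKT system. The sketch below is our own formulation of that solve, assuming positivity of the solution holds when 𝐫̅_i(t) lies inside the in-neighbor simplex.

import numpy as np

def train_weights(R_neighbors, r_bar):
    """Equality-constrained least squares for the follower weights:
    minimize || R^T w - r_bar ||^2  subject to  1^T w = 1.

    R_neighbors : (n+1, 3) actual in-neighbor positions r_h(t)
    r_bar       : (3,) centroid of D_i(t)
    """
    m = R_neighbors.shape[0]
    H = R_neighbors @ R_neighbors.T                  # quadratic term R R^T
    # KKT system: [2H  1; 1^T  0] [w; lambda] = [2 R r_bar; 1]
    KKT = np.block([[2.0 * H, np.ones((m, 1))],
                    [np.ones((1, m)), np.zeros((1, 1))]])
    rhs = np.append(2.0 * R_neighbors @ r_bar, 1.0)
    return np.linalg.solve(KKT, rhs)[:m]             # w_{i,h}(t)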
We define weight matrix 𝐋=[L_ij]∈ℝ^N× N with (i,j) entry
L_ij=
w_i,j if i∈𝒱∖𝒲_0, j∈𝒩_i
-1 if i=j
0 otherwise
.
Assume every agent i∈𝒱 chooses a control input 𝐮_i such that 𝐫_i(t) asymptotically tracks 𝐫_i,d(t). Then, 𝐫_i(t) asymptotically converges to the desired position 𝐩_i for every i∈𝒱.
If every agent j∈𝒲_0 asymptotically tracks 𝐫_j,d(t), then the actual position 𝐫_j converges to 𝐩_j because 𝐫_j,d(t)=𝐩_j is constant per Eq. (<ref>). Then, for every i∈𝒲_1, the vertices of the simplex 𝒞_i(t), belonging to 𝒲_0, asymptotically converge to the vertices of 𝒞̅_i, where 𝒞̅_i and 𝒞_i(t) enclose target data subsets 𝒟̅_i and 𝒟_i(t), respectively. This implies that 𝐫_i,d(t), defined as the centroid of 𝒟_i(t), asymptotically converges to 𝐩_i for every i∈𝒲_1. By extending this logic, we can say that this convergence propagates through the feedforward network 𝒢(𝒱,ℰ). As a result, for every agent i∈𝒲_l and layer l∈ℳ∖{0}, the vertices of simplex 𝒞_i(t) asymptotically converge to the vertices of 𝒞̅_i, which in turn implies that 𝐫_i,d(t) asymptotically converges to 𝐩_i. This also implies that 𝐫_i asymptotically converges to 𝐩_i per the theorem's assumption.
§ NETWORK DYNAMICS
In this section, we suppose that every agent is a quadcopter and use the input-state feedback linearization presented in <cit.>, summarized in the Appendix, to model the quadcopter motion by the fourth-order dynamics (<ref>). Here, we propose to choose 𝐯_i as follows:
𝐯_i=-k_1,i⃛𝐫_i-k_2,i𝐫̈_i-k_3,i𝐫̇_i+k_4,i(𝐫_i,d(t)-𝐫_i), i∈𝒱,
where 𝐫_i,d(t) is defined by Eq. (<ref>). Then, the external dynamics of the quadcopter team is given by <cit.>
d/dt[ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘 ]
=𝐀_MQS[ 𝐘; 𝐘̇; 𝐘̈; ⃛𝐘 ]
+𝐁_MQS[ 𝐑_L; 𝐑̇_L; 𝐑̈_L; ⃛𝐑_L ],
where 𝐘=vec([ 𝐫_1 ⋯ 𝐫_N ]^T), 𝐑_L=vec([ 𝐩_1 ⋯ 𝐩_N_B+1 ]^T), 𝐋_0=[ 𝐈_N_B+1 0_(N_B+1)×(N-N_B-1) ]^T∈ℝ^N×(N_B+1),
𝐀_MQS=
[ 0 𝐈_3N 0 0; 0 0 𝐈_3N 0; 0 0 0 𝐈_3N; 𝐈_3⊗(𝐊_4𝐋) -𝐈_3⊗𝐊_3 -𝐈_3⊗𝐊_2 -𝐈_3⊗𝐊_1 ],
𝐁_MQS=
[ 0 0 0 0; 0 0 0 0; 0 0 0 0; 𝐈_3⊗( 𝐊_4𝐋_0) 𝐈_3⊗(𝐊_3𝐋_0) 𝐈_3⊗( 𝐊_2𝐋_0) 𝐈_3⊗( 𝐊_1 𝐋_0) ]
,
𝐊_j=diag(k_j,1,⋯,k_j,N) for j=1,2,3,4,
𝐈_3N∈ℝ^3N× 3N is the identity matrix, and “vec” is the matrix vectorization operator.
Note that the control gains k_j,i (i∈𝒱 and j=1,2,3,4) are selected such that the roots of the characteristic equation
|s^4𝐈+s^3𝐊_1+s^2𝐊_2+s𝐊_3-𝐊_4𝐋|=0
all lie in the open left half-plane, so that the collective dynamics (<ref>) is stable.
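This stability check can be carried out numerically per axis, since the Kronecker factor 𝐈_3 only triplicates the spectrum. The sketch below, with our own helper name and illustrative usage, assembles the N-dimensional companion form of the closed-loop dynamics and inspects its eigenvalues.

import numpy as np

def collective_poles(K1, K2, K3, K4, L):
    """Eigenvalues of the per-axis companion matrix of the team dynamics;
    stability requires all of them in the open left half-plane.

    K1..K4 : (N, N) diagonal gain matrices; L : (N, N) weight matrix.
    """
    N = L.shape[0]
    I, Z = np.eye(N), np.zeros((N, N))
    A = np.block([[Z, I, Z, Z],
                  [Z, Z, I, Z],
                  [Z, Z, Z, I],
                  [K4 @ L, -K3, -K2, -K1]])   # last block row of A_MQS
    return np.linalg.eigvals(A)

# Example gain screening before running the team (illustrative):
# stable = np.all(collective_poles(K1, K2, K3, K4, L).real < 0)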
§ SIMULATION RESULTS
We consider an agent team consisting of 57 quadcopters with the reference configuration shown in Fig. <ref>, where we use the model and trajectory control presented in Refs. <cit.> for the multi-agent coverage simulation. Here, quadcopters 1 through 4, defined by set 𝒱_B={1,2,3,4}, are the boundary leader agents; agent 5, defined by singleton 𝒱_C={5}, is the core leader; and the remaining agents, defined by 𝒱_I={6,⋯,57}, are followers.
The inter-agent communications are directional and shown by blue vectors in Fig. <ref>. The communication graph is defined by 𝒢(𝒱,ℰ) and converted into the neural network shown in Fig. <ref> with four layers, thus, ℳ={0,1,2,3} (M=3), and 𝒱 can be expressed as 𝒱=𝒲_0⋃𝒲_1⋃𝒲_2⋃𝒲_3. In Fig. <ref>, the agents represented by 𝒲_0, 𝒲_1, 𝒲_2, and 𝒲_3 are colored by cyan, red, green, and black, respectively.
We apply the proposed coverage algorithm to cover elliptic, multi-circle, and triangular zones, each specified by the corresponding data set 𝒟, where 𝒟 defines 500 data points shown by the green spots in Figs. <ref> (a,b,c). As shown, each target set is represented by 52 points positioned at 𝐩_6 through 𝐩_57, obtained by using the approach presented in Section <ref>. These points are shown in red in Figs. <ref> (a,b,c).
Figure <ref> shows the components of the actual and desired positions of quadcopters 13, 45, and 51 plotted versus time over the time interval [0,20] s, by solid black and dashed red, respectively. As seen, the actual positions of these three agents almost reach the designated desired positions at time t=12 s. Figure <ref> shows the time-varying communication weights of agent 41 with its in-neighbors defined by 𝒩_41={34,5,32}. As shown, w_41,j(t) converges to its desired value ϖ_41,j in about 12 seconds for every j∈𝒩_41.
§ CONCLUSION
We proposed a novel neural-network-based approach for multi-agent coverage of a target with an unknown distribution. We developed a forward approach to train the weights of the coverage neural network such that: (i) the target is represented by a finite number of points, and (ii) the multi-agent system quickly converges, in a decentralized fashion, to the designated points representing the target distribution. For validation, we performed a simulation of multi-agent coverage using a team of 57 quadcopters, each of which is represented by at least one neuron of the coverage neural network. The simulation results verified the fast and decentralized convergence of the proposed multi-agent coverage, where each quadcopter reached its designated desired position in about 12 seconds.
Let x_i, y_i, and z_i denote the position components of quadcopter i∈𝒱; let p_i, m_i, ϕ_i, θ_i, and ψ_i denote the thrust force magnitude, mass, and roll, pitch, and yaw angles of quadcopter i∈𝒱; and let g=9.81 m/s^2 be the gravitational acceleration. Then, we can use the model developed in <cit.> and present the quadcopter dynamics by
𝐱̇_i=𝐟(𝐱_i,𝐮_i)
,
where 𝐟(𝐱_i,𝐮_i)=𝐅(𝐱_i)+𝐆(𝐱_i)𝐮_i
𝐱_i=[ x_i y_i z_i ẋ_i ẏ_i ż_i ϕ_i θ_i ψ_i ϕ̇_i θ̇_i ψ̇_i p_i ṗ_i ]
^T,
𝐮_i=[ u_1,i u_2,i u_3,i u_4,i ]
^T,
𝐅(𝐱_i)=[ ẋ_i; ẏ_i; ż_i; (p_i/m_i)(sinϕ_isinψ_i + cosϕ_icosψ_isinθ_i); (p_i/m_i)(cosϕ_isinψ_isinθ_i- sinϕ_icosψ_i); (p_i/m_i)cosϕ_icosθ_i-9.81; ϕ̇_i; θ̇_i; ψ̇_i; 0; 0; 0; ṗ_i; 0 ],
𝐆(𝐱_i)=[ 𝐠_1 𝐠_2 𝐠_3 𝐠_4 ]
=
[ 0_9× 1 0_9× 3; 0_3× 1 𝐈_3; 0 0_1× 3; 1 0_1× 3; ]
,
By defining the transformation 𝐱_i→(𝐫_i,𝐫̇_i,𝐫̈_i,⃛𝐫_i,ψ_i,ψ̇_i), we can use the input-state feedback linearization approach presented in <cit.> and convert the quadcopter dynamics to the following external dynamics:
⃜𝐫_i=𝐯_i,
ψ̈_i=u_ψ,i,
where 𝐯_i is related to the control input of quadcopter i∈𝒱, denoted by 𝐮_i, by <cit.>
𝐯_i=𝐌_1,i𝐮_i+𝐌_2,i,
with
𝐌_1,i= [ L_𝐠__1L_𝐟^3x_i L_𝐠__2L_𝐟^3x_i L_𝐠__3L_𝐟^3x_i L_𝐠__4L_𝐟^3x_i; L_𝐠__1L_𝐟^3y_i L_𝐠__2L_𝐟^3y_i L_𝐠__3L_𝐟^3y_i L_𝐠__4L_𝐟^3y_i; L_𝐠__1L_𝐟^3z_i L_𝐠__2L_𝐟^3z_i L_𝐠__3L_𝐟^3z_i L_𝐠__4L_𝐟^3z_i; L_𝐠__1L_𝐟ψ_i L_𝐠__2L_𝐟ψ_i L_𝐠__3L_𝐟ψ_i L_𝐠__4L_𝐟ψ_i ]∈ℝ^4×4,
𝐌_2,i= [ L_𝐟^4x_i L_𝐟^4y_i L_𝐟^4z_i L_𝐟^2ψ_i ]^T∈ℝ^4×1.
In this paper, we assume that the desired yaw angle and its time derivative are both zero at any time t, and choose
u_ψ,i=-k_5ψ̇_i-k_6ψ_i.
Therefore, we can assume that ψ_i(t)=0 at any time t; as a result, quadcopter i∈𝒱 can be modeled by Eq. (<ref>).
Hossein Rastgoftar is an Assistant Professor at the University of Arizona. Prior to this, he was an adjunct Assistant Professor at the University of Michigan from 2020 to 2021. He was also an Assistant Research Scientist (2017 to 2020) and a Postdoctoral Researcher (2015 to 2017) in the Aerospace Engineering Department at the University of Michigan, Ann Arbor. He received the B.Sc. degree in mechanical engineering (thermo-fluids) from Shiraz University, Shiraz, Iran, the M.S. degrees in mechanical systems and solid mechanics from Shiraz University and the University of Central Florida, Orlando, FL, USA, and the Ph.D. degree in mechanical engineering from Drexel University, Philadelphia, in 2015. His current research interests include dynamics and control, multi-agent systems, cyber-physical systems, and optimization and Markov decision processes.
|
http://arxiv.org/abs/2307.04790v2 | 20230710180003 | The hunt for formamide in interstellar ices: A toolkit of laboratory infrared spectra in astronomically relevant ice mixtures and comparisons to ISO, Spitzer, and JWST observations | [
"Katerina Slavicinska",
"Marina Gomes Rachid",
"Will Robson Monteiro Rocha",
"Ko-Ju Chuang",
"Ewine Fleur van Dishoeck",
"Harold Linnartz"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.IM",
"astro-ph.SR"
] |
A toolkit of laboratory infrared spectra in astronomically relevant ice mixtures and comparisons to ISO, Spitzer, and JWST observations
Laboratory for Astrophysics, Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands.
[email protected]
Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands.
Max Planck Institut für Extraterrestrische Physik (MPE), Giessenbachstrasse 1, 85748 Garching, Germany
Although solid-state pathways are expected to dominate the formation mechanisms of many complex organic molecules (COMs), very few COMs have been securely identified in interstellar ices, in stark contrast with the many COM detections in the gas phase. The launch of the James Webb Space Telescope (JWST) and its increase in sensitivity and spectral resolution opens the possibility of identifying more COMs in ices, but additional laboratory data are necessary. Formamide (NH_2CHO) is one such COM that is of great interstellar and prebiotic relevance where more laboratory data are needed in the hunt for its presence in interstellar ices.
This work aims to characterize the mid-IR spectra of formamide, both in its pure form and in mixtures of the most abundant interstellar ices, via laboratory simulation of such ices, and to demonstrate how these laboratory spectra can be used to search for formamide in ice observations.
Mid-IR spectra (4000 - 500 cm^-1/2.5 - 20 μm) of formamide, both in its pure form as well as in binary and tertiary mixtures with H_2O, CO_2, CO, NH_3, CH_3OH, H_2O:CO_2, H_2O:NH_3, CO:NH_3, and CO:CH_3OH, were collected at temperatures ranging from 15 - 212 K.
Apparent band strengths and positions of eight IR bands of pure amorphous and crystalline formamide at various temperatures are provided. Three of these bands are identified as potential formamide tracers in observational ice spectra: the overlapping C=O stretch and NH_2 scissor bands at 1700.3 and 1630.4 cm^-1 (5.881 and 6.133 μm), the CH bend at 1388.1 cm^-1 (7.204 μm), and the CN stretch at 1328.1 cm^-1 (7.529 μm). The relative apparent band strengths, positions, and full width half maxima (FWHM) of these features in mixtures at various temperatures were also determined. All of the laboratory spectra are available to the community on the Leiden Ice Database for Astrochemistry (LIDA) for use in the interpretation of both observations (e.g., from JWST) and laboratory spectroscopic data. Finally, the laboratory spectra are compared to observational spectra of a variety of low- and high-mass young stellar objects as well as prestellar cores observed with the Infrared Space Observatory, the Spitzer Space Telescope, and JWST. A comparison between the formamide CH bend in laboratory data and the 7.24 μm band in the observations tentatively indicates that, if formamide ice is contributing significantly to the observed absorption, it is more likely in a polar matrix. Upper limits ranging from 0.35-5.1% with respect to H_2O were calculated via scaling the formamide:H_2O laboratory spectrum to the observations. These upper limits are in agreement with gas-phase formamide abundances and take into account the effect of a H_2O matrix on formamide's band strengths.
The hunt for formamide in interstellar ices
K. Slavicinska1,2
M. G. Rachid1
W. R. M. Rocha1,2
K. -J. Chuang1
E. F. van Dishoeck2,3
H. Linnartz1
Received 24 May 2023 / Accepted 30 June 2023
==========================================================================================================================
§ INTRODUCTION
Of the >280 molecules that have been detected in interstellar environments <cit.>, formamide (NH_2CHO) has become one of the most widely and deeply investigated in observational, modeling, computational, and laboratory studies in the last decade. Containing all four of the most abundant biological elements (C, H, N, and O), formamide is the simplest molecule that contains the biologically essential amide bond and has been suggested as a plausible prebiotic precursor to various nucleobases (e.g., ), the chemical building blocks of RNA and DNA. It has also been proposed as an alternative prebiotic solvent to promote condensation reactions, which form many vital biological molecules but are highly endergonic in purely aqueous solutions (e.g., phosphorylation), by lowering water activity <cit.>.
Given this potential prebiotic relevance, the fact that formamide has been observed in numerous sources in the interstellar medium as well as on extraterrestrial bodies in our own Solar System has exciting implications for astrobiology. First detected in the interstellar medium in the gas phase by <cit.> in the Sagittarius B2 high-mass star-forming region, formamide has since been observed in over 30 massive young stellar objects (MYSOs) as well as low-mass YSOs (LYSOs) with hot corinos and protostellar shocks ( and references therein). Within our Solar System, gas-phase formamide has been found in the comae of the comets Lemmon, Lovejoy, and Hale-Bopp, with abundances ranging around 0.01-0.02% with respect to H_2O <cit.>. It was also detected in situ by the Rosetta mission on comet 67P Churyumov-Gerasimenko, both on the surface by the Cometary Sampling and Composition experiment (COSAC) instrument on the Philae lander <cit.> and in the coma by the Double Focusing Mass Spectrometer (DFMS) on the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) instrument <cit.>, where the formamide abundance was found to be ∼0.004% with respect to H_2O.
Notably, all of the interstellar sources in which gas-phase formamide has been securely detected have hot cores and corinos or shocked regions, where temperatures are high enough for formamide to thermally desorb from icy grains into the gas phase <cit.>. Additionally, in many of these sources, the formamide abundance correlates almost linearly with the abundance of isocyanic acid (HNCO) <cit.>, and, in the case of the low-mass source IRAS 16293-2422, the two species are spatially correlated and have very similar deuteration ratios <cit.>.
These aspects of formamide observations could be considered evidence that formamide is formed in the solid state (i.e., via ice chemistry), possibly in a pathway chemically related to HNCO, and it is detected in the gas phase following desorption from icy grains. The ice formation and grain sublimation scenario is further supported by recent observational work investigating excitation temperatures of N-bearing complex organic molecules (COMs) in 37 MYSOs from the ALMA Evolutionary study of High Mass Protocluster Formation in the Galaxy (ALMAGAL) survey, where formamide had the highest excitation temperatures of all the studied N-bearing COMs (≳250 K) <cit.>. These temperatures are consistent with thermal desorption experiments, in which formamide ice sublimes at high temperatures (typically >210 K) even when it is mixed with or deposited on top of more volatile species such as H_2O and CO, and at even higher temperatures (>250 K) when the experiments are performed on certain dust grain analog substrates <cit.>.
Experimentally, solid-state formamide has been identified as a product of processing via a variety of energetic sources (e.g., electron, UV, X-ray, and ion irradiation) of a myriad of simple ice mixtures, including (but not limited to) CO:NH_3 <cit.> and H_2O:CO:NH_3 <cit.>, NO:H_2CO:H and NO:CH_3OH:H <cit.>, H_2O:HCN <cit.>, H_2O:CH_4:NH_3 and H_2O:CH_4:N_2 <cit.>, and HNCO <cit.> and CH_4:HNCO <cit.>. Evidently, energetic processing of almost any ice mixture that contains H, N, C, and O is very likely to produce formamide. Such processing experiments mimic the radiation environments experienced by ices in protostellar envelopes and protoplanetary disks. Furthermore, recent experiments by <cit.> demonstrate that hydrogenation of NO:H_2CO can also produce formamide, providing a plausible nonenergetic formation pathway that is relevant to cold, dark clouds.
While a plethora of observational, experimental, and theoretical works (see Section <ref>) have significantly progressed our understanding of formamide's interstellar presence and its plausible chemical history, whether its formation occurs in the solid state, gas phase, or both remains unclear. A secure detection of formamide in ices would be immensely valuable to resolve this debate regarding its formation mechanism. Such a detection, if well resolved, could provide parameters such as formamide's solid-state abundance and its physico-chemical environment, which are essential to elucidating its formation pathway.
Previously, formamide has been tentatively detected in the solid state in the Infrared Space Observatory Short Wavelength Spectrometer (ISO-SWS) spectra of the MYSOs W33A and NGC 7538 IRS 9. In the case of W33A, an upper limit of 2.1% with respect to H_2O via the CH bend at 7.22 μm/1385 cm^-1 was derived, but the authors noted that the peak position in the observation (7.24 μm) was red-shifted relative to the formamide peak in their laboratory spectra <cit.>. For NGC 7538 IRS 9, no upper limit of formamide was provided – a laboratory spectrum of irradiated HNCO that showed IR evidence of formamide formation was qualitatively evaluated as a spectral fit to the observed 6 μm/1700 cm^-1 band <cit.>. In both of these cases, the bands attributed to formamide were overlaid on top of or blended with other strong ice features.
Typically, reference laboratory IR spectra are used to assign and fit astronomically observed IR features to specific species, and band strengths acquired via systematic laboratory experiments are used to quantify the column densities of these species. For COMs such as formamide that are expected to be present in the ice in very low concentrations (≲5%), it is important to obtain these spectra and band strengths not only for pure ices, but also in chemical conditions that are more realistic for interstellar ices. Namely, the molecule of interest should be diluted in the more abundant simple ice species (e.g., H_2O, CO, and CO_2), as interactions with other species present in the ice matrix can significantly alter the positions, profiles, and apparent band strengths of a molecule's vibrational features. Morphological changes in the ice caused by thermal processing, such as transitions from amorphous to crystalline ice or matrix segregation, can also dramatically change an ice's spectral features, so spectra should be collected at a variety of temperatures as well. Considering such factors is not only important to accurately assign and quantify the molecule of interest, but it can also provide valuable information about the molecule's physico-chemical environment and history.
In previous IR characterization work, <cit.> derived the refractive index, density, and several band strengths of pure formamide, but integration ranges and errors were not provided for these band strengths, and no spectra of heated formamide or formamide in mixtures were collected. In order to tentatively assign the 7.24 μm band in W33A's spectrum to formamide, <cit.> collected spectra of formamide at 10 K in H_2O and H_2O:CH_3OH matrices, but only one band was characterized from these spectra, and it is unclear for what phase of formamide the band strength used in the upper limit calculation was derived. <cit.> collected IR spectra of formamide in pure, H_2O-dominated, and CO-dominated ice matrices, but the band strengths, peak positions, and full width half maxima (FWHMs) of the formamide features in these mixtures are not given. <cit.> presented the peak positions of the bands of pure formamide in the 30 - 210 K temperature range, but no spectra of formamide in mixtures were collected.
Thus, in an effort to enable more secure assignments and accurate abundance and/or upper limit determinations of formamide in observed ice spectra, this work provides a comprehensive set of laboratory transmission IR spectra of pure formamide as well as formamide diluted in nine different astrophysically relevant ice mixtures of varying polarities. These spectra are provided at temperatures ranging from 15 - 212 K. Apparent band strengths were derived for eight integrated regions from the pure formamide spectra, and from these, three bands are evaluated as the most promising for future identification of formamide in observations. These bands are also fully characterized (i.e., peak positions, FWHMs, and relative band strengths are provided). Examples of how these spectra and values can be used in future analyses of ice observations are described, and new upper limits of formamide in a variety of objects (prestellar cores, low-mass protostars, and high-mass protostars) were calculated. Finally, all spectra are made publicly available on the Leiden Ice Database[https://www.icedb.strw.leidenuniv.nl] <cit.> for the community to use in fitting to their ice observations. This work is particularly timely given the recent launch of the James Webb Space Telescope (JWST), which may enable the detection of new COMs in interstellar ices due to its unprecedented sensitivity and spectral resolution.
§ FORMAMIDE FORMATION MECHANISM DEBATE
A variety of pathways have been suggested to explain the observed solid-state formamide formation in laboratory ice experiments. One initially proposed mechanism was the hydrogenation of HNCO, an attractive premise given that it provided a direct chemical link between HNCO and formamide to explain their correlation in gas-phase observations:
HNCO + 2 ^∙H → NH_2CHO.
This pathway was first suggested by <cit.> and was stated as a possible formation mechanism of formamide when it was observed in VUV irradiation experiments of pure HNCO <cit.>. However, hydrogenation experiments by <cit.> via H bombardment of HNCO below 20 K did not produce detectable amounts of formamide, although the authors suggested that the reaction may be prevented in their experiments by the formation of very stable HNCO dimers or polymers, and that it could possibly proceed if HNCO is diluted in the matrix of an ice like H_2O.
Another proposed formation pathway is the following radical-radical recombination:
^∙NH_2 + ^∙CHO → NH_2CHO.
This mechanism is technically barrierless and can proceed at low temperatures (∼10 K) but produces higher yields at higher temperatures (∼20-40 K) due to increased mobility allowing the radicals to orient in the proper reaction geometry <cit.>. In the laboratory, this mechanism requires some form of energetic processing to generate the NH_2 and CHO radicals, and its viability is supported by the presence of the CHO radical in the experimental spectra <cit.>.
Various mechanisms have also been suggested where formamide is produced from the NH_2CO radical, which could form by the radical-molecule association of NH_2 and CO or CN and H_2O <cit.>:
^∙NH_2CO + ^∙H → NH_2CHO
^∙NH_2CO + H_2O → NH_2CHO + ^∙OH
2 ^∙NH_2CO → NH_2CHO + HNCO.
However, the formation of the NH_2CO radical via a pathway that does not involve hydrogen abstraction from already existing formamide, as seen in <cit.>, has yet to be experimentally confirmed.
While these latter mechanisms do not provide an immediately obvious direct solid-state link between HNCO and NH_2CHO, some experimental studies have suggested alternative links consistent with these mechanisms. For example, once formed, formamide can decompose into HNCO via dehydrogenation and photolysis by H_2 loss <cit.>, so HNCO may be a product of NH_2CHO rather than the other way around. <cit.> proposed that the NH_2 radical can produce either HNCO or NH_2CHO depending on the degree of hydrogenation of the C- and O-containing molecule with which it reacts: the reaction of NH_2 with CO leads to HNCO, while NH_2 with HCO or H_2CO leads to formamide.
Thus, while formamide may not be a direct product of HNCO, the two species may be linked in a solid-state chemical network by common precursors. Astrochemical models using the rate constants from <cit.> further corroborate that, indeed, a direct chemical link between HNCO and NH_2CHO is not necessary to reproduce the observed linear correlation between them in models of various interstellar environments and suggest instead that their correlation could be explained by their similar responses to physical (i.e., thermal) environments <cit.>.
In addition to these solid-state mechanisms, the plausibility of the following gas-phase formation route has been extensively debated in computational and modeling works since its proposal in <cit.>:
^∙NH_2 + H_2CO → NH_2CHO + ^∙H.
According to its first published electronic structure and kinetic calculations, this reaction is essentially barrierless at low temperatures and thus should proceed readily in interstellar environments <cit.>. Furthermore, chemical models of the protostar IRAS 16293-2422 and the molecular shocks L1157-B1 and B2 utilizing the calculated rate coefficients of this reaction produce formamide abundances that are consistent with observed values <cit.>, and follow-up studies calculating rate coefficients of deuterated formamide formation via the same reaction show that formamide's observed deuteration ratio does not necessarily exclude the possibility of gas-phase formation <cit.>.
However, the accuracy of these calculated rate coefficients has been called into question given that they neglect the zero point energy (ZPE) of one of the transition states. When the ZPE of the transition state is included, the reaction barrier becomes large enough that the reaction rate is negligible at low temperatures <cit.>, although some argue that inclusion of the ZPE is not warranted for this transition state and results in overestimation of the reaction barrier <cit.>. Recent gas-phase experiments attempting to perform this route did not confirm any formamide formation, and their detection upper limits are consistent with the reaction barrier that includes the transition state ZPE <cit.>.
§ METHODOLOGY
All of the measurements were collected in the Laboratory for Astrophysics at Leiden Observatory on the IRASIS (InfraRed Absorption Setup for Ice Spectroscopy) chamber. The setup was described in detail in <cit.> and <cit.>, and it has since undergone several upgrades, including a decrease of its base pressure to <1.0×10^-9 mbar by the addition of new pumps, an exchange of the laser used for interference measurements to one with a wavelength of 543 nm (as the formamide ice refractive index was measured by at this wavelength), and the implementation of an independent tri-dosing leak valve system that can be calibrated with a quadrupole mass spectrometer (QMS) following the procedure described in Appendix <ref>.
The optical layout of the chamber remains the same as that shown in Figure 1 in <cit.>: a Ge substrate sits at the center of the chamber and is cooled by a closed-cycle He cryostat to 15 K. Ices are grown on the substrate via background deposition of gases and vapors dosed into the chamber through leak valves. Infrared transmission spectra are collected through two ZnSe viewports that are parallel to the Ge substrate and normal to the IR light beam. During deposition, laser interference patterns used to determine ice thickness are measured on both sides of the Ge substrate (which is opaque and reflective in the visible light range) via photodiode detectors placed outside of viewports positioned 45^∘ from the substrate normal. The patterns obtained from each side of the substrate during deposition show equal deposition rates on both sides. After deposition, the substrate can be heated to obtain IR spectra at different temperatures. In this work, 256 spectral scans with a 0.5 cm^-1 resolution were collected and averaged while the substrate was heated at a rate of 25 K hr^-1, resulting in a temperature uncertainty of ±1.5 K in each heated spectrum. Spectra were collected during heating until reaching the temperature at which the major matrix component desorbed. Before their analysis, all spectra were baseline-corrected using a cubic spline function.
The liquids and gases used in this work were formamide (Sigma Aldrich, ≥99.5%), water (Milli-Q, Type I), carbon dioxide (Linde, ≥99.995%), carbon monoxide (Linde, ≥99.997%), ammonia (PraxAir, ≥99.96%), and methanol (Sigma Aldrich, ≥99.9%). The mixing ratios calculated for all of the spectra via the method outlined in Appendix <ref> are presented in Table <ref>. Uncertainties in the column densities used to calculate these ratios are estimated to be ∼21% for the formamide column densities and ∼27% for the matrix species column densities (see Appendix <ref>). Prior to deposition, the liquid formamide sample was heated to 60^∘C and pumped on directly with a turbomolecular pump in order to remove contaminants (primarily water).
The apparent band strengths of pure formamide are determined via depositing formamide onto the substrate held at 15 K while simultaneously collecting the transmission IR spectra and the laser interference pattern. The thickness d of the ice can be derived from the laser interference pattern via the following equation:
d = mλ/(2√(n^2 - sin^2θ)),
where m is an integer number of constructive fringes, λ is the laser wavelength, n is the ice refractive index (1.361 for formamide at 543 nm, from ), and θ is the angle of incidence.
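As an illustration, this equation translates directly into a few lines of Python; the default parameter values below are those stated in the text (543 nm laser, n = 1.361 for formamide, 45° incidence), and the helper name is ours.

import numpy as np

def ice_thickness(m, wavelength=543e-7, n_ice=1.361, theta_deg=45.0):
    """Thickness (cm) at the m-th constructive fringe.
    wavelength is given in cm so that d comes out in cm."""
    theta = np.radians(theta_deg)
    return m * wavelength / (2.0 * np.sqrt(n_ice**2 - np.sin(theta)**2))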
Enough formamide is deposited so that four constructive fringes are acquired, the thickness of the ice at each fringe peak is calculated, and the integrated absorbances of eight spectral regions (see Table <ref>) are calculated from the spectra collected at the time that a fringe peak was reached. Then, the integrated absorbance for each spectral region is plotted as a function of ice thickness, and the slope of this line, Δ∫ abs(ν) dν/Δ d, is obtained via a least-squares fit. From this value, the apparent band strengths A' can be approximated with an equation based on the Beer-Lambert Law (e.g., ):
A' = 2.303 M/(ρ N_A) × Δ∫ abs(ν) dν/Δ d,
where M is the molar mass of formamide (45.041 g mol^-1), ρ is the density of formamide ice (0.937 g cm^-3, from ), and N_A is Avogadro's number. Using change in integrated absorbance over change in thickness in this equation rather than the absolute values of both variables ensures that there is no contribution of any residue from previous experiments on the substrate to the calculated ice thickness. It also does not require a constant ice growth rate.
The apparent band strengths reported in Table <ref> are the averages of three repeated measurements following this method. The experimental uncertainties derived from the standard deviation of these three measurements range from 3-8% for the eight band strengths. However, simply using the standard deviations from the repeated measurements as the band strengths uncertainties neglects potential systemic sources of error such as uncertainties in the laser alignment geometry and the data analysis procedure. Thus, the uncertainties provided in Table <ref> are calculated via error propagation of all of the experimental terms in Equation <ref>, using the same estimated uncertainties as <cit.> for the ice thickness (4%) and integrated absorbance (10%) as well as the ice density (10%). This calculation yields an uncertainty of 15% for the reported band strength values.
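A minimal sketch of the band-strength determination from this equation, with hypothetical integrated absorbances standing in for the measured values at the four fringe maxima:

import numpy as np

# Hypothetical data: ice thickness (cm) at four constructive-fringe maxima
# and the corresponding integrated absorbances (cm^-1) of one band.
d = np.array([1, 2, 3, 4]) * 2.33e-5
int_abs = np.array([0.21, 0.43, 0.62, 0.85])

slope = np.polyfit(d, int_abs, 1)[0]        # least-squares fit of the slope

M, rho, N_A = 45.041, 0.937, 6.022e23       # g/mol, g/cm^3, mol^-1
A_prime = 2.303 * M / (rho * N_A) * slope   # apparent band strength, cm/molec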
From the pure formamide apparent band strengths, the apparent band strengths of formamide in the investigated mixtures, A'_i, are calculated using the formamide column densities N_mix (obtained from the methods described in Appendix <ref>) via the following equation:
A'_i = 2.303 ×∫ abs(ν) dν/N_mix,
and the relative apparent band strengths, η, are subsequently found by:
η = A'_i/A',
Following propagation of error from the pure apparent band strengths, integrated absorbances, and the formamide column densities in the mixtures (see Appendix <ref>), the uncertainties of the relative apparent band strengths presented here are estimated to be ∼28%.
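In code, these two relations and the quoted error propagation amount to the following sketch; the fractional uncertainties are those adopted in the text, and the function name is our own.

import numpy as np

def relative_band_strength(int_abs_mix, N_mix, A_pure,
                           sig_int=0.10, sig_N=0.21, sig_A=0.15):
    """eta = A'_i / A' with A'_i = 2.303 * int_abs / N_mix; the three
    fractional uncertainties combine in quadrature to ~28%."""
    eta = (2.303 * int_abs_mix / N_mix) / A_pure
    return eta, eta * np.sqrt(sig_int**2 + sig_N**2 + sig_A**2)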
§ RESULTS
The spectra of pure amorphous and crystalline formamide are presented in Figure <ref>, and the eight apparent band strengths calculated at 15 K are presented in Table <ref>. Peak positions and vibrational mode assignments are also provided. Some integrated regions contain multiple overlapping peaks; in these cases, the peak positions and assignments were provided for all peaks within the integrated region, but the peaks were not deconvolved to give an individual band strength for each peak. These band strengths have percent differences ranging from 1-35% compared to those given for the same peak values in <cit.>. As integration bounds were not provided by <cit.>, any discrepancies in band strengths may be caused by differences in chosen integration regions.
The transition from amorphous to crystalline formamide is observed at 170 K, indicated by its bands becoming sharper and narrower and some peaks splitting. The amorphous nature of almost all of the pure and mixed ices collected at 15 K can be ascertained from their spectra, which have typical amorphous features that show evidence of matrix crystallization during the warm-up phase of the experiments. This excludes the mixtures containing CO, whose phase at 15 K in these experiments may be crystalline given recent investigations of CO ice structure ≥10 K <cit.>.
Figure <ref> presents the spectrum of pure formamide ice along with the spectra of the pure matrix components, all at 15 K. The formamide peaks indicated in the shaded areas were selected for full characterization (i.e., their peak positions, FWHMs, and relative band strengths are determined for mixtures): the overlapping C=O stretch and NH_2 scissor at 1700.3 cm^-1/5.881 μm and 1630.4 cm^-1/6.133 μm, respectively, and the slightly overlapping CH bend and CN stretch at 1388.2 cm^-1/7.204 μm and 1328.1 cm^-1/7.529 μm, respectively. These peaks were selected because they are strong, have sharp profiles, and overlap the least with the major peaks of the most common interstellar ices, making them the best candidates for identifying formamide in interstellar ice spectra. There is still some overlap between these formamide peaks and some minor peaks of the matrix components, namely the water OH bend at ∼1600 cm^-1/6.25 μm, the methanol CH_3 and OH bends at ∼1460 cm^-1/6.85 μm, and the ammonia NH scissoring at 1624 cm^-1/6.16 μm. However, with sufficiently high formamide concentrations, it may still be possible to identify formamide in these spectral regions, as these matrix bands are relatively weak and broad.
The matrix- and temperature-dependent changes in these selected formamide ice bands are discussed in the following subsections, and their peak positions, FWHMs, and relative band strengths in different mixtures at various temperatures are reported in Appendices <ref> and <ref>. The NH_2 stretching features at 3371.2 cm^-1/2.966 μm and 3176.4 cm^-1/3.148 μm and the NH_2 wagging and twisting features at 689.2 cm^-1/14.510 μm and 634.0 cm^-1/15.773 μm were excluded from further characterization despite their relatively large band strengths due to their direct overlap with the two most intense water features, the OH stretch at ∼3279 cm^-1/3.05 μm and the H_2O libration at ∼780 cm^-1/12.8 μm, respectively <cit.>. The remaining formamide bands, the CH stretch at 2881.9 cm^-1/3.470 μm, the CH bend overtone at 2797.7 cm^-1/3.574 μm, and the convolved NH_2 rock at 1108.1 cm^-1/9.024 μm and CH out-of-plane deformation at 1056.1 cm^-1/9.469 μm, have low band strengths and directly overlap with various methanol features: the CH_3 stretches at 2950 cm^-1/3.389 μm and 2830 cm^-1/3.533 μm, the CH_3 rock at 1126 cm^-1/8.881 μm, and the C-O stretch at 1027 cm^-1/9.737 μm <cit.>.
§.§ C=O stretching and NH_2 scissoring features (∼1700 and 1630 cm^-1)
Figure <ref> shows how the profile of the C=O band (1700.3 cm^-1/5.881 μm) changes in different mixtures and temperatures and presents the peak positions and FWHMs of these spectra in a scatter plot. This type of scatter plot can help to narrow down the possible thermochemical environments of molecules identified in observations (see Section <ref>). The right bottom plot in the figure shows the strengths of the band in the different mixtures at 15 K relative to the value of the band strength of pure formamide. The integrated regions used to calculate these band strengths also include the NH_2 scissoring mode, which presents as a weak, broad feature overlapping with the red shoulder of the C=O stretch (see Figure <ref>). The FWHM and relative band strengths of the formamide:NH_3 mixture are excluded from the bottom scatter plots in Figure <ref> and the tables in Appendix <ref> due to the significant overlap of this band with ammonia's NH scissoring mode at 1624 cm^-1/6.16 μm. The NH_3 peak is small enough in the NH_3-containing tertiary mixtures relative to the formamide C=O stretch to extract reliable peak positions and FWHMs, but relative band strengths were not calculated.
In pure amorphous formamide (<170 K), the C=O stretch appears as a single broad peak centered at 1704.2 cm^-1/5.868 μm. Generally, being in a mixture causes the feature to sharpen, most dramatically so in apolar mixtures in which CO or CO_2 are the dominant species. For example, the FWHM of the feature in formamide:CO_2 at 15 K is 51.1 cm^-1, over three times narrower than that in pure formamide. Also, in the CO, CO:CH_3OH, and crystalline CO_2 matrices, some peak splitting occurs before the formamide crystallization temperature is reached. Such sharpening and splitting is typical when a polar molecule is highly diluted in an apolar matrix and is caused by the polar molecule being isolated in the matrix as a monomer or dimer, unable to form the hydrogen bonds with other polar molecules that tend to broaden and blend vibrational features (e.g., ). <cit.> also previously observed the formamide peaks splitting due to monomer and dimer formation in their very dilute 1:40 formamide:CO mixture. In the polar mixtures, however, as hydrogen bonding with the matrix is still possible, the feature remains broad. The feature is the most blue-shifted in the binary CO and CO_2 mixtures, where its peak values are 1717.2 and 1703.7 cm^-1, respectively, in the 15 K ices, while in polar mixtures it tends to red-shift, with the most red-shifted peak position being that of the tertiary H_2O:CO_2 mixture, 1694.0 cm^-1. Despite containing a high fraction of apolar CO, the tertiary mixtures with CO:CH_3OH and CO:NH_3 have peak positions similar to the polar mixtures. The relative band strength of this formamide feature is >1 in all of the investigated matrices, with no observable trend related to polarity present in these values.
At formamide's crystalline phase transition temperature (170 K), the C=O peak blue-shifts and splits into multiple blended features. This is only observed in the pure formamide spectrum because all of the matrix molecules investigated here desorb below 170 K. An interesting trend to note is that, as the mixtures increase in temperature, the formamide C=O feature tends to broaden to have a FWHM value more similar to that of pure formamide. This trend can be easily identified in the scatter plot in Figure <ref>, where the scatter points of several of the mixtures move closer to the points of the pure amorphous spectrum as temperature increases. It is also particularly noticeable in Figure <ref> in the spectra of mixtures containing H_2O, which have peak position and FWHM values at high temperatures (>150 K) that are the close to those of the pure spectrum. Sudden broadening of the FWHM to a value closer to that of pure formamide also tends to occur at the matrix crystallization temperatures (for example, in the binary CO_2 mixture between ∼30 and 40 K and in the H_2O-containing mixtures between ∼130 and 150 K). These spectral changes indicate that formamide segregation is occurring in the matrix as the ice is heated and is particularly promoted when the ice undergoes a dramatic restructuring during matrix crystallization. The conclusion that solid-phase formamide diluted in a matrix is mobilized via heating is consistent with formamide thermal processing studies, in which formamide deposited on top of water ice diffused through the water during heating <cit.>.
§.§ CH bending and CN stretching features (∼1388 and 1328 cm^-1)
The shape and position of the CH bend (1388.1 cm^-1/7.204 μm) does not vary much depending on chemical environment or temperature, with peak positions only ranging from 1398.0 - 1387.2 cm^-1 and FWHM values ranging from 11.1 - 27.5 cm^-1 in the mixtures investigated here (see Figures <ref> and <ref>). As in the C=O stretch band, the binary apolar mixtures with CO and CO_2 have the most blue-shifted and narrow peaks; however, a trend of the mixture band shifting during heating to peak position and FWHM values closer to those of the pure band is not as clear. The band strength of the CH bend increases in all of the mixtures (e.g., η=1.63 at 15 K in the formamide:H_2O mixture) except for the CO_2 mixture, in which the band strength decreases slightly (η=0.85 at 15 K).
The CN stretching band (1328.1 cm^-1/7.529 μm) varies much more dramatically across different mixtures and temperatures (see Figures <ref> and <ref>), particularly in the binary apolar mixtures, in which it red-shifts by up to ∼50 cm^-1 and splits into multiple convolved features. In the formamide:CO_2 spectrum, two peaks are present at 15 K at 1316.8 and 1277.0 cm^-1, with the peak at 1277.0 cm^-1 having a greater intensity until 40 K, at which point the intensity of the 1316.8 cm^-1 peak increases and that of the 1277.0 cm^-1 peak decreases. The 1277.0 cm^-1 peak intensity then continues to decrease during heating until CO_2 sublimates at 90 K (see Figure <ref>). This trend is indicative of the 1277.0 cm^-1 peak belonging to the formamide monomer and the 1316.8 cm^-1 peak belonging to the formamide dimer, as it would be expected for the monomer peak to decrease and the dimer peak to increase if segregation occurs during heating, especially during a major ice structure rearrangement like matrix crystallization, which occurs for CO_2 at 40 K. Such assignments are consistent with the assignments in <cit.>, who observed the formamide monomer and dimer in a xenon matrix at 1267.2 and 1305.4 cm^-1, respectively, and supported their assignments with computations. The peak in the formamide:CO spectrum also has a red component that appears to decrease in intensity during heating, but the monomer and dimer peaks are not as clearly distinguishable as more than two peaks appear to be overlapping in that spectrum. In the mixtures containing other polar molecules, the band is generally blue-shifted, broadened, and decreases in intensity relative to the CH bend. The relative strength of the band is close to 1 in most of the characterized polar mixtures, except for the H_2O:CO_2 mixture, which has a relative band strength of 0.75 at 15 K. In contrast, the relative band strength is closer to 2 in all of the primarily apolar mixtures.
While the CN stretch clearly has more potential than the CH bend as a diagnostic of the chemical environment of formamide, it is also much broader and less intense in most of the mixture spectra than in the pure spectra. This diminishes the ability to identify this band in a spectral region where several other astronomically relevant COMs also have features (see Section <ref>).
§ ASTRONOMICAL IMPLICATIONS
The ability of formamide to form via both atom addition and energetic processing in a variety of ices containing C, H, N, and O means that its solid-state presence is plausible in many interstellar environments, ranging from dark interstellar clouds to protoplanetary disks. However, in order to securely detect it, an absorption with a clear peak position and profile that is distinguishable from other ice features in the same spectral region must be identified.
The C=O stretch is amorphous formamide's strongest and sharpest feature, but it overlaps with the blue wing of the strong and broad 6.0 μm feature present in most interstellar ice spectra. Water and ammonia, which have been securely identified in ices, as well as formic acid and formaldehyde, which have been tentatively identified, have features in this spectral region <cit.>. Additionally, many other carbonyl group-containing COMs that have been detected in the gas-phase and may be present in the solid state, like acetaldehyde, acetone, and methyl formate, also have strong absorptions in this wavelength region <cit.>. While this limits the potential of using formamide's C=O band as its primary means of identification, the band can still be used for performing fits spanning a wider wavelength region in combination with other bands.
The CH bend and the CN stretch are medium-strength features that lie in the "COM-rich region" of interstellar ice spectra between 7-8 μm <cit.>. This region, where many organic functional groups have absorptions, sits on the tail of the strong 6.85 μm band (whose assignment remains uncertain but likely contains absorptions by methanol and the ammonium cation, ). The methane CH bending band at 7.68 μm is the most clearly and frequently observed ice band in this region <cit.>, but additional weaker features at 7.03, 7.24, 7.41, and 8.01 μm are also consistently observed toward some sources (Figure <ref>). Candidate carriers suggested for some of these absorptions include species like formic acid, ethanol, acetaldehyde, the formate anion, and, potentially, formamide <cit.>.
As mentioned previously, <cit.> tentatively assigned formamide as a plausible contributor to the 7.24 μm band in W33A using a formamide:H_2O spectrum and calculated a formamide ice upper limit of 2.1% with respect to H_2O, although they pointed out that in their lab data, the formamide peak position was blue-shifted by 0.02 μm relative to the observed band, and that an assignment to the CH bend of formic acid (HCOOH) may be more appropriate. Ethanol (CH_3CH_2OH) and the formate anion (HCOO^-) have also been considered candidates for this band <cit.>. No distinct and consistently observed bands are located at the peak position of the formamide CN stretch at ∼7.5 μm. However, in mixtures (particularly those with polar components), the intensity and sharpness of this band weaken (relative to the intensity and sharpness of the CH bend). Such a profile change makes a distinction of the CN stretch from the continuum in this region less feasible if formamide is present at the low ice abundances expected for COMs, especially given that around this wavelength, many sources also show a broad and significant absorption commonly attributed to SO_2 ice <cit.>. On the other hand, the CH bend remains strong and sharp in all of the mixtures investigated here. All of the other absorption features of formamide either have profiles that are too broad or weak, or overlap directly with the strongest absorptions of the major ice components (see Figure <ref>), and will therefore not be utilized in our hunt for formamide ice.
Thus, if formamide is indeed present in interstellar ices, the CH bend is likely its best tracer. We focus our subsequent analysis on the comparison of the formamide CH bend in mixtures to the observed 7.24 μm band in nine spectra collected toward a variety of sources by ISO, Spitzer, and the recently launched JWST (Figure <ref>). The ISO (SWS) spectra include three massive young stellar objects (MYSOs), W33A, NGC 7538 IRS 9, and AFGL 7009s, and the Spitzer (IRS) spectra include three low-mass young stellar objects (LYSOs), B1c, 2MASS J17112317, and RNO 91. These archival spectra were selected due to their 7-8 μm regions having several deep and distinct features, indicating that they may be COM-rich, and because their profiles in this region slightly differ, demonstrating the variety of spectral features that have been observed here. In addition, three spectra recently collected by the JWST have been included: two pristine, high-extinction dark clouds toward background stars, NIR38 and J110621, observed with the Mid-InfraRed Instrument (MIRI) Low-Resolution Spectrometer (LRS) <cit.> in the ERS program Ice Age (program 1309, PI: M. McClure), and a Class 0/I low-mass protostar, L1527, observed with the MIRI Medium-Resolution Spectrometer (MRS) in the GTO program JOYS (program 1290, PI: E. F. van Dishoeck, ). These are some of the first spectra ever collected of such low-flux sources. While the resolution of the ISO data is comparable to that of the JWST data, the resolution of the Spitzer data is significantly lower (R∼60-100), limiting its use in the analysis of weak and narrow bands.
The 7.24 μm band is present to some extent in all of the sources, usually at an optical depth similar to the 7.41 μm band in the local continuum-subtracted spectra. The position and FWHM of the band were extracted from the spectra that have spectral resolutions high enough to clearly define the shape and position of the peak – that is, the ISO-SWS and JWST MIRI-MRS MYSO spectra – by fitting a Gaussian profile to the peak. Figure <ref> shows these observed peak positions and FWHMs (indicated with star shapes) in a scatter plot with the peak positions and FWHMs of the CH bend extracted from the laboratory spectra. The peak positions and FWHMs extracted from laboratory spectra of ethanol in a H_2O mixture <cit.>, formic acid in a H_2O:CH_3OH mixture <cit.>, and ammonium formate in a H_2O mixture at 150 K <cit.> are also included in this figure (indicated with the letters E, F, and H respectively) to enable a comparison between formamide and the other commonly proposed carriers. From this plot, it is evident that, while the polar mixtures have the band position and profile closest to the observations, they are all still too blue-shifted (by ∼7 cm^-1/0.04 μm) from the astronomical values for formamide to be the major carrier of this band. In contrast, ethanol, formic acid, and the formate anion in polar mixtures are much better candidates.
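As an aside for readers reproducing this kind of measurement, the peak-position and FWHM extraction described above reduces to a nonlinear least-squares fit of a Gaussian to the local continuum-subtracted optical depth. A minimal sketch in Python is given below; the function names and the synthetic band parameters are illustrative only and do not correspond to the actual reduction scripts used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(nu, tau0, nu0, sigma):
    # Gaussian optical-depth profile: depth tau0, center nu0, width sigma
    return tau0 * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2)

def fit_peak(nu, tau, guess=(0.05, 1381.0, 5.0)):
    """Return the peak position and FWHM (both in cm^-1) of a fitted Gaussian."""
    (tau0, nu0, sigma), _ = curve_fit(gaussian, nu, tau, p0=guess)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    return nu0, fwhm

# Illustrative use on a synthetic 7.24 um (1381 cm^-1) band:
nu = np.linspace(1360.0, 1400.0, 400)
tau = gaussian(nu, 0.04, 1381.0, 4.0) + np.random.normal(0.0, 0.002, nu.size)
print(fit_peak(nu, tau))
```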
It is still possible that formamide could be contributing to the blue wing of this band. However, to result in non-negligible upper limits, the formamide must be present in a matrix containing other polar molecules, as the band is far too blue-shifted in the purely apolar mixtures to contribute significantly to the observed absorption. Therefore, we derived upper limits of formamide by fitting the CH bend in the laboratory spectrum of the formamide:H_2O mixture at 15 K to the 7.24 μm band in the local continuum-subtracted observed spectra (see example fits in Figure <ref>). The water mixture was chosen for the fit for simplicity's sake and due to the fact that water is by far the most abundant interstellar ice component. The water contribution was subtracted out of the laboratory ice spectrum using a spectrum of pure water ice to ensure that absorption by the broad water bending band did not contribute to the calculated formamide upper limit. The band strength used to perform the upper limit calculation was 1.5×10^-17 cm molec^-1, the band strength of the CH bend in pure formamide at 15 K (from Table <ref>) multiplied by the relative band strength of formamide in H_2O at 15 K (1.63, from Appendix <ref>).
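The upper-limit calculation itself is a one-line integral once the laboratory band has been scaled to the observed feature. The sketch below shows the conversion from integrated optical depth to column density; the band strength value is the one quoted above, and the helper name is hypothetical.

```python
import numpy as np

# Band strength adopted in the text: CH bend of formamide in H2O at 15 K
A_CH_BEND = 1.5e-17  # cm molecule^-1

def column_density(nu_cm, tau):
    """N = integral(tau dnu) / A for the scaled laboratory band, with nu_cm
    in cm^-1; dividing by N(H2O) gives the percentage abundance quoted."""
    return abs(np.trapz(tau, nu_cm)) / A_CH_BEND
```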
When deriving upper limits, it is prudent to ensure that the laboratory spectrum fits the observed spectrum across a wider wavelength range, as upper limits can easily be overestimated if only one band is considered. Subtracting out the contributions of other ices that absorb in the analyzed spectral region, if their abundances can be unambiguously determined from other spectral regions, also prevents further upper limit overestimations. Therefore, we ensured that the calculated upper limits in Table <ref> do not result in a C=O stretch absorption that exceeds the observed optical depth of the ∼6 μm band in our selected objects. Prior to checking the C=O absorption in this region, the spectral contribution of water's OH bend at ∼1655 cm^-1/6.04 μm was removed from the observed spectra by scaling a laboratory water spectrum at 15 K from <cit.>, so that the water column density of the scaled spectrum was the same as what was previously determined for these objects, and then performing a subtraction. (For the ISO and Spitzer data, the water column densities from <cit.> were used for scaling; for the JWST MIRI-LRS data, the water column densities from <cit.> were used. For the JWST MIRI-MRS spectrum (L1527), the water column density was determined by first subtracting the silicate contribution by fitting the GCS3 spectrum to the 10 μm silicate band and then fitting the laboratory water spectrum from <cit.> to the water libration band.)
The resulting upper limits of solid-state formamide, presented in column densities as well as with respect to the abundance of water in each source, are presented in Table <ref>. These upper limits (ranging from 0.35-5.1% with respect to H_2O) are all at least an order of magnitude greater than (but consistent with) the observed gas-phase formamide abundances in three comets (0.016-0.021% with respect to H_2O) as well as the average beam dilution-corrected abundance of 22 MYSOs from the ALMAGAL survey (∼0.05% with respect to H_2O, assuming a CH_3OH/H_2O ratio of ∼5%). As a beam dilution-corrected gas-phase formamide abundance has also been obtained for the LYSO B1c (∼0.05%), one of the sources investigated here, it can be directly compared to our solid-state formamide upper limit derived from the object's low-resolution Spitzer data. While our upper limit (≤0.93%) is consistent with this gas-phase abundance, it is an order of magnitude greater. We expect the precision of this upper limit to be further refined by future high-resolution observations of B1c, planned to be observed by MIRI-MRS in the JOYS program.
A formamide upper limit of 2.1% with respect to H_2O was previously derived for W33A in <cit.> by assuming that the entire 7.24 μm band consisted of formamide and using a band strength of 3.2 × 10^-18 cm molec^-1 attributed to <cit.>, where it is unclear for what phase of formamide this band strength was derived. Despite our very different approaches, we have fortuitously arrived at nearly the same upper limit value for W33A (2.2%).
In the higher resolution observational data of MYSOs explored here, the lack of a formamide CH bending feature distinct from other COM absorptions prevents a secure formamide ice detection. However, it is clear from the example upper limit fits shown in Figure <ref> that the profile of the 7.24 μm feature is not uniform across different sources, and several sources, such as NGC 7538 IRS 9, NIR38, and RNO 91, may have a blue wing on this band that spectrally overlaps with the CH bend of formamide. Therefore, it is possible that a more distinct absorption at the expected 7.20 μm will emerge more clearly in sources targeted by future JWST MIRI-MRS observations. The first ice spectra arriving now from MIRI-MRS illuminate a promising future. In the spectrum of the LYSO IRAS 15398-3359 acquired by the JWST CORINOS program (program 2151, PI: Y. -L. Yang, ), the COM features between 7-8 μm previously detected barely above 3σ levels in the spectra in Figure <ref> are beautifully resolved (although a distinct absorption centered at 7.20 μm is not present). More sources known to have strong COM absorptions in this spectral region have been specifically targeted by the JOYS program as well as the JWST proposals "It's COMplicated" (program 1854, PI: M. McClure, ) and "Ice chemical complexity toward the Ophiuchus molecular cloud" (program 1959, PI: W. R. M. Rocha, ), and these sources are scheduled to be observed throughout the remainder of this year. As demonstrated by the examples of spectral analysis of ices in this section, the laboratory spectra from this work can serve as a toolkit for formamide identification in such ice observations.
§ CONCLUSIONS
In an effort to facilitate the hunt for formamide in interstellar ices, laboratory spectra of pure formamide and formamide in various astronomically relevant ice mixtures at temperatures ranging from 15 to 212 K have been collected and made freely available to the astronomical community on the Leiden Ice Database for Astrochemistry (LIDA). The band strengths at 15 K for all pure formamide features between 4000 - 500 cm^-1/2.5 - 20 μm are presented, and the peak positions, FWHMs, and relative apparent band strengths of the three bands identified as the most promising for future formamide detection were extracted from the pure and mixed formamide spectra. These spectra and extracted data were used to assess the present and future detectability of formamide ices in various interstellar objects. The primary conclusions drawn from this work are as follows:
* Out of the eight formamide features in the investigated IR spectral region, the C=O stretch (1700.9 cm^-1/5.881 μm), the CH bend (1388.3 cm^-1/7.203 μm), and the CN stretch (1328.0 cm^-1/7.530 μm) are likely to be the most useful for future formamide identification due to their strength, sharp profile, and low overlap with the strongest features of the major ice components, with the CH bending feature being the most promising. The NH_2 stretching features (3371.2 cm^-1/2.966 μm and 3176.4 cm^-1/3.148 μm) and the NH_2 wagging and twisting features (689.2 cm^-1/14.510 μm and 634.0 cm^-1/15.773 μm) directly overlap with strong water absorptions, while the CH stretch (2881.9 cm^-1/3.470 μm), the CH bend overtone (2797.7 cm^-1/3.574 μm), and the convolved NH_2 rock and CH out-of-plane deformation (1108.1 cm^-1/9.024 μm and 1056.1 cm^-1/9.469 μm) have both low band strengths and direct overlap with methanol absorptions, making them less suitable for formamide identification.
* In the mixtures investigated here, the CN stretch is the most affected by ice composition – its peak position varies by up to ∼68 cm^-1 and its FWHM by up to ∼50 cm^-1 across the mixtures investigated here, with peak splitting observed in the apolar mixtures. The C=O stretch can also change significantly, depending on the matrix, by up to ∼27 cm^-1 in peak position and up to ∼40 cm^-1 in FWHM, although peak splitting in the apolar mixtures is not as prominent as in the CN stretch. The CH bend is relatively unaffected by ice composition, with its peak position and FWHM only varying by ∼11 cm^-1 and ∼15 cm^-1, respectively, across the different mixtures. Relative to the pure spectrum, the band strength of the C=O stretch increases in all of the investigated mixtures. The CH bend band strength also increases in all of the mixtures except the binary CO_2 mixture, while a significant increase in the band strength of the CN stretch is only observed in the mixtures dominated by an apolar component.
* Although the polar formamide mixtures provide the closest match to the 7.24 μm band observed toward nine lines of sight (including dense clouds, LYSOs, and MYSOs) with three different space telescopes (ISO, Spitzer, and JWST), none provide a convincing fit, with all having their CH bend peak position approximately 7 cm^-1/0.04 μm too far to the blue from the clearly observed band at 1381 cm^-1/7.24 μm. Instead, formic acid and ethanol mixtures containing H_2O provide a better fit. However, this does not exclude the possibility of formamide being present in these ices. The calculated formamide upper limits in these objects range from 0.35-5.1% with respect to H_2O, which are consistent with gas-phase abundances of formamide in several LYSOs, MYSOs, and comets. The upper limit value derived for W33A, 2.2% with respect to H_2O, is fortuitously in agreement with that derived by <cit.>.
While a more secure formamide detection is not possible with the telescopic data explored in this work, the first ice observations arriving from JWST demonstrate an unprecedented sensitivity and spectral resolution that will enable us in the near future to broaden the search for formamide ice in both objects previously observed by Spitzer, whose analysis is limited by low spectral resolution, as well as newly observed objects that were too dim to be observed by Spitzer or ISO.
This work is supported by funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 101019751 MOLDISK), the Netherlands Research School for Astronomy (NOVA), and the Danish National Research Foundation through the Center of Excellence "InterCat" (Grant agreement no.: DNRF150). The authors acknowledge the Ice Age (program 1309, PI: M. McClure) and JOYS (program 1290, PI: E. F. van Dishoeck) observing programs for the JWST astronomical data used in this work. KS acknowledges Thanja Lamberts and Pooneh Nazari for helpful discussions about the formamide formation mechanism and Sergio Ioppolo for helpful discussions about the QMS calibration methodology.
§ PEAK POSITIONS AND FWHMS OF FORMAMIDE IN PURE AND MIXED ICES
This appendix contains the peak positions and FWHMs of the formamide features selected for complete IR characterization in this work. The values are listed for the formamide features in pure ice as well as in mixtures containing H_2O, CO_2, CO, CH_3OH, and NH_3. The peak position is the wavelength at which the absorption reaches its maximum, and the FWHM is the width of the peak between the half-maximum values on each side. A Savitzky-Golay filter with a second-order polynomial was applied to many of the mixture spectra here before extraction of the peak position and FWHM to eliminate shifts in these values caused by noise. The smoothing windows used ranged from 10-100 depending on the level of noise present in each spectrum, and care was taken that these smoothing windows did not warp the shape of any features. Values were extracted up to the temperatures at which the major matrix component desorbed.
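For reference, this smoothing step corresponds to a standard Savitzky-Golay filter, e.g., as implemented in scipy; the window value below is only a placeholder within the 10-100 range quoted above (the scipy implementation additionally requires an odd window length).

```python
from scipy.signal import savgol_filter

def smooth_spectrum(absorbance, window=51, polyorder=2):
    """Second-order Savitzky-Golay smoothing applied before peak-position
    and FWHM extraction; the window must be chosen small enough that the
    band profiles are not warped."""
    return savgol_filter(absorbance, window_length=window, polyorder=polyorder)
```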
For formamide features in mixtures where there is direct overlap with weaker matrix component bands (e.g., the C=O stretch in the NH_2CHO:H_2O mixture), the spectrum of the matrix component without formamide collected using identical experimental parameters at the corresponding temperature was scaled to the formamide mixture spectrum via a feature without overlap with formamide features and subtracted prior to peak position and FWHM extraction. These cases are denoted with a ^M. For formamide features in mixtures where the formamide features lie on the tails of bands or on very wide bands without sharp features (e.g., the CH bend and CN stretch in the NH_2CHO:NH_3 mixture), a second-order polynomial was used to perform a local continuum subtraction. These cases are denoted with a ^P. For formamide features in mixtures where overlap with a strong matrix component band was very substantial and difficult to reliably subtract (e.g., the C=O stretch in the NH_2CHO:NH_3 mixture), only peak positions are given. These cases are denoted with a ^N. For formamide features that contain multiple peaks, all peak positions are given, and the FWHM of the strongest peak is given. However, if a weaker peak maximum occurs within the two half maximum values of the stronger peak (e.g., the CN stretch in the NH_2CHO:CO 15 K mixture), it is included in the FWHM. These cases are denoted with a ^B.
§ RELATIVE APPARENT BAND STRENGTHS OF FORMAMIDE IN PURE AND MIXED ICES
This appendix provides the relative apparent band strengths (η) of formamide, calculated via Equation <ref>, where the value of A' used in the calculations is the apparent band strength of the respective band in the pure amorphous formamide ice at 15 K (given in Table <ref>). Thus, the η value of each band in the pure ice at 15 K is unity. The integration ranges used to calculate the integrated absorbances are stated for each mixture individually, as the same integration ranges were not used for all mixtures due to shifting peak positions and FWHMs. Different integration ranges were used to calculate the integrated absorbances of the amorphous and crystalline pure formamide peaks for the same reason.
These η values can be used to calculate column densities or upper limits of formamide in a specific mixture and at a specific temperature by simply multiplying the corresponding relative apparent band strength by the appropriate apparent band strength in Table <ref>.
§ QMS CALIBRATION OF AN INDEPENDENT TRI-DOSING LEAK VALVE SYSTEM AND MIXING RATIO DETERMINATION
§.§ Calibration procedure and mixing ratio determination
The new tri-dosing system mentioned in Section <ref> allows for simultaneous but independent deposition of gases and vapors via three leak valves, each connected to a separate gas line. Compared to our previous method, in which gases and vapors were premixed in the desired ice ratio in a gas bulb and then dosed into the chamber through a single valve, the new method allows for codepositing multiple gases and vapors without experimental errors in the ratio caused by mixing gases with different volatilities in a single bulb or by dosing gases that may have different flow, pumping, and substrate deposition rates through the same valve. Consequently, it greatly improves the ability to create mixtures with precisely determined ratios of molecules with low volatilities like formamide, which is challenging in traditional premixing procedures. The benefits of independent multidosing systems were also described for similar systems with two leak valves in <cit.> and <cit.>.
There are several ways to calibrate such a system to ensure a certain ratio of ice components. One such method is calibrating the deposition rate on the substrate to a specific leak valve position with a specific pressure of the gas or vapor of choice in its manifold line. However, because formamide has a very low vapor pressure compared to liquids like H_2O and CH_3OH and tends to stick to and condense in various parts of the line, reproducing a specific line pressure throughout multiple experiments using this method is difficult. Therefore, to conduct a systematic and thorough IR characterization of formamide in a wide variety of ices with precisely constrained mixing ratios, a different method is necessary.
For this purpose, we calibrate each molecule's ice deposition rate against the intensity of its mass signal measured with a QMS during deposition. In this calibration procedure, a pure molecule is dosed at a constant rate into the chamber, with the substrate cooled to the desired deposition temperature and the IR spectrometer continuously collecting IR spectra, while the QMS continuously collects mass peak intensity values of selected mass-to-charge ratios (m/z) in the selected ion monitoring (SIM) mode. The IR spectrometer is used to measure the ice column density rather than the laser interference because the formamide deposition pressure does not remain stable over the long period of time necessary to generate multiple interference fringes (>18 hours), which would be needed to reliably extract a deposition rate. Conversely, a deposition rate can be extracted from integrated absorbance growth rates (obtained via a least-squares fit to the integrated absorbance over time) in ∼30 mins, during which time the formamide deposition rate remains stable (as indicated by the linearity of the integrated absorbance increase over time). The integrated absorbance growth rate for that molecule can then be correlated to a specific mass peak's signal intensity (typically the molecule's base peak) in the QMS (obtained by averaging the mass peak's signal intensity values collected during the deposition and simultaneous IR data collection). The integrated absorbance growth rate can then be converted to the ice column density growth rate, dN/dt, via the following equation if the band strength of the pure molecule, A, is known:
dN/dt = (2.303/A) × d(∫ abs(ν) dν)/dt.
Table <ref> provides the peak used for the calibration of each pure molecule and its corresponding band strength and reference.
Via this method, a calibration curve relating a mass peak's signal intensity in the QMS to its column density growth rate can be determined, with the slope of this curve referred to here as a molecule's sensitivity (see Figure <ref> for an example of such a calibration). When starting a deposition, the leak valve can then be opened accordingly so that the mass signal of the molecule in the QMS corresponds to the desired column density growth rate. In this work, such calibration curves were completed for all molecules used in these spectra with a Spectra Microvision Plus QMS. The relationship between column density growth rate and QMS signal intensity is linear for all molecules within the deposition pressure ranges used (R^2 values of the linear fits ranged from 0.9699-0.9999 with an average of 0.9936).
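Schematically, the calibration chain described above reduces to two linear fits, one over time and one over QMS signal. The sketch below assumes arrays of measurement times, integrated absorbances, and averaged mass-peak intensities; the names are illustrative.

```python
import numpy as np

def growth_rate(t, int_abs, A):
    """dN/dt from the equation above: the slope of integrated absorbance
    versus time, converted with the pure-ice band strength A (cm molec^-1)."""
    slope = np.polyfit(t, int_abs, 1)[0]
    return 2.303 * slope / A

def calibration_curve(qms_signal, dN_dt):
    """Linear fit relating QMS signal intensity to dN/dt; the slope is
    the molecule's sensitivity."""
    sensitivity, intercept = np.polyfit(qms_signal, dN_dt, 1)
    return sensitivity, intercept
```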
After the experiment, the mass signal data during the deposition can be converted via the equation from the calibration curve to a column density growth rate, which is then integrated over time to give the absolute column density of each species at the end of the deposition. However, in the case that some of the species in a given mixture share their strongest mass peaks and have no alternative strong peaks without overlap with the other mixture components (which is the case for several mixtures in this work), the individual column density growth rates must be extracted from the mass spectra by utilizing ratios of a given molecule's base peak to another mass peak that is not shared with any other molecules in a given mixture. For example, the mass spectrum of formamide contains a peak at 28, the base peak of CO. Thus, during the deposition of the formamide:CO mixture, the 28 m/z signal contains contribution from both formamide and CO. The contribution of formamide to the signal at 28 m/z was calculated by dividing the signal at 45 m/z (which, in this mixture, only formamide contributed to) by the ratio of the 45 and 28 m/z peaks during pure formamide deposition. This calculated contribution was then subtracted from the 28 m/z signal to yield the CO 28 m/z signal.
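The overlap correction for shared mass peaks is a simple ratio-based subtraction; a sketch for the formamide:CO example above (with hypothetical variable names) is:

```python
def co_signal_at_28(s28_total, s45_formamide, ratio_45_to_28_pure):
    """Subtract formamide's contribution to m/z = 28 using the 45/28 peak
    ratio measured during pure formamide deposition."""
    formamide_at_28 = s45_formamide / ratio_45_to_28_pure
    return s28_total - formamide_at_28
```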
In order to estimate the error of the calculated column density of each component and, subsequently, the mixing ratios in each ice, multiple sources of error have been considered. These are discussed in the following subsections.
§.§ Ion interference effect
Ion interactions within the instrument, such as ion-molecule interactions or ions interacting with the QMS filament or rods, during the dosing of multiple species into the chamber can affect a molecule's sensitivity. Such interactions between two different species can cause their sensitivities to deviate from the values determined in the calibration of each species in pure form. This phenomenon is often referred to as the ion interference effect, and it complicates using a mass spectrometer to quantify gases or vapors in a mixture <cit.>.
The magnitude of this effect is highly dependent on the species as well as the instrument. It increases with total pressure and decreases for a given species as its proportion in a mixture increases <cit.>. Thus, the sensitivities that are most affected by this effect are those of species that are present in the lowest proportions in a mixture. Given that our formamide dosing pressure was in the range of a couple 10^-9 mbar and that the intended ratio of formamide to matrix components was ∼5:100 in the case of binary mixtures and 5:100:25 in the case of tertiary mixtures, we treated the interference effect of formamide on the matrix components as negligible and accounted for ion interference only in the formamide signal. While the formamide absolute column densities are necessary to calculate its relative band strengths (see Section <ref>), the absolute column densities of the matrix components are not needed to find any values other than the mixing ratios.
In order to quantify the ion interference effect on formamide in each mixture, at the start of each deposition, formamide was first dosed alone, and its mass signal was given ∼5 min to stabilize before the other matrix components were introduced into the chamber. Although this meant that each experiment started with a very brief deposition of pure formamide, the deposition rate of formamide was so slow in all of the experiments (on the order of tens of monolayers per hour) that this brief pure deposition was usually not even noticeable above the noise level in the IR spectra. Then, the ratio between formamide's signal before and after the matrix molecules were added to the chamber was used as a correction factor to remove the ion interference effect from formamide's signal. An example of this correction is shown in Figure <ref> for the formamide:CH_3OH mixture, which had the highest correction factor of all the mixtures (1.11).
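In code form, the correction amounts to scaling by the ratio of the mean formamide signal before and after the matrix gases are introduced (a sketch with hypothetical names):

```python
import numpy as np

def interference_corrected(signal, sig_before, sig_after):
    """Scale the formamide mass signal by the before/after ratio to remove
    the ion interference effect (e.g., a factor of 1.11 for the
    formamide:CH3OH mixture)."""
    factor = np.mean(sig_before) / np.mean(sig_after)
    return factor * signal
```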
The ion interference effect on formamide was noticeable in all of the mixtures where the major matrix component was polar, while it was not detected above the noise level in the mixtures in which the major matrix component was apolar. In order to provide a conservative estimate of the error caused by the ion interference effect on the calculated column density of formamide, the percent difference of the formamide column density before and after the ion interference effect correction was obtained for all mixtures in which the effect was detected above the noise level. The average percent difference was ∼5%, with the highest percent difference being that of the formamide:CH_3OH mixture (∼10%). To avoid underestimating the error in any of the mixtures, we use this maximum error, ∼10%, as the uncertainty in the column density caused by the ion interference effect.
The sensitivity of a QMS can also drift over time. However, such drift is typically only significant over timescales spanning several months to a couple years, and it is more significant for absolute sensitivities than for relative sensitivities <cit.>. The contribution of this drift to the method error was assumed to be negligible here given that all the formamide mixture spectra were collected within a span of two months, and that the calibration curves were usually either determined within a few days of their use to create an ice mixture or were frequently updated with new values that were consistent with the fits to the previous values.
§.§ Error calculation
In this method of determining ice column densities, multiple sources of error must be considered. First, there is the error in the method used to determine the ice column density growth rate (dN/dt). This error can be estimated by propagating uncertainties of the variables in Equation <ref>. For all ices, the integrated absorbance growth rate uncertainty is estimated to be 10%, as mentioned in Section <ref>. For formamide, the uncertainty in the band strengths reported in this work is estimated to be 15% (also see Section <ref>). For the matrix components, literature band strength values were used (see Table <ref>). However, variations between the band strengths reported in different publications can be large (e.g., <cit.>). For this reason, we estimate a 25% uncertainty for the literature band strengths used.
Then, we experimentally determine the uncertainty resulting from converting the integrated QMS measurement to a column density by comparing the column density calculated from the QMS signal with the column density calculated from the integrated IR absorbance at the end of a pure molecule's IR measurement. Comparing these differences in formamide, H_2O, and CO deposition experiments resulted in an average error of ∼2.5%. To be conservative, we estimate the error from converting the QMS measurement to a column density to be 5%.
Propagating all of these uncertainties, along with the 10% uncertainty from the ion interference effect for the formamide measurements, results in an uncertainty of ∼21% for the formamide column densities and ∼27% for the matrix column densities in the ice mixtures.
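These totals follow from adding the individual relative uncertainties in quadrature, which can be verified directly:

```python
import numpy as np

def quadrature(*rel_err):
    """Combine independent relative uncertainties in quadrature."""
    return float(np.sqrt(np.sum(np.square(rel_err))))

# Formamide: growth rate (10%), band strength (15%), QMS conversion (5%),
# ion interference (10%); matrix: growth rate (10%), literature band
# strength (25%), QMS conversion (5%).
print(quadrature(0.10, 0.15, 0.05, 0.10))  # ~0.21, i.e., ~21%
print(quadrature(0.10, 0.25, 0.05))        # ~0.27, i.e., ~27%
```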
|
http://arxiv.org/abs/2307.07641v1 | 20230714220224 | 5-12 pc resolution ALMA imaging of gas and dust in the obscured compact nucleus of IRAS 17578-0400 | ["Chentao Yang", "Susanne Aalto", "Sabine König", "Santiago Del Palacio", "Mark Gorski", "Sean Linden", "Sebastien Muller", "Kyoko Onishi", "Mamiko Sato", "Clare Wethers"] | astro-ph.GA | ["astro-ph.GA", "astro-ph.HE"] |
We present 0.02–0.04″ resolution ALMA observations of the compact obscured nucleus (CON) of IRAS 17578-0400. A dusty torus within the nucleus, approximately 4 pc in radius, has been uncovered, exhibiting an unusually flat spectral index at ALMA Band 3, likely due to millimeter corona emission from the central supermassive black hole (SMBH). The dense gas disk, traced by ^13CO(1–0), spans 7 pc in radius, and its asymmetrical structure along the minor axis suggests an outflow driven by a disk wind. Collimated molecular outflows (CMO), traced by the low-velocity components of the 3–2 lines, align with the minor axis of the gas disk. Examination of position-velocity plots of the 3–2 lines reveals a flared dense gas disk extending to a radius of ∼ 60 pc, infalling and rotating at speeds of about 200 and 300 km s^-1, respectively. A centrifugal barrier, located around 4 pc from the dynamical center, implies an SMBH mass of approximately 10^8 M_⊙, consistent with the millimeter corona emission estimate. The CMO maintains a steady rotation speed of 200 km s^-1 over the 100 pc scale along the minor axis. The projected speed of the CMO is about 80 km s^-1, corresponding to ∼ 500 km s^-1 assuming an inclination angle of 80^∘. Such a kinematic structure, a disk-driven collimated rotating molecular outflow supplied with gas by an infalling rotating disk, indicates that the feedback of the compact obscured nucleus is likely regulated by the momentum transfer of the molecular gas that feeds both the nuclear starburst and the supermassive black hole.
§ INTRODUCTION
Compact obscured nuclei (CONs) represent an extreme phase of galaxy evolution where rapid supermassive black hole growth and/or compact star-forming activity is completely obscured by gas and dust <cit.>. CONs are found in local luminous and ultraluminous infrared galaxies (U/LIRGs), with an occurrence rate of 38% in ULIRGs and 21% in LIRGs <cit.>. These CONs are extremely dusty, with column densities high enough to obscure X-ray to far-infrared emission, sometimes leaving optically thin windows only at the submillimeter to radio bands <cit.>. Rotational transitions of vibrationally excited HCN thus become a good tracer of CONs: they can probe deeper into the obscured nuclei, absorbing mid-infrared photons (e.g., at 14 μm) and radiating at submillimeter bands. CONs are therefore selected based on the HCN-vib lines, considering both their luminosity and surface density. The intense infrared radiation emanating from warm dust in CONs can account for a significant fraction of the bolometric luminosity of the galaxies. This radiation is thought to be powered either by the central active galactic nucleus (AGN) and/or an extreme nuclear starburst; the exact power source of CONs is not yet fully understood <cit.>. Recent observations suggest that collimated molecular outflows (CMO) are ubiquitous in CONs <cit.>. However, the origin of the CMOs remains unclear. Understanding this could potentially shed light on the interplay between the AGN and the nuclear starburst, as well as the correlation between the growth of the nuclear stellar mass and the mass of supermassive black holes (SMBHs).
To address these issues, we targeted IRAS 17578-0400 (L_IR = 2.3 × 10^11 L_⊙, redshift z=0.0134, corresponding to a scale of 284 pc/″), one of the LIRGs identified as a CON in the CONquest survey <cit.>. Using high-angular-resolution ALMA Band-3 and Band-6 images, we are able to probe the spatial and kinematic structure of the ISM at scales of 5–12 pc for the first time, providing important clues about the nature of CONs.
§ STRUCTURE OF THE CON IN IRAS 17578-0400 REVEALED BY ALMA
We have obtained high-angular-resolution ALMA observations at Band 3 (∼ 105 GHz) and Band 6 (∼ 255 GHz), with angular resolutions reaching 0.06–0.03″ and 0.04–0.02″ (from natural to uniform weighting), respectively.
Structure of the ALMA continuum.
As shown in the upper panel of Fig. <ref>, we find a very compact continuum structure in the ALMA Band-3 and Band-6 images at scales of 20–30 pc. Using uniform weighting, we are able to resolve the most compact structures of the Band-6 continuum, where the source breaks into two components: a prominent eastern component and a western one. These two components are also consistent with the elongated structure seen in the Band-3 continuum image. Such a structure suggests either the presence of a ∼ 4 pc dusty torus around the AGN or a dual AGN with a separation of ∼ 8 pc.
Examining the spectral energy distribution of the continuum, combined with the JVLA Ka- and Ku-band high-angular-resolution images <cit.>, we find that the SED of the CON cannot be explained by a combination of synchrotron, free-free, and optically thick/thin dust emission. The flat ALMA Band-3 in-band spectral index (∼ 1.2) and the Band-3 to Band-6 ratio (∼ 3) indicate that an additional contribution from millimeter corona emission <cit.> is significant. Our model, derived from fitting the corona emission, suggests an SMBH mass of 9.1^+2.1_-3.8×10^7 M_⊙.
Structure of the gas components.
We highlight here the spatial structures of HCN(3–2) and ^13CO(1–0) on top of the dust continuum in the lower panel of Fig. <ref>. As in other CONs, HCN(3–2) shows a combination of a ∼ 60 pc flared rotating disk with a ∼ 120 pc CMO (in the north-south direction) extending perpendicular to the disk plane on each side. ^13CO clearly shows a disk structure surrounding the dust continuum, and its velocity structure suggests a rotating disk with a radius of ∼ 7 pc whose kinematic center is located between the two dust components, supporting the scenario of a gas disk surrounding the dusty torus and rotating around the central SMBH (or a revolving binary SMBH).
Besides the rotating disk, we also find tail-like asymmetrical outflow structures, with a blue-shifted "tail" in the northeast and a red-shifted tail towards the southwest (Fig. <ref>). Similar structures have been found in Galactic circumstellar disks <cit.>. Such an asymmetry is caused by a disk-launched wind combined with material falling asymmetrically onto the disk. Considering that the ^13CO wind is connected to the HCN(3–2) CMO, the structure of ^13CO indicates that the CMO is likely also driven by the rotating disk.
Kinematics.
To further understand the kinematics of the CON and CMO, we have examined the structure of the position-velocity (PV) plots of HCN(3–2), as shown in Fig. <ref>. The major-axis PV plot reveals bright emission from non-Keplerian motions. We find that an infalling rotating disk model <cit.> can explain the PV structure well. From the model, we derive an overall rotation speed of the flared gas disk of ∼ 300 km s^-1 with an infall velocity of ∼ 200 km s^-1, and a centrifugal barrier radius of ∼ 4 pc (where the rotation speed is ∼ 300 km s^-1), suggesting an SMBH mass of 10^8 M_⊙, consistent with the estimate from the millimeter corona emission. From the minor-axis PV plot, we also find that the CMO extends to about 120 pc along the minor axis, less than half the ∼ 300 pc extent of the CMO identified in lower-spatial-resolution data <cit.>, possibly because extended emission is resolved out. Offset PV cuts (with respect to the major axis) perpendicular to the CMO direction also reveal that the CMO rotates at a speed of ∼ 200 km s^-1 with little slowing down.
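As a consistency check on the quoted numbers, both the enclosed dynamical mass and the deprojected outflow speed follow from elementary relations. The short sketch below, which is illustrative only and assumes a simple Keplerian estimate M = v^2 r / G and a polar outflow whose line-of-sight component scales as cos i, reproduces the ∼10^8 M_⊙ and ∼500 km s^-1 values.

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
PC = 3.086e16      # m
MSUN = 1.989e30    # kg

def enclosed_mass_msun(v_kms, r_pc):
    """Dynamical mass inside radius r for circular speed v: M = v^2 r / G."""
    return (v_kms * 1e3) ** 2 * (r_pc * PC) / G / MSUN

def deprojected_speed(v_proj_kms, incl_deg):
    """Outflow speed along the minor axis for a disk inclined by i."""
    return v_proj_kms / np.cos(np.radians(incl_deg))

print(f"{enclosed_mass_msun(300.0, 4.0):.1e}")  # ~8e7 Msun, i.e., ~1e8
print(f"{deprojected_speed(80.0, 80.0):.0f}")   # ~460 km/s, i.e., ~500
```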
§ CONCLUSIONS AND IMPLICATIONS
Using high-spatial-resolution ALMA data, we study the spatial and kinematic structure of the compact obscured galactic nucleus of IRAS 17578-0400, resolving scales down to a few parsecs for the first time. Combining all the spatial and kinematic information, we present in Fig. <ref> a coherent model of the nature of the CON. The key kinematic structure is dominated by a flared, rotating, infalling disk about 60 pc in size, which transports material from the outer region toward the inner 4 pc, where it reaches the centrifugal barrier; at this radius we also see the dusty torus in the continuum image. This rotating disk is also likely driving the CMO launched on both sides along the minor axis. Such a structure might be a key driver of feedback processes, transferring gas into the central region and feeding both the nuclear starburst and the SMBH, thereby linking the growth of the two.
The authors acknowledge the support from the ERC Advanced Grant 789410.
[Aalto et al.(2015)]2015A A...584A..42A Aalto, S., and 19 colleagues 2015. , 584, A42.
[Aalto et al.(2020)]2020A A...640A.104A Aalto, S., and 16 colleagues 2020. , 640, A104.
[Alves et al.(2017)]2017A A...603L...3A Alves, F. O., J. M. Girart, and 6 colleagues 2017. 603, L3.
[Barcos-Muñoz et al.(2018)]2018ApJ...853L..28B Barcos-Muñoz, L., S. Aalto, and 7 colleagues 2018. , 853, L28.
[Behar et al.(2015)]2015MNRAS.451..517B Behar, E., R. D. Baldi, A. Laor, A. Horesh, J. Stevens, and T. Tzioumis 2015. , 451, 517-526.
[Falstad et al.(2021)]2021A A...649A.105F Falstad, N., and 30 colleagues 2021. , 649, A105.
[Gorski et al.(2023)]2023A A...670A..70G Gorski, M. D., S. Aalto, S. König, and 7 colleagues 2023. , 670, A70.
[González-Alfonso and Sakamoto(2019)]2019ApJ...882..153G González-Alfonso, E. and K. Sakamoto 2019. , 882, 153.
[Sakamoto et al.(2013)]2013ApJ...764...42S Sakamoto, K., S. Aalto, and 5 colleagues 2013. , 764, 42.
[Sakai et al.(2014)]2014Natur.507...78S Sakai, N., and 16 colleagues 2014. , 507, 78-80.
[Scoville et al.(2017)]2017ApJ...836...66S Scoville, N., and 22 colleagues 2017. , 836, 66.
[Song et al.(2022)]2022ApJ...940...52S Song, Y., and 25 colleagues 2022. , 940, 52.
|
http://arxiv.org/abs/2307.04955v1 | 20230711011529 | Joint Radio Frequency Fingerprints Identification via Multi-antenna Receiver | ["Xiaofang Chen", "Wenbo Xu", "Yue Wang"] | eess.SP | ["eess.SP"] |
Joint Radio Frequency Fingerprints Identification via Multi-antenna Receiver
Xiaofang Chen, Student Member, IEEE,
Wenbo Xu, Member, IEEE,
and Yue Wang, Senior Member, IEEE
==================================================================================================================================================
In the Internet of Things (IoT), radio frequency fingerprint (RFF) technology has been widely used for passive security authentication to identify specific emitters. However, few works have taken advantage of independent oscillator distortions at the receiver side, and no work has yet considered filtering out receiver distortions. In this paper, we investigate RFF identification (RFFI) in the presence of unknown receiver distortions, where the phase noise caused by each antenna oscillator is independent.
Three RFF schemes are proposed according to the number of receiving antennas. When the number is small, the Mutual Information Weighting Scheme (MIWS) is developed, which computes a weighted vote over the RFFI results at the individual antennas; when the number is moderate, the Distortions Filtering Scheme (DFS) is developed, which filters out the channel noise and receiver distortions; when the number is large enough, the Group-Distortions Filtering and Weighting Scheme (GDFWS) is developed, which integrates the advantages of MIWS and DFS. Furthermore, the ability of DFS to filter out channel noise and receiver distortions is theoretically analyzed at a specific confidence level.
Experiments are provided when both channel noise and receiver distortions exist, which verify the effectiveness and robustness of the proposed schemes.
Emitter distortions, multiple independent oscillators, mutual information (MI), radio frequency fingerprinting identification (RFFI), receiver distortions.
§ INTRODUCTION
The number of IoT access devices deployed in practical systems is rising quickly as a result of the expansion of IoT application scenarios, including the automotive industry <cit.>, healthcare <cit.>, smart living <cit.>, etc. According to the International Data Corporation, approximately 40 billion IoT devices will be available globally by 2025. As large numbers of IoT devices enter everyday life, network security has emerged as a growing public concern worldwide <cit.>.
Traditional cryptography-based algorithms are used in most existing wireless communication systems to achieve secure authentication through upper-layer mechanisms <cit.>. However, these algorithms suffer from limitations such as assumptions of bounded computational power, susceptibility to replay attacks, high communication overhead, and complexity <cit.>. In contrast, radio frequency fingerprinting (RFF), a very promising non-cryptographic authentication technology, has recently gained a lot of research interest because of its information-theoretic security, low complexity, and high compatibility.
The concept of RFF was first introduced in 2003 <cit.>. It extracts the inherent, stable, and unique fingerprints of different emitters to distinguish their physical layer properties <cit.>.
Such fingerprints exist due to unavoidable accuracy errors and randomness in the device production process <cit.>, which present as unintentional modulation at the emitter side, causing minor signal distortions that are difficult to imitate.
Although this unintentional modulation is not conducive to the demodulation of the signal, it provides the basis for RFF identification (RFFI): extracting unique, stable, and intrinsic radio frequency (RF) features to accomplish specific emitter identification.
Current RF features can be categorized into transient and steady-state features <cit.>. Although some scholars have confirmed the feasibility of transient features <cit.>, it is difficult to determine the starting point of the extremely short transient signal accurately, which prevents complete feature extraction. Therefore, much research focuses on feature extraction from the steady-state signal segment. For example, Q. Li et al. adopt the self-phase optical function to optimize variational mode decomposition (VMD) and suppress modal aliasing after signal decomposition <cit.>. Y. Li et al. extract features through entropy information and spectral feature methods <cit.>.
In addition to RF feature extraction, RFFI involves two further steps: signal pre-processing and signal classification. Signal pre-processing, e.g., transformation <cit.>, data cleaning <cit.>, etc., is used to improve the distinguishability of the subsequently extracted features among different emitters.
In terms of signal classification, some traditional classifiers, such as K-means, support vector machine (SVM), and neural networks <cit.>, are commonly adopted to classify the extracted RF features.
Although the above-mentioned works have studied various methods for each step of RFFI, few of them have taken receiver distortions into account. Such distortions unavoidably affect the accurate extraction of emitter fingerprints and thus degrade the performance of RFFI <cit.>. It has been suggested to add extra hardware to compensate for the distortions at the receiving side <cit.>, but this might introduce additional distortions that cannot be further characterized.
Taking into account the impacts of receiver distortions and complemented hardware, B. He et al. suggest that the performance of RFFI can be enhanced by utilizing the diversity gain of multiple received versions <cit.>.
Therefore, a configuration of multiple receiving antennas is expected to obtain similar benefits.
On the other hand, multiple-input multiple-output technology is indispensable in 5G communication.
It is pointed out in <cit.> that large and heterogeneous antenna systems, in which each antenna is equipped with a separate oscillator whose distortions are independent and identically distributed, will be necessary in the future.
Thus, the independent oscillators impart independent phase noise to the signal received at each antenna.
To date, no work has exploited this independent and identically distributed property in RFFI studies.
Considering the above two facts, when multiple receiving antennas are configured, we propose three schemes to enhance the robustness and recognition accuracy of RFF.
These three schemes are suited, respectively, to scenarios with different numbers of receiving antennas.
To begin with, the Mutual Information Weighting Scheme (MIWS) is proposed for the case where the number of receiving antennas is small. MIWS is a weighting algorithm that performs a weighted vote over the RFFI result at each antenna, with weights estimated from the mutual information (MI) between the emitter and the received signal at each antenna.
Then, when the number of receiving antennas is moderate, the Distortions Filtering Scheme (DFS) is proposed to filter out the channel noise and receiver distortions by exploiting the independent and identically distributed property of the received signals.
Further, the Group-Distortions Filtering and Weighting Scheme (GDFWS) is proposed to address the performance saturation of DFS when the number of receiving antennas is large.
Finally, we use absolute accuracy and confidence level as metrics of filtering ability to theoretically derive the minimum number of receiving antennas required for DFS to reach a given performance.
Thereby, the specific scenario, in terms of the number of receiving antennas, that is applicable for each scheme is derived.
The contributions of our work can be summarized as follows.
1) Firstly, when the number of receiving antennas is small, the MIWS scheme is proposed. It utilizes the MI between the transmitted signal and each received signal to measure the quality of the latter. The weights of the signals at each antenna are then calculated accordingly to obtain the weighted voting of the RFFI results.
2) Secondly, when the number of receiving antennas is moderate, the DFS scheme is proposed to deal with channel noise and receiver distortions. To our knowledge, this is the first attempt in the literature to filter out receiver distortions.
3) Thirdly, when the number of receiving antennas is large, the GDFWS scheme is proposed, which enjoys the advantages of both DFS and MIWS. GDFWS uniformly divides all the antennas into groups, filters out channel noise and receiver distortions within each group using DFS to obtain robust RFFI results, and then obtains the weighted voting result of all groups by MIWS.
4) Finally, based on the absolute accuracy and confidence level metrics, we theoretically derive the ability of DFS to filter out negative factors and thereby determine its application scenario. The results simultaneously indicate the specific application scenarios of the other two schemes.
The remainder of this paper is organized as follows. Section II briefly reviews the signal distortion model and generalizes the uplink multi-antenna received RFF system model. Three RFFI schemes with multi-antenna receive are described in Section III. In Section IV, we theoretically analyze the impact of the number of receiving antennas on the performance of DFS. Section V shows the results of the experiments, followed by the conclusion in Section VI.
§ BACKGROUND AND SYSTEM MODEL
In this section, we first describe the emitter distortion, channel, and receiver distortion models, and then the uplink multi-antenna received RFF system model is established.
§.§ Emitter distortion model
Fig. <ref> depicts a typical end-to-end transceiver link and shows the source of both the emitter and receiver distortions highlighted with red dashed lines. As shown in this figure, the distortions experienced by the transmitted signal at the emitter include filter distortions, I/Q imbalance, spurious tones, and power amplifier nonlinearities. With reference to <cit.> and <cit.>, the specific mathematical models for these distortions are given as follows.
1) Transmit shaping filter distortion:
We denote the nth transmitted symbol after constellation mapping as S_n, and the symbol interval is T_s. Subsequently, S_n passes through the transmit shaping filter. Considering the inevitable filter distortions due to the limited precision of the manufacturing process, the actual transmit shaping filter is written as
g̃_t(t)=g_t(t)⊗υ(t),
where ⊗ stands for convolution, g_t(t) is the ideal transmit shaping filter, and υ(t) denotes the filter distortion. The Fourier transform of υ(t) is as follows:
Υ(f)=A_Υ(f)e^jΦ_Υ(f),
where A_Υ(f) and Φ_Υ(f) denote the amplitude distortion and phase distortion of the filter, respectively. Current literature generally uses the second-order Fourier series model to characterize these two distortions <cit.>, such that
A_Υ(f)=ρ_0+ρ_1cos(2πf/T_A),
Φ_Υ(f)=2π q_0f+q_1sin(2πf/T_Φ),
where ρ_i, q_i, i=0,1, T_A, T_Φ are the parameters of the Fourier series. With the filter in (<ref>), the transmission symbols after filter shaping can be written as
s(t)=∑_n=-∞^∞g̃_t(t-nT_s)S_n.
2) I/Q distortion:
We denote the in-phase and quadrature components of s(t) in (<ref>) by s_I(t) and s_Q(t), respectively. The imbalance between s_I(t) and s_Q(t) caused by the modulator is called I/Q distortion, which is mainly manifested as a gain mismatch and quadrature error, then the signal in (<ref>) changes into
x(t) = G_I s_I(t)e^j(2π ft+ζ/2) + G_Q s_Q(t)e^j(2π ft-ζ/2) = α s(t)e^j2π ft + β s^*(t)e^j2π ft,
where G_I and G_Q represent the gain mismatches of these two components, ζ denotes the quadrature error, and (·)^* stands for conjugate operation. To facilitate the following discussion, we define
α = (1/2)(G+1)cos(ζ/2) + (j/2)(G-1)sin(ζ/2),
β = (1/2)(G-1)cos(ζ/2) + (j/2)(G+1)sin(ζ/2),
where G = G_I/G_Q.
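To make the model concrete, the following sketch applies the gain-mismatch/quadrature-error distortion to a complex baseband signal via the α and β coefficients above (the carrier term is omitted); the numerical values of G and ζ are illustrative only.

```python
import numpy as np

def iq_imbalance(s, G=1.05, zeta=np.deg2rad(2.0)):
    """Apply I/Q distortion in complex baseband: x = alpha*s + beta*conj(s),
    with alpha and beta as defined above."""
    alpha = 0.5 * (G + 1) * np.cos(zeta / 2) + 0.5j * (G - 1) * np.sin(zeta / 2)
    beta = 0.5 * (G - 1) * np.cos(zeta / 2) + 0.5j * (G + 1) * np.sin(zeta / 2)
    return alpha * s + beta * np.conj(s)
```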
3) Spurious tone:
Affected by oscillators and other active devices, DC offset commonly exists in the signal. The presence of DC offset will result in harmonic components, which we refer to as spurious tone.
Considering the impact of spurious tone, the signal in (<ref>) becomes
x^(1)(t)=x(t)+∑_i=1^Cc_ie^j2π (f+f_ζ,i)t,
where C is the number of harmonic components, c_i and f_ζ,i are the amplitude and frequency offset of the ith harmonic component, respectively. It is worth noting that when f_ζ, i=0 and c_i≠ 0, the ith harmonic component is also known as carrier leakage.
4) Power amplifier nonlinearities:
The nonlinearity of the power amplifier causes distortions in both the amplitude and phase of the signal. We express these nonlinear distortions by the Taylor series <cit.>. Considering a Taylor series of order B, the distorted signal in (<ref>) further becomes
x^(2)(t)=∑_i=0^Bb_i(x^(1)(t))^2i+1,
where b_i is the ith coefficient of the Taylor polynomial.
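A sketch of these two distortion stages in complex baseband follows directly from the equations above; the tone list and Taylor coefficients below are illustrative placeholders, not measured device values.

```python
import numpy as np

def add_spurious_tones(x, t, f, tones=((0.01, 0.0), (0.005, 1e3))):
    """Add harmonic components c_i * exp(j*2*pi*(f + f_zeta_i)*t);
    a tone with f_zeta = 0 corresponds to carrier leakage."""
    for c_i, f_zeta in tones:
        x = x + c_i * np.exp(1j * 2 * np.pi * (f + f_zeta) * t)
    return x

def pa_nonlinearity(x, b=(1.0, -0.05, 0.002)):
    """Odd-order Taylor model of the power amplifier: sum_i b_i * x^(2i+1)."""
    return sum(b_i * x ** (2 * i + 1) for i, b_i in enumerate(b))
```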
§.§ Channel and receiver distortion models
In this subsection, we consider the case of single-antenna reception for simplicity. This discussion can easily be extended to multi-antenna reception, as described in detail in the next subsection. The channel attenuation is denoted by h(t) and the additive white Gaussian noise (AWGN) is defined as w(t); then the signal received by the antenna is
z(t)=h(t)x^(2)(t)+w(t).
In terms of receiver distortions, we concentrate on the phase noise caused by the oscillator, as well as the sampling jitter and quantization error caused by the ADC, which have a greater impact on the received signal than other hardware modules <cit.>. These receiver distortions are modeled as follows.
1) Phase noise:
Assume a phase-locked loop (PLL) is used at the receiver side for phase synchronization. As a typical signal frequency tracker, a PLL has the advantages of high output stability and a continuously adjustable phase. However, it inevitably produces phase noise, under the impact of which the output signal of the PLL is given as:
y^(1)(t)=h(t)x^(2)(t)e^-j(2π f^' t+θ(t))+ŵ(t)e^-j(2π f^' t),
where
ŵ(t)=w(t)e^-jθ(t),
f^' is the local oscillator frequency and θ(t) denotes its phase noise. Similar to most studies, e.g., <cit.>, we model the phase noise as a Wiener process as follows:
θ(t) = (1/(2πχ)) dθ(t)/dt + c(t),
where χ is the 3 dB bandwidth of the phase noise power spectrum and c(t) is the noise obeying the standard Gaussian distribution.
2) Sampling jitter:
Sampling jitter means the deviation of the sampling point from the optimal position when the signal is downsampled by the ADC. In the presence of sampling jitter, the signal in (<ref>) changes into
y^(2)(n)=y^(1)(nT+δ(n)T),
where n denotes the nth sampling point, T is the sampling period and T≪ T_s, and δ(n) denotes the relative sampling jitter, which is a random process, and |δ(n)|≪ 1.
3) Quantization error:
The signal is quantized after sampling, and the quantization error is usually modeled as additive noise in the case of uniform quantization. The quantized version of (<ref>) is written as:
y(n)=y^(2)(n)+△(n),
where △(n) denotes the quantization error at the nth sampling point. If quantization accuracy is ϵ and the dynamic range is [-V,V], then △ (n) obeys a uniform distribution within the interval [-2^-ϵV,2^-ϵV] with a variance of 2^-2ϵV^2/3.
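For illustration, a uniform quantizer consistent with this error model (applied separately to the I and Q components) can be sketched as follows; the values of ϵ and V below are placeholders.

```python
import numpy as np

def quantize(y, eps=10, V=1.0):
    """Uniform quantization over [-V, V]; the error is uniform on
    [-2^-eps * V, 2^-eps * V], i.e., variance 2^(-2*eps) * V^2 / 3."""
    step = 2.0 ** (1 - eps) * V  # step width, twice the maximum error
    q = lambda u: step * np.round(np.clip(u, -V, V) / step)
    y = np.asarray(y, dtype=complex)
    return q(y.real) + 1j * q(y.imag)
```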
To summarize, based on (<ref>) to (<ref>), when the effects of the emitter distortions, the channel, and the receiver distortions are all considered, the down-converted signal at the receiver is expressed as (<ref>), written at the top of the next page, where the second equation is the simplified form of (<ref>) since sampling jitter does not affect the distribution of ŵ(nT).
§.§ RFF system model with multiple receiving antennas
Based on the single-antenna reception model in the previous subsection, this subsection extends it to a multi-antenna reception scenario. Assume an uplink multi-antenna RFF system model consisting of M single-antenna IoT devices and an N-antenna receiver, each antenna of which is equipped with an independent oscillator.
Suppose that the multiple emitters adopt orthogonal access technology to communicate with the receiver; thus, this paper does not consider interference among the emitters.
According to the previous subsections, the down-converted signal at the ith antenna is established as:
y_i(k,n)=h_i(k,n)e^-jθ_i(k,n)x̂_m(k,n)+△_i(k,n)+ŵ_i(k,n),
where
h_i(k,n)=h_i(k,nT+δ(k,n)T),
x̂_m(k,n)=e^-j2π f^' Tδ(k,n)x_m^(2)(k,nT+δ(k,n)T),
(k,n) represents the nth sample in the kth frame. h_i(k,n) is the channel fading coefficient between the emitter and the ith antenna of the receiver, θ_i(k,n) represents the phase noise of the ith antenna at the receiver side, δ(k,n) denotes the sampling jitter, △_i(k,n) indicates the quantization error at the ith antenna, ŵ_i(k,n) is the AWGN at the ith antenna, and x_m^(2)(k,nT+δ(k,n)T) takes the form of (<ref>) for the mth emitter.
As mentioned in the previous subsection, the antenna oscillator inevitably generates phase noise. Fortunately, it is reasonable in this paper to assume that the phase noise remains constant within a single frame while varying frame by frame <cit.>.
Meanwhile, we assume a slow fading channel, so h_i(k,n) remains constant within a signal frame.
Based on these two assumptions, we define h_i(k)≜ h_i(k,n) and θ_i(k)≜θ_i(k, n) for any n, and
the model in (<ref>) is further simplified as
y_i(k,n)=h_i(k)e^-jθ_i(k)x̂_m(k,n)+△_i(k,n)+ŵ_i(k,n).
§ RFFI SCHEMES WITH MULTI-ANTENNA RECEIVER
In this section, we propose three schemes to realize RFFI, as summarized in Table I. These three schemes are applicable to scenarios with different numbers of receiving antennas. As shown in Table I, MIWS is proposed for the case where the number of receiving antennas is small. Then, when the number of receiving antennas is sufficient to derive the statistical characteristics of the received signals, we propose DFS to filter out channel noise and receiver distortions. Finally, to address the performance saturation that DFS encounters when the number of antennas is too large, GDFWS is proposed.
Fig. <ref> depicts the framework of RFFI with these three schemes, where only one scheme is activated by the control signal according to the number of configured antennas, and the modules that we propose are highlighted in yellow.
§.§ Mutual information weighting scheme
The MI between the emitted signal and the received signal reflects their information similarity: the larger the MI, the less the emitted signal is affected by channel noise and receiver distortions. Based on this fact, we first estimate the MI between the emitted signal and the received signal at each antenna. Then the weight of the received signal at each antenna is set to be proportional to its corresponding MI. Finally, the RFFI result is obtained as the weighted vote of all classification results predicted from the received signals.
In this subsection, we calculate the MI between the emitter signal and the received signal by taking one receiving antenna as an illustration; the derivation extends easily to the other antennas. For simplicity, the subscript i of the ith antenna is omitted in the subsequent discussion.
Based on the previous analysis, we know that the emitter signal x^(2)(t) undergoes channel fading h(t), AWGN w(t), and receiver distortions before completing the classification.
To facilitate subsequent analysis, we define
g(t)=h(t)x^(2)(t).
Firstly, with the definition in (<ref>), we transform the signal in (<ref>) into the frequency domain and get
Z(f)=G(f)+𝐖(f),
where Z(f), 𝐆(f), and 𝐖(f) are the expressions in frequency domain of z(t), g(t), and w(t), respectively.
According to <cit.>, the MI between 𝐆(f) and Z(f) is calculated as
ℐ(Z(f);G(f)) =ℋ(Z(f))-ℋ(Z(f)|G(f))
=ℋ(Z(f))-ℋ(W(f))
=1/2ln(2π e(σ_g^2+σ_w^2))-1/2ln(2π eσ_w^2)
=1/2ln(1+σ_g^2/σ_w^2)
,
where σ_g^2 and σ_w^2 are the variance of g(t) and w(t), respectively. The function ℋ(·) implements entropy calculation.
Consider a special case of (<ref>), i.e., no receiver distortion d(t) exists. In this case, σ_y^2=σ_z^2,
where σ_y^2 represents the variance of y(t). Then this special case of the above equation is
ℐ_s(Z(f);G(f)) =ℐ_s(Y(f);G(f))
=1/2ln(2π eσ_y^2)-1/2ln(2π eσ_w^2)
=1/2ln(σ_y^2/σ_w^2)
.
Note that (<ref>) is only an ideal case, which does not exist in practical systems. To measure the quality of the received signal, we calculate the difference between (<ref>) and (<ref>) as follows,
△ I =ℐ_s(Z(f);G(f))-ℐ(Z(f);G(f))
=1/2ln(σ_y^2/(σ_w^2+σ_g^2))
=1/2ln(σ_y^2/(σ_w^2+σ_x^(2)^2σ_h^2))
,
where σ_h^2 and σ_x^(2)^2 denote the variance of h(t) and x^(2)(t) in (<ref>), respectively. In practical applications, σ_w^2 and σ_h^2 can be derived by signal-to-noise ratio (SNR) estimation techniques <cit.> and channel coefficient estimation techniques <cit.>. Additionally, σ_x^(2)^2 can also be estimated based on the received pilots.
Obviously, a smaller △ I indicates fewer distortions at the receiver. Based on this fact, we define the weight of y_i(t) as,
ω_i=(1/△ I_i)/(∑_j=1^N 1/△ I_j),
where △ I_i represents the MI difference at the ith receiving antenna in the form of (<ref>).
We use s_i to denote the classification result of y_i(t). Then the weighted voting of all results is implemented based on their corresponding weights in (<ref>), and the final RFFI result can be obtained.
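A compact sketch of the MIWS weighting and voting steps follows (the variable names are ours; in practice the variances would come from the SNR and channel estimators mentioned above):

import numpy as np

# MIWS sketch: Delta_I_i from the MI gap, then omega_i proportional to 1/Delta_I_i.
def miws_weights(var_y, var_w, var_h, var_x2):
    # Delta_I_i = 0.5 * ln( var_y_i / (var_w_i + var_x2 * var_h_i) ); nonnegative
    # in this model, since receiver distortions only add variance.
    dI = 0.5 * np.log(var_y / (var_w + var_x2 * var_h))
    inv = 1.0 / dI
    return inv / inv.sum()                      # the omega_i above

def weighted_vote(labels, weights):
    # Sum the weights of the antennas voting for each class; return the argmax.
    classes = np.unique(labels)
    scores = np.array([weights[labels == c].sum() for c in classes])
    return classes[int(np.argmax(scores))]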
§.§ Distortions filtering scheme
Although the MIWS described in the previous subsection reduces the negative impact of channel noise and receiver distortions through diversity gain, a more direct approach is to eliminate these effects altogether.
Thus, this subsection proposes DFS to filter out the channel noise and receiver distortions when the number of receiving antennas is sufficiently large.
If not specifically mentioned, the following derivations are given for single-frame signal, thus we omit the label k.
By defining ϕ_i=h_ie^-jθ_i, the overall system model in (<ref>) is rewritten as
y_i(n)=ϕ_ix̂_m(n)+△_i(n)+ŵ_i(n).
To improve the quality of y_i(n), DFS attempts to filter out the adverse factors, i.e., ϕ_i, △_i(n), and ŵ_i(n).
First, by considering all the sampling points of all antennas, (<ref>) is converted into the matrix form
Y =Φx̂^T+Δ+Ŵ
=Ξ+Δ+Ŵ,
where
Φ=[ ϕ_1 ϕ_2 ⋯ ϕ_N ]
^T∈ℂ^N×1,
x̂=[ x̂_m(1) x̂_m(2)⋯ x̂_m(L) ]
^T∈ℂ^L×1,
Ξ=Φx̂^T∈ℂ^N× L,
Δ∈ℂ^N× L with the nth element of the ith row being △_i(n), Ŵ∈ℂ^N× L with the nth element of the ith row being ŵ_i(n), and L is the number of sampling points in a frame.
The basic idea of DFS is to recover the matrix Ξ from Y, and then recover x̂ based on its relationship with Ξ given in (<ref>). In doing so, the impacts of Φ, Δ, and Ŵ are expected to be eliminated.
To better illustrate the statistical properties of the received signals, the matrix Y is rewritten as (<ref>) shown at the top of the next page,
where
v_ij=(ŵ_i(j)+△_i(j))/ϕ_i.
It should be noted that △_i(j) and ŵ_i(j) follow the uniform and Gaussian distribution, respectively. Both of them have a mean of 0. Hence, the mean of v_ij is 0. Based on this property, we calculate the mean of each row of the matrix in (<ref>) to obtain
ℰ(Y_i,.)=ϕ_iℰ(x̂^T+v_i,.)=ϕ_iℰ(x̂),
where Y_i,. denotes the ith row of the matrix Y in (<ref>),
v_i,.=
[ v_i1 v_i2 ⋯ v_iL ],
and ℰ(u) calculates the mean of the vector u.
Based on (<ref>), it is easy to obtain
ℰ(Y_i,.)/ℰ(Y_j,.)=ϕ_i/ϕ_j.
Next, we reconstruct the lth row of Ξ from Y, the processes of which apply to the other rows of Ξ as well.
By multiplying the jth row of (<ref>) by ϕ_l/ϕ_j with j=1,2,...,N, which has been obtained by (<ref>) and (<ref>), the matrix in (<ref>) becomes (<ref>).
With the derived Y^(l) in (<ref>), by calculating its column mean, we get
ℰ_c(Y^(l))=ϕ_l
[ x̂_m(1) x̂_m(2) ⋯ x̂_m(L) ],
where ℰ_c(U) calculates the column mean of the matrix U. It is worth noting that ℰ_c(Y^(l)) is exactly the lth row of Ξ, which is denoted as Ξ_l,·.
Following the above procedures, we are able to recover all the rows of Ξ by varying the value of l from 1 to N in order.
By observing Ξ, we find that directly separating it into Φ and x̂ without any prior knowledge of x̂ is impossible. Fortunately, by normalizing with respect to the first symbol x̂_m(1) of the frame, we have
Ξ=Φx̂^T=x̂_m(1)Φx^T,
where
x =x̂/x̂_m(1)
=
[ 1 x̂_m(2)/x̂_m(1) ⋯ x̂_m(L)/x̂_m(1) ]
=1/N∑_i=1^NΞ_i,./Ξ_i,1.
It is obvious that x is highly correlated with x̂.
Since x retains all the information of x̂, it is reasonable to use x rather than x̂ as the signal for the subsequent RF feature extraction and classification without affecting the performance of RFFI.
We find that in (<ref>) and (<ref>) the calculation of the mean is implemented. However, in practical scenarios, it can only be approximated by averaging. To ensure the effect of filtering out channel noise and receiver distortions, L and N should be large enough. Generally, L is sufficiently large in practical applications. Therefore, the filtering ability of DFS mainly depends on the value of N, and their relationship will be analyzed in detail in Section IV.
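The DFS recovery itself fits in a few lines of numpy; the sketch below follows the equations above on the N×L frame matrix Y from the earlier sketch, and it assumes a nonzero frame mean (e.g., guaranteed by pilots), since the row-mean ratios estimate ϕ_l/ϕ_j:

import numpy as np

# DFS sketch: estimate phi_l / phi_j from row means, align all rows to
# antenna l, average columns to get row l of Xi, then normalize by the
# first symbol and average over l.
def dfs_recover(Y):
    r = Y.mean(axis=1)                 # E(Y_i,.) = phi_i * E(xhat); must be nonzero
    N = Y.shape[0]
    Xi = np.empty_like(Y)
    for l in range(N):
        Yl = (r[l] / r)[:, None] * Y   # row j scaled by phi_l / phi_j
        Xi[l] = Yl.mean(axis=0)        # column mean approximates phi_l * xhat
    return (Xi / Xi[:, [0]]).mean(axis=0)   # estimate of xhat / xhat(1)

x_rec = dfs_recover(Y)                 # proportional version of xhat, as in the text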
§.§ Group-distortions filtering and weighting scheme
The previous analysis has suggested that the larger N is, the smaller the difference between the averaging result and the actual mean of 0.
When N is sufficiently large, this difference becomes very small, and further increasing N yields no significant performance enhancement because the difference has converged. We refer to this phenomenon as the performance saturation of DFS.
To alleviate this problem when the number of receiving antennas is large, this subsection proposes GDFWS that divides all antennas into several groups to avoid the appearance of saturation phenomenon.
Fig. <ref> illustrates the overall structure of GDFWS, where signals received by all antennas are divided into four groups for illustration. In this figure, N_1=N/2, and N_2=N/2+1.
First, DFS is applied in each group to filter out channel noise and receiver distortions.
Then, the obtained x_i(t), i∈{1,2,3,4} are delivered to the feature extraction module and the weight calculation module, where the former extracts RF features that are subsequently fed into the classification module to obtain the classification results s (one per group), and the latter calculates their respective weights ω.
Finally, the classification result s and its corresponding weight ω are sent into the weighted voting module to obtain the final RFFI result.
Obviously, GDFWS enjoys the advantages of both MIWS and DFS in terms of diversity gain and adverse factors elimination. Meanwhile, the above structure can be easily extended to the case of multiple groups, where the number of antennas in each group is smaller than the one when DFS exhibits performance saturation.
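Combining the two previous sketches, GDFWS can be prototyped as follows; classify and the per-group weights are placeholders for the feature-extraction/classification pipeline:

import numpy as np

# GDFWS sketch: split antennas into groups, run DFS per group, classify each
# recovered signal, then fuse the labels by weighted voting.
def gdfws(Y, n_groups, classify, group_weights):
    groups = np.array_split(np.arange(Y.shape[0]), n_groups)
    labels = np.array([classify(dfs_recover(Y[g])) for g in groups])
    return weighted_vote(labels, group_weights)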
§ THEORETICAL ANALYSIS OF DFS AND APPLICABLE SCENARIO DISCUSSION
In this section, we theoretically analyze the ability of DFS to filter out channel noise and receiver distortions with varying numbers of receiving antennas. The metrics that we consider are confidence level and absolute accuracy. By revealing the relationship between N and these metrics, we obtain the conclusion about which scheme is more desirable for the case with different numbers of receiving antennas.
We consider the asymptotic case, where L→ +∞, and then study the approximation degree of using averaging operation instead of the actual mean, which reveals its dependence on N.
By averaging each column of (<ref>), the statistical average of the kth column is derived as:
ℰ(Y^(l)_.,k) =ϕ_l·ℰ([ x̂_m(k)+v_1k ⋯ x̂_m(k)+v_Nk ]^T)
=ϕ_lx̂_m(k)+τ_k,
where
τ_k=ℰ([ ϕ_lv_1k ϕ_lv_2k ⋯ ϕ_lv_Nk ]).
In the above equation, τ_k represents the difference between the average result and the mean 0. Therefore, the smaller τ_k is, the better the filtering effect of DFS on adverse factors, i.e., channel noise and receiver distortions.
For simplicity, we define
u_ik=ϕ_lv_ik
,
where i∈1,2, ... , N.
According to (<ref>), the u_ik in the above equation can be rewritten as
u_ik =ŵ_i(k)+△_i(k)
=w_i(k)e^-jθ_i(k)+△_i(k),
where the definition of each variable remains the same as in Section II.B. To further simplify, in the following analysis, we ignore k in the above equation, which reduces to
u_i=w_ie^-jθ_i+△_i.
To obtain the distribution of τ, i.e., τ_k in (<ref>), we first analyze the distribution of u_i. Referring to Section II.B, we know that △_i is uniformly distributed between [-2^-ϵV,2^-ϵV].
Suppose the number of quantization bits is 16, i.e., ϵ=16, and V=1, we have
△_i∼ U[-2^-16,2^-16].
The following discussions can be easily extended to the other cases of V and ϵ.
On the other hand, w_i obeys a Gaussian distribution with mean 0 and variance σ_w^2, and θ_i obeys a standard Gaussian distribution, so we easily obtain
w_ie^-jθ_i∼𝒩(0,σ_w^2).
According to (<ref>), we have
σ_w^2=σ_x^(2)^2σ_h^2/10^SNR/10,
where σ_x^(2)^2 and σ_h^2 are the same as defined in Section III.A. For better illustration, we assume that σ_x^(2)^2=1 and σ_h^2=1, thus the above equation is further simplified as
σ_w^2=10^-SNR/10.
When 0 dB≤ SNR≤ 30 dB, it is clear that
2^-16≪ 10^-3≤σ_w^2 ≤1,
thus
u_i=w_ie^-jθ_i+△_i≈ w_ie^-jθ_i∼𝒩(0,σ_w^2).
Since the average of the N samples u_i, i∈{1,2,...,N}, also follows a Gaussian distribution, τ in (<ref>) obeys a Gaussian distribution. Furthermore, based on the sampling properties of sample means <cit.>, it is known that
τ∼𝒩(0,σ_w^2/N).
Next, we discuss two performance metrics, i.e., confidence level α and absolute accuracy ξ. If τ is required to be less than ξ with a confidence level α, we have
P(|τ|<ξ)=∫_-ξ^ξ√(N)/√(2π)σ_we^-τ^2N/2σ_w^2 dτ=α.
Let τ̃=√(N)τ/(√(2)σ_w); then the above equation is rewritten as
P(|τ̃|<a)=∫_-a^a1/√(π)e^-τ̃^2 dτ̃=erf(a)=α,
where a=√(N)ξ/(√(2)σ_w), so that a=erf^-1(α). As a result, the mathematical relationship between the confidence level α, the absolute accuracy ξ, and the number of receiving antennas N is
ξ^2 =2[erf^-1(α)]^2σ_w^2/N
=2[erf^-1(α)]^2N^-110^-SNR/10.
Clearly, a smaller ξ indicates that DFS is more effective at filtering the adverse factors.
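A few lines reproduce the trend behind Tables II and III from this closed form (scipy's erfinv implements erf^-1; the values below are for illustration):

import numpy as np
from scipy.special import erfinv

# xi = sqrt(2) * erfinv(alpha) * sigma_w / sqrt(N), with sigma_w^2 = 10^(-SNR/10).
alpha, snr_db = 0.95, 15.0
sigma_w = 10.0 ** (-snr_db / 20.0)
for N in (4, 16, 64, 128, 256, 1024):
    xi = np.sqrt(2.0) * erfinv(alpha) * sigma_w / np.sqrt(N)
    print(N, xi)   # xi shrinks like 1/sqrt(N), flattening for large N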
To measure the advantage of DFS, we define the performance gain p such that
p=(ξ_1-ξ)/ξ_1=1-√(1/N),
where
ξ_1=√(2)erf^-1(α)σ_w is the value of ξ in (<ref>) for N=1.
Only when the gain p is larger than a threshold p_0, we regard that introducing DFS can bring benefits.
That is
p=1-√(1/N)>p_0.
In this paper, we set p_0=1/2, and get N>4 based on the above equation. It means that when N≤4, no obvious gain can be obtained when DFS is employed.
In such a scenario, MIWS serves as an alternative.
Table II presents the relationship between N and ξ in (<ref>) when confidence level α=0.95 and SNR=15dB.
From this table, we note that the decreasing rate of ξ, i.e., △ξ, slows down as N increases. When N>128, the decreasing rate of ξ is much slower than that when N ≤ 128.
This coincides with the expectation in the previous subsection, i.e., the performance of DFS will be saturated when N is large enough.
To avoid this saturation phenomenon, we use the GDFWS scheme in the cases with medium to high SNR to divide the antenna set into groups of no more than 128 antennas each, which avoids the saturation of DFS in each group.
Table III gives the relationship between N and ξ in low SNR, where α=0.95 and SNR=5dB.
It is noted that the saturation of DFS appears when N>2048, which is much larger than that in Table II. This means that DFS is more likely to saturate with a smaller N when SNR is higher. In practical applications, the number of receiving antennas is generally limited, i.e., it will not reach 2048, thus DFS is still preferable rather than GDFWS at low SNR.
§ EXPERIMENT AND DISCUSSION
In this section, some experiments are provided to verify the efficiency of the proposed schemes.
§.§ Experimental setting
Following the RFFI processing chain, we describe the settings of the simulation experiments from five aspects in turn: emitters, channel, receiver, RF feature extraction methods, and classifiers, the details of which are as follows.
1) The settings of emitters:
The RF signal is generated according to the emitter distortion model described in Section II. It should be noted that we assume the number of harmonic components in (<ref>) and the order of the Taylor series in (<ref>) are both 2. We consider 5 emitters with the distortion parameters provided in Table IV, where E and P are abbreviations for Emitters and Parameters, respectively, and T_1 to T_5 label emitters 1 to 5. The modulation mode of the RF signal is QPSK, with the oversampling factor T/T_s=10, the rate 1/T_s=10^6 Hz, and a signal center frequency of 1 GHz. A frame consists of 128 symbols, 32 of which carry pilots.
2) The settings of channel:
The AWGN channel is considered in our experiments, so the channel fading coefficients h_i(k) in (<ref>) are set to 1. Nonetheless, the following experimental conclusions also apply to the case where the channel coefficients are random.
3) The settings of the receiver:
The receiver distortions are generated according to the receiver distortion model described in Section II. The distortion parameters of sampling jitter and quantization errors are set as follows: δ(n)=0.003, V=1, ϵ=16. The parameter χ in (<ref>) for phase noise varies to show its influence on RFFI accuracy.
4) RF feature extraction methods:
Two classical RF feature extraction methods, i.e., least mean square (LMS)-based feature extraction <cit.> and intrinsic time-scale decomposition (ITD)-based feature extraction <cit.>, are used in our experiments.
The LMS-based feature extraction method updates its filter weights recursively based on some criteria until convergence, then uses its converged weight vector as features.
The ITD-based feature extraction method first decomposes the signal by ITD and then calculates the skewness and kurtosis of each decomposed signal as the feature vector.
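As an illustration of the first method, a generic complex LMS adaptive filter is sketched below; using the converged tap vector as the RF feature follows the cited approach, while the filter order, step size, and desired signal d (e.g., known pilots) are our assumptions:

import numpy as np

# Complex LMS: yhat(n) = w^H u(n), e(n) = d(n) - yhat(n), w <- w + mu * u(n) * conj(e(n)).
def lms_features(y, d, order=8, mu=0.01):
    w = np.zeros(order, dtype=complex)
    for n in range(order, len(y)):
        u = y[n - order:n][::-1]       # regression vector (most recent sample first)
        e = d[n] - np.vdot(w, u)       # np.vdot conjugates its first argument: w^H u
        w = w + mu * u * np.conj(e)    # stochastic-gradient update
    return w                           # converged weights serve as the feature vector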
5) Classifiers:
Currently, there have been many successful classifiers applied to RFFI. However, since classifiers are not the focus of our paper, we choose the typical multi-classification SVM for RFF classification.
In the following experiments, the number of training frames and testing frames for each emitter is 200 and 100, respectively. Each experiment result is obtained by averaging over 1000 trials. We use ORS to represent the original scheme without distortion filtering and the weighted voting operation, which is used as a benchmark for the proposed schemes.
§.§ Experimental results of MIWS
Fig. <ref> depicts the results of RFFI accuracy with varying SNR for MIWS, where five emitters are present, the ITD-based feature extraction method is adopted, and the receiver is equipped with four antennas with χ of 0.001, 0.01, 0.1, 1, respectively.
In this figure, UWS stands for the scheme of equally weighting the signals received by each antenna, while ORS i represents the scheme that directly uses the signals at the ith receiving antenna without any distortion filtering or weighting operation.
It is noteworthy that the results of each ORS in this figure reveal that the larger the χ, the lower the recognition accuracy, which indicates that the ITD-based feature extraction method is sensitive to the phase noise at the receiver.
Furthermore, the experimental results indicate that both MIWS and UWS outperform the ORS of each antenna, while the performance of MIWS is better than that of UWS, which highlights the benefit of setting weights according to MI.
Fig. <ref> depicts the results of RFFI accuracy with varying SNR for MIWS, where the LMS-based feature extraction method is employed, and the other settings are the same as those in Fig. <ref>.
Unlike the ORS with significant differences in Fig. <ref>, the ORS of each antenna maintains similar RFFI accuracy when χ varies in this figure. This observation suggests that the ITD-based feature extraction method is sensitive to phase noise at the receiver, whereas the LMS-based feature extraction method is robust to it. Therefore, there is no significant difference in the performance of MIWS and UWS when the LMS-based feature extraction method is applied.
We also note that both MIWS and UWS enjoy a 10% accuracy gain when compared with ORS, thereby confirming the benefit of weighting when multiple received versions are configured.
§.§ Experimental results of DFS
Fig. <ref> shows the RFFI results of DFS, where the LMS-based feature extraction method is adopted with M=5 and χ=0.01 for all receiver antennas.
As can be seen from this figure, the superiority of DFS over ORS becomes increasingly apparent as N increases.
Such phenomenon that DFS outperforms ORS in terms of RFFI accuracy demonstrates that DFS can effectively filter out channel noise and receiver distortions.
Moreover, there are two noteworthy points: 1) when N=4, DFS performs worse than MIWS, suggesting that MIWS is more appropriate when N≤4;
2) when SNR=0dB, the performance gain of DFS over ORS is not obvious, whereas when SNR>15dB, such gain becomes more significant for different Ns, indicating that the level of the performance gain of DFS with respect to ORS is related to the SNR. These two observations are consistent with the statements in Section IV.
Fig. <ref> and Fig. <ref> show the RFFI accuracy versus receiving phase noise when using the ITD-based RF feature extraction method.
The former varies SNR with N=8, and the latter varies N with SNR=15dB.
As seen from these two figures, the larger the χ, the worse the RFFI accuracy of ORS. In contrast, DFS shows its stability and robustness under different χ, which indicates that DFS filters out receiver phase noise effectively.
Fig. <ref> illustrates that the performance of DFS becomes better with the increasing number of receiving antennas, which remains consistent with the conclusion of Fig. <ref>.
§.§ Experimental results of GDFWS
Figure <ref> provides the performance of GDFWS in comparison with DFS when the LMS-based RF feature extraction method is employed.
The receiving antennas are evenly divided into four groups, and the other settings are consistent with those in Fig. <ref>.
When SNR≥10 dB and N=256 or N=512, GDFWS with weighted voting outperforms DFS.
However, when SNR<10 dB, performance saturation does not appear in DFS, thus its performance is comparable with that of GDFWS.
Moreover, when N=128, the overall performance of GDFWS is inferior to that of DFS, which suggests that GDFWS is unsuitable for scenarios where N≤128.
Overall, the results presented in this figure demonstrate the effectiveness of GDFWS with weighted voting in scenarios where N>128 and SNR≥10 dB when compared with DFS, the finding of which is consistent with the theoretical analysis presented in Section IV.
§ CONCLUSION
This paper investigates three RFFI schemes to cater to the different numbers of receiving antennas.
When the number is small, we propose MIWS that uses the weighted voting of intermediate classification results for RFFI.
For a moderate quantity of receiving antennas, DFS is proposed to perform statistical averaging to filter out channel noise and receiver distortions.
If a large number of receiving antennas is available, GDFWS, which enjoys the advantages of both MIWS and DFS,
is developed to solve the performance saturation problem of DFS and improve classification accuracy.
We further study the impact of the number of receiving antennas on DFS performance, and provide guidelines on selecting appropriate schemes for different scenarios in the following aspects:
1) When the number of antennas is N ≤ 4, MIWS is recommended.
2) When the number of antennas is 4<N≤128, DFS is the best choice.
3) The performance saturation in DFS occurs commonly when SNR is high. Hence, when N > 128 and SNR≥10dB, GDFWS is preferable.
|
http://arxiv.org/abs/2307.07281v1 | 20230714112102 | Cloud Detection in Multispectral Satellite Images Using Support Vector Machines With Quantum Kernels | [
"Artur Miroszewski",
"Jakub Mielczarek",
"Filip Szczepanek",
"Grzegorz Czelusta",
"Bartosz Grabowski",
"Bertrand Le Saux",
"Jakub Nalepa"
] | cs.CV | [
"cs.CV",
"quant-ph"
] |
Artur Miroszewski^1, Jakub Mielczarek^1, Filip Szczepanek^1, Grzegorz Czelusta^1,
Bartosz Grabowski^2, Bertrand Le Saux^3, and Jakub Nalepa^2,4
^1Jagiellonian University, prof. S. Łojasiewicza 11, 30-348 Cracow, Poland
^2KP Labs, Konarskiego 18C, 44-100 Gliwice, Poland
^3European Space Agency, Largo Galileo Galilei 1, 00044 Frascati, Italy
^4Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Cloud Detection in Multispectral Satellite Images Using Support Vector Machines With Quantum Kernels
Elisabet Burjons^1
Fabian Frei^2
Matthias Gehnen^3
Henri Lotze^3
Daniel Mock^3
Peter Rossmanith^3
^1 York University, Canada
^2 ETH Zürich, Switzerland
^3 RWTH Aachen University, Germany
==================================================================================================================================================
Support vector machines (SVMs) are a well-established classifier effectively deployed in an array of pattern recognition and classification tasks.
In this work, we consider extending classic SVMs with quantum kernels and applying them to satellite data analysis.
The design and implementation of SVMs with quantum kernels (hybrid SVMs) is presented. It consists of the Quantum Kernel Estimation (QKE) procedure combined with a classic SVM training routine.
The pixel data are mapped to the Hilbert space using ZZ-feature maps acting on the parameterized ansatz state. The parameters are optimized to maximize the kernel target alignment.
We approach the problem of cloud detection in satellite image data, which is one of the pivotal steps in both on-the-ground and on-board satellite image analysis processing chains. The experiments performed over the benchmark Landsat-8 multispectral dataset revealed that the simulated hybrid SVM successfully classifies satellite images with accuracy on par with classic SVMs.
§ INTRODUCTION
Satellite imaging plays an increasingly important role in various aspects of human activity. The spectrum of applications ranges from cartographic purposes <cit.> through meteorology <cit.>, ecology, and agronomy <cit.> to security and urban monitoring <cit.>. Consequently, dozens of terabytes of raw imaging data are generated daily from satellite constellations, such as the one built within the European Copernicus Programme. The large volume of multi- or hyperspectral images, which capture the detailed characteristics of the scanned materials, makes them difficult to transfer, store, and ultimately analyze. Therefore, their reduction through the extraction of useful information is a critical issue in real-world applications. An important step in the data analysis chain of optical satellite data is the identification of clouds. The interest is two-fold: on the one hand, such cloudy regions can be removed from further processing, as the objects of interest are likely to be obscured. On the other hand, efficient detection of cloud cover on the Earth surface is important in meteorological and climate research <cit.>.
Since the reduction is performed on a huge amount of raw data, the efficiency of this process is a key factor in practice. Therefore, it is reasonable to search for new methods for analyzing such huge datasets, improving image data classification into cloudy and clear areas.
In this paper, we investigate the classification performance of classic SVMs exploiting the radial basis function kernels, and those which benefit from quantum kernels (introduced in Section <ref>). There are theoretical arguments <cit.> that the proposed quantum kernels are #P-hard to evaluate on a classical computer. Therefore, if they provide an advantage in classification accuracy, this would constitute a strong use case for quantum computers.
Additionally, to get a deeper understanding of quantum machine learning mechanisms and show its usefulness in practice, it is pivotal to focus on widely adopted image data corresponding to real use cases.
Thus, we tackle the cloud detection task in satellite image data, which is one of the most important processing steps for such imagery.
Our experimental study performed over the benchmark multispectral image data acquired by the Landsat-8 satellite revealed that SVMs with quantum kernels offer a classification accuracy at least comparable to classic RBF kernel SVMs (Section <ref>).
§ MATERIALS AND METHODS
§.§ Data
We utilize satellite image data contained in the 38-Cloud dataset <cit.>.
The data consist of Landsat 8 scene images cropped into 384 × 384 pixel patches.
Each pixel has five values associated with it: intensity values in four spectral bands (blue: 450–515 nm, green: 520–600 nm, red: 630–680 nm, NIR: 845–885 nm) and a label corresponding to the fact whether a pixel contains a cloud or not. Therefore, the dimension of the data is m=4.
SVMs suffer from their high time and memory training complexity, which depend on the size of the training set. Since only a subset of all training vectors is annotated as support vectors during SVM training, we can effectively exploit only a subset of the most important examples <cit.>.
To find the best training data, two metrics for patches were introduced:
cloudiness 𝒞 (the ratio of cloud pixels to all pixels in the patch) and fill ℱ (the ratio of physical pixels in the patch, as patches contain scene margins). Balanced training sets are sampled from patches with properties ℱ = 100%, 40%≤𝒞≤ 60% by randomly selecting a fixed number of pixels.
§.§ Methods
In Fig. <ref>, we present a high-level flowchart of the proposed hybrid SVM procedure. First, we encode the data with a parameterized feature map consisting of a ZZ-feature map acting on a parameterized ansatz (see Fig. <ref>). Then we perform Quantum Kernel Estimation (QKE) and obtain a quantum kernel, whose target alignment is maximized with a classic optimization method. When the optimization is finished, the final quantum kernel is passed to a classic SVM.
§.§.§ Selected Feature Map
When considering the employment of quantum computation methods, a principal question that
quickly arises pertains to the way in which classic input data will be loaded into the quantum circuit.
In general, our aim will be to construct a unitary operator for each input datum x, such that applying it to the
initial quantum zero state will leave us with a specified representation of x, |ϕ(x)⟩.
This process is called quantum embedding, while any such map x ↦ |ϕ(x)⟩ is known as a quantum feature map.
Consider the unitary transformation
U_ϕ(x) = exp( i ∑_S⊆ [n]ϕ_S(x) ∏_i ∈ S P_i ),
being a general quantum circuit Pauli expansion of an n-qubit unitary transformation.
The index S describes the connectivities between different qubits: S ranges over the subsets of [n] of size k, for k∈{1, …, n}; P_i are the basic Pauli gates that act on the ith quantum register, and the data mapping functions are ϕ_{i}(x) = π x_i, ϕ_{i,j}(x) = π (1-x_i)(1-x_j). The number of qubits used can be identified with the dimension of the data, n=m.
Following <cit.>, we restrict the above unitary to k = 2 connectivities with single-qubit gates P_{a} = Z_a, two-qubit gates P_{b,c} = Z_b Z_c, a,b,c ∈{0,…, n-1}.
This transformation is called a ZZ feature map with one repetition. It is already #P-hard to calculate classically <cit.>, but shows no computational advantage over classical kernel estimation, performed by random sampling <cit.>.
To increase the complexity of the classical simulation of the ZZ feature map, additional basis-changing layers are included by repeating the Ũ_ϕ(x) transformation
𝒰^d_ϕ(x) = ( Ũ_ϕ(x))^d = ( U_ϕ(x) H^⊗ n)^d, d ∈ℕ.
The transformation 𝒰^d_ϕ(x) is called the ZZ feature map with d repetitions.
§.§.§ Circuit Parameterization
Having defined the ZZ feature map, we turn our attention to the possibility of introducing additional parameters into the circuit to maximize the kernel target alignment of the data in the feature space.
We follow the approach of modifying the initial state of the circuit <cit.> on which the feature map acts.
The initial state will be parameterized with continuous variables θ (Fig. <ref>), with respect to which we will perform kernel target alignment optimization.
§.§.§ Kernel Target Alignment
Considering a collection of quantum states obtained by applying a quantum embedding |ϕ(x_i)⟩ = 𝒰^d_ϕ(x_i) V_θ | 0 ⟩^⊗ n to different classic input data, it is straightforward to reason about them in terms of kernel methods.
The kernel 𝒦 with respect to any two embedded data x_i, x_j can be defined as the fidelity between the resulting quantum states,
𝒦_ij = |⟨ϕ(x_i) | ϕ(x_j) ⟩|^2 = |⟨ 0 |^⊗ n V_θ^† 𝒰^d †_ϕ(x_j)𝒰^d_ϕ(x_i) V_θ | 0 ⟩^⊗ n|^2.
This kernel 𝒦 is known as the quantum kernel.
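For intuition, the quantum kernel for n=2 qubits can be simulated directly with statevectors; the sketch below builds d repetitions of the ZZ feature map (with V_θ omitted, i.e., taken as the identity, which is our simplification) and evaluates K_ij as a fidelity:

import numpy as np

# Statevector sketch of the fidelity kernel K_ij = |<phi(x_i)|phi(x_j)>|^2
# for n = 2 qubits. U_phi(x) is diagonal: exp(i(phi_1 Z_0 + phi_2 Z_1 + phi_12 Z_0 Z_1)).
H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
H2 = np.kron(H1, H1)

def zz_state(x, d=2):
    phi1, phi2 = np.pi * x[0], np.pi * x[1]
    phi12 = np.pi * (1.0 - x[0]) * (1.0 - x[1])
    z0 = np.array([1, -1, 1, -1])          # diagonal of Z on qubit 0 (our convention)
    z1 = np.array([1, 1, -1, -1])          # diagonal of Z on qubit 1
    diag = np.exp(1j * (phi1 * z0 + phi2 * z1 + phi12 * z0 * z1))
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                           # start from |00>
    for _ in range(d):
        psi = diag * (H2 @ psi)            # one repetition: H^{otimes 2}, then U_phi(x)
    return psi

def quantum_kernel(X, Y):
    SX = np.array([zz_state(x) for x in X])
    SY = np.array([zz_state(y) for y in Y])
    return np.abs(SX.conj() @ SY.T) ** 2   # fidelity Gram matrix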
Consider a kernel function given by
𝒦̅_ij=
+1 if x_i and x_j are in the same class
-1 if x_i and x_j are in different classes.
It shows a clear distinction between classes of data points and is called an ideal kernel.
In general, in almost every situation, one will not be able to find the exact feature map that gives rise to the ideal kernel.
Therefore, parametrized families of feature maps are used to optimize the resulting kernel matrix in such a way that it resembles the ideal kernel as closely as possible.
The function that indicates the similarity between a specific and ideal kernel matrices is called kernel target alignment <cit.>
𝒯(𝒦) = ⟨𝒦, 𝒦̅⟩_F/√(⟨𝒦, 𝒦⟩_F ⟨𝒦̅, 𝒦̅⟩_F),
where ⟨ A, B⟩_F = Tr{ A^T B } is a Frobenius inner product.
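In code, the alignment is a ratio of Frobenius inner products; with labels y ∈ {−1,+1}^m, the ideal kernel above is simply the outer product y yᵀ:

import numpy as np

# Kernel-target alignment T(K) = <K, Kbar>_F / sqrt(<K, K>_F <Kbar, Kbar>_F).
def target_alignment(K, y):
    Kbar = np.outer(y, y)                   # +1 same class, -1 different class
    num = np.sum(K * Kbar)                  # Frobenius inner product Tr(K^T Kbar)
    return num / np.sqrt(np.sum(K * K) * np.sum(Kbar * Kbar))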
§ EXPERIMENTAL RESULTS
The objective of our study is to compare hybrid SVMs with their classic counterparts.
The results, presented in Table <ref>, are obtained by using the Aer simulator, whereas the optimization algorithm is the standard simultaneous perturbation stochastic approximation.
The SVM score is obtained from the support vector classification, with the radial basis function (RBF) kernel with default γ = 1/mσ^2 (where m=2 is the number of features and σ^2 is the variance of the data) and C = 1, the latter being the regularization parameter. For each simulation run, we randomly sample 800 pixels for the training set, and 200 pixels for the test set (the training and test sets are non-overlapping).
To keep the size of the quantum circuit minimal, we decrease the number of data features by running a principal component analysis before feeding it into the algorithm. The number of four spectral bands is reduced to two features.
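Putting the pieces together, a hybrid-SVM run can be sketched with scikit-learn's precomputed-kernel interface, reusing the quantum_kernel function from the earlier sketch; X_train, y_train, X_test, y_test are assumed to hold the sampled pixels, and the PCA features should be rescaled to a bounded range before entering the feature map:

from sklearn.decomposition import PCA
from sklearn.svm import SVC

# PCA to m = 2 features, then an SVM on the precomputed quantum Gram matrix.
pca = PCA(n_components=2).fit(X_train)
Xtr, Xte = pca.transform(X_train), pca.transform(X_test)
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(quantum_kernel(Xtr, Xtr), y_train)         # train Gram: n_train x n_train
acc = clf.score(quantum_kernel(Xte, Xtr), y_test)  # test Gram: n_test x n_train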
The ZZ-feature map is chosen to consist of d=2 repetitions. The results of the Wilcoxon matched-pairs signed rank test show that there is no statistically significant difference between hybrid SVMs with a quantum kernel and classic SVMs with an RBF kernel (at p<0.05). Therefore, it shows that the classification methods with quantum kernels based on ZZ-feature map are, at least, competitive with classical SVM models. With the further development of quantum kernels, we expect hybrid methods to be advantageous over classical classification methods.
§ CONCLUSIONS
We introduced the design and implementation of an SVM with quantum kernels.
The proposed algorithm was experimentally verified on the cloud detection benchmark dataset.
The main takeaway from the work is that—at this stage—SVMs with the quantum kernel have a classification accuracy on par with classic SVMs with RBF kernel.
The experiment was performed with a quantum computer simulator.
In <cit.>, underparametrized quantum circuits were applied to a similar task on the whole 38-Cloud dataset. Our current results are consistent with that work.
To estimate the effect of noise and better understand the computational time, more work should focus on running the algorithm on quantum computers.
We anticipate that further investigation of the quantum feature maps—including using full dimension of the dataset (m=4), new data mapping functions or different generators—will result in an additional improvement of the classification performance.
This would indicate a strong use case for quantum computers in SVM models.
§.§.§ Acknowledgements
This work was funded by the European Space Agency,
and supported by the ESA Φ-lab
(<https://philab.phi.esa.int/>) AI-enhanced
Quantum Computing for Earth Observation (QC4EO)
initiative, under ESA contract No. 4000137725/22/NL/GLC/my.
AM, JM, GC, and FS were supported by the Priority Research
Areas Anthropocene and Digiworld under the program
Excellence Initiative – Research University at the
Jagiellonian University in Kraków. JN was supported
by the Silesian University of Technology grant for
maintaining and developing research potential.
|
http://arxiv.org/abs/2307.04026v1 | 20230708182039 | Dowker-type theorems for disk-polygons in normed planes | [
"Bushra Basit",
"Zsolt Lángi"
] | math.MG | [
"math.MG",
"52A40, 52A21, 52A30"
] |
Dowker-type theorems for disk-polygons in normed planes
Bushra Basit
Zsolt Lángi
Bushra Basit, Department of Algebra and Geometry, Budapest University of Technology and Economics,
Műegyetem rkp. 3., H-1111 Budapest, Hungary
[email protected]
Zsolt Lángi, Department of Algebra and Geometry, Budapest University of Technology and Economics, and MTA-BME Morphodynamics Research Group,
Műegyetem rkp. 3., H-1111 Budapest, Hungary
[email protected]
Partially supported by the National Research, Development and Innovation Office, NKFI, K-147544 grant.
2020 Mathematics Subject Classification: 52A40, 52A21, 52A30
A classical result of Dowker (Bull. Amer. Math. Soc. 50: 120-122, 1944) states that for any plane convex body K in the Euclidean plane, the areas of the maximum (resp. minimum) area convex n-gons inscribed (resp. circumscribed) in K is a concave (resp. convex) sequence. It is known that this theorem remains true if we replace area by perimeter, the Euclidean plane by an arbitrary normed plane, or convex n-gons by disk-n-gons, obtained as the intersection of n closed Euclidean unit disks. The aim of our paper is to investigate these problems for C-n-gons, defined as intersections of n translates of the unit disk C of a normed plane. In particular, we show that Dowker's theorem remains true for the areas and the perimeters of circumscribed C-n-gons, and the perimeters of inscribed C-n-gons. We also show that in the family of origin-symmetric plane convex bodies, for a typical element C with respect to Hausdorff distance, Dowker's theorem for the areas of inscribed C-n-gons fails.
§ INTRODUCTION
For any integer n ≥ 3 and plane convex body K, let A_n(K) (resp. a_n(K)) denote the infimum (resp. supremum) of the areas of the convex n-gons circumscribed about (resp. inscribed in) K. Verifying a conjecture of Kerschner, Dowker <cit.> proved that for any plane convex body K, the sequences { A_n(K) } and { a_n(K) } are convex and concave, respectively. It was proved independently by L. Fejes Tóth <cit.>, Molnár <cit.> and Eggleston <cit.> that the same statements remain true if we replace area by perimeter, where the last author also showed that these statements are false if we replace area by Hausdorff distance. These results are known to be true also in any normed plane <cit.>. Dowker's theorems have become important in many areas of discrete geometry, in particular in the theory of packing and covering <cit.>, and are often used even today (see e.g. <cit.>).
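The classical statement is easy to verify numerically in the simplest case: for K the Euclidean unit disk, the optimal inscribed and circumscribed n-gons are regular, giving the closed forms checked in the following sketch (an illustration only, not part of the proofs below):

import numpy as np

# Dowker's theorem for the unit disk: a_n = (n/2) sin(2*pi/n) is concave and
# A_n = n tan(pi/n) is convex in n.
a = lambda n: 0.5 * n * np.sin(2.0 * np.pi / n)
A = lambda n: n * np.tan(np.pi / n)
for n in range(4, 12):
    assert a(n - 1) + a(n + 1) <= 2.0 * a(n)   # concavity of inscribed areas
    assert A(n - 1) + A(n + 1) >= 2.0 * A(n)   # convexity of circumscribed areas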
Among many variants of Dowker's theorems that have appeared in the literature, we mention only one, which is related to the notion of spindle convexity. This concept goes back to a paper of Mayer <cit.> who, for any given convex body C in Euclidean space, considered sets X with the property that for any points p,q ∈ X, X contains the intersection of all translates of C containing p,q. He called these sets hyperconvex. His paper led to several papers in this topic in the 1930s and 40s, which, however, seems to have been forgotten by the end of the century. In modern times, a systematic investigation of hyperconvex sets was started in the paper <cit.> in 2007 for the special case that C is a closed Euclidean ball, and a similar paper <cit.> appeared in 2013, dealing with any convex body C (see also <cit.>).
Hyperconvex sets have appeared in the literature under several different names: spindle convex, strongly convex or superconvex sets (see e.g. <cit.>), and appear in different areas of mathematics <cit.>. In this paper, we follow the terminology in <cit.>, and call a set satisfying the property in Mayer's paper C-spindle convex, or shortly C-convex, and if C is a closed Euclidean unit ball, we call it spindle convex (see Definition <ref>).
One of the results related to spindle convex sets is due to G. Fejes Tóth and Fodor <cit.> who extended Dowker's theorems, together with their variants for perimeter, for spindle convex sets; in these theorems the role of inscribed or circumscribed convex n-gons is played by the so-called disk-n-gons, obtained as the intersections of n closed Euclidean unit disks. They also proved similar theorems in hyperbolic or spherical plane.
Our main goal is to investigate a normed version of the problem in <cit.>. To state our results, recall that the unit ball of any finite dimensional normed space is a convex body symmetric to the origin o, and any such body is the unit ball of a finite dimensional normed space. Thus, in the paper we choose an arbitrary o-symmetric convex disk C in the real normed space ^2, and work in the normed plane with unit disk C, which we regard as ^2 equipped with the norm ||·||_C of C.
In the paper, by a convex disk we mean a compact, convex planar set with nonempty interior. We denote the family of convex disks by , and the family of o-symmetric convex disks by _o. In the paper we regard and _o as topological spaces with the topology induced by Hausdorff distance.
Before presenting our results, recall the well-known fact that any finite dimensional real normed space can be equipped with a Haar measure, and that this measure is unique up to multiplication of the standard Lebesgue measure by a scalar (cf. e.g. <cit.>). This scalar does not play a role in our investigation, and in the paper area(·) denotes 2-dimensional Lebesgue measure.
For any C ∈_o and convex polygon Q, we define the C-perimeter of Q as the sum of the lengths of the sides of Q, measured in the norm generated by C. The C-perimeter of a convex disk K ⊂^2, denoted by per_C(K), is the supremum of the C-perimeters of all convex polygons inscribed in K.
We note that, moving its vertices one by one to the boundary of K in a suitable direction, for any convex polygon Q contained in K one can find a convex polygon Q' inscribed in K with per_C(Q) ≤per_C(Q'). This shows, in particular, that for any two plane convex bodies K ⊆ L ⊂^2, we have per_C(K) ≤per_C(L), with equality if and only if K=L (see also <cit.>).
Furthermore, it is worth observing that a straightforward modification of Definition <ref> can be used to define the C-length of a rectifiable curve Γ⊂^2, denoted by per_C(Γ).
Our next definition can be found in <cit.> and its origin goes back to <cit.>.
Let C ∈_o and consider two (not necessarily distinct) points p, q ∈^2 such that a translate of C contains both p and q.
Then the C-spindle (denoted as [p,q]_C) of p and q is the intersection of all translates of C
that contain p and q. If no translate of C contains p and q, we set [p,q]_C = ^2.
We call a set K ⊂^2 C-spindle convex (or shortly C-convex), if for any p,q ∈ K, we have [p,q]_C ⊆ K.
We recall from <cit.> that a closed set in ^2 different from ^2 is C-convex if and only if it is the intersection of some translates of C.
The intersection of n translates of C is called a C-n-gon for n ≥ 3.
In our next definition and throughout the paper, area(·) denotes standard Lebesgue measure.
Let n ≥ 3 and let K be a C-convex disk in ^2, where C ∈_o. We set
Â_n^C(K) = inf{area(Q) : Q is a C-n-gon circumscribed about K };
â_n^C(K) = sup{area(Q) : Q is a C-n-gon inscribed in K };
P̂_n^C(K) = inf{per_C(Q) : Q is a C-n-gon circumscribed about K };
p̂_n^C(K) = sup{per_C(Q) : Q is a C-n-gon inscribed in K }.
For any C ∈_o and C-convex disk K, the sequences {Â_n^C(K) }, {P̂_n^C(K) } are convex, and the sequence {p̂_n^C(K) } is concave. That is, for any n ≥ 4, we have
Â_n-1^C(K)+Â_n+1^C(K) ≥ 2 Â_n^C(K), P̂_n-1^C(K)+P̂_n+1^C(K) ≥ 2 P̂_n^C(K), and
p̂_n-1^C(K)+p̂_n+1^C(K) ≤ 2 p̂_n^C(K).
As a consequence of Theorem <ref>, we prove Theorem <ref>, and recall that similar statements have been derived in <cit.> for the Euclidean areas of inscribed and circumscribed polygons from the classical results of Dowker in <cit.> (for their spindle convex variants, see <cit.>).
Let n ≥ 3 and k ≥ 2. Assume that k is a divisor of n and both K and C have k-fold rotational symmetry. Then there are C-n-gons Q^A, Q^P circumscribed about K which have k-fold rotational symmetry, and (Q^A)= Â_n^C(K) and _C(Q^P)= P̂_n^C(K). Similarly, there is a C-n-gon Q^p inscribed in K which has k-fold rotational symmetry, and _C(Q^p)= p̂_n^C(K).
Before our next theorem, we remark that in a topological space ℱ, a subset is called residual if it is a countable intersection of sets each of which has dense interior in ℱ. The elements of a residual subset of ℱ are called typical.
Our next result shows that Dowker's theorem for the sequence { A_n^C(K) } fails in a strong sense.
A typical element C of _o satisfies the property that for every n ≥ 4, there is a C-convex disk K with
â_n-1^C(K) + â_n+1^C(K) > 2 â_n^C(K).
The structure of the paper is as follows. In Section <ref>, we present the necessary notation and prove some lemmas. Then in Sections <ref> and <ref> we prove Theorems <ref> and <ref>, and Theorem <ref>, respectively. Finally, in Section <ref>, we collect our additional remarks and propose some open problems.
§ PRELIMINARIES
In the paper, for simplicity, for any x,y ∈^2, we denote by [x,y] the closed segment with endpoints x,y. We equip ^2 also with a Euclidean norm, which we denote by ||·||, and use the notation B^2 for the Euclidean closed unit disk centered at o. Recall that the Euclidean diameter of a compact set X ⊂^2 is the Euclidean distance of a farthest pair of points in X. If we replace Euclidean distance by distance measured in the norm of C, we obtain the C-diameter of X.
Recall that for any set X ⊆^2, the C-convex hull, or shortly C-hull, of X is the intersection of all C-convex sets that contain X. We denote it by conv_C(X), and note that it is C-convex, and if X is closed, then it coincides with the intersection of all translates of C containing X <cit.>.
In the following list we collect some elementary properties of C-spindles and C-n-gons that we are going to use frequently in the paper.
We have the following.
(a) For any x,y ∈^2 with ||x-y||_C ≤ 2, [x,y]_C is the intersection of at most two translates of C, and if [x,y]_C is a translate of C, then ||x-y||_C=2.
(b) Conversely, a nonempty intersection of at most two translates of C is the C-spindle of two (not necessarily distinct) points.
(c) For any x, y ∈^2, [x,y]_C=[x,y] if and only if a translate of C contains [x,y] in its boundary.
(d) If [x,y]_C ≠ [x,y], then [x,y]_C is a centrally symmetric convex disk whose boundary consists of two arcs, connecting x and y, that are contained in the boundary of some translates of C.
(e) Any C-n-gon is the C-hull of at most n points contained in a translate of C, and vice versa.
Let x,y ∈ C ∈_o, with ||x-y||_C < 2. Then, for any sequences x_m → x, y_m → y, C_m → C with x_m,y_m ∈^2 and C_m ∈_o, we have [x_m,y_m]_C_m→ [x,y]_C.
We observe that the statement in Remark <ref> does not necessarily hold if ||x-y||_C = 2. As an example, we can choose C as a parallelogram, x_m=x and y_m=y as the midpoints of two opposite sides S_1, S_2 of C, and { C_m } as a sequence of o-symmetric hexagons inscribed in C whose elements intersect S_1 and S_2 only in x and y, respectively.
For any n ≥ 4, let ^n_a denote the subfamily of the elements C of _o satisfying the Dowker-type inequality â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K) for any C-convex disk K. We define ^n_A, ^n_p and ^n_P similarly.
Our first lemma describes the topological properties of these families.
For any n ≥ 4, ^n_a, ^n_A, ^n_p and ^n_P are closed.
We prove the assertion only for ^n_a, as for the other quantities the proof is analogous.
Let C ∉^n_a, and suppose for contradiction that there is a sequence C_m ∈_a^n with C_m → C.
Since C ∉^n_a, there is a C-convex disk K satisfying â_n-1^C(K) + â_n+1^C(K) > 2 â_n^C(K). By Remark <ref>, if K contains points at C-distance equal to 2, then K is a C-spindle, which yields that â_j^C(K) = area(K) for any j ≥ 3. Thus, according to our assumptions, K does not contain points at C-distance equal to 2, i.e. its C-diameter is strictly less than 2. On the other hand, since K is C-convex, K is the intersection of the translates of C that contain it. Thus, there is a set X ⊂^2 such that K = ⋂_x ∈ X (x+C).
Let K_m = ⋂_x ∈ X (x+C_m). Then, clearly, K_m is C_m-convex, and K_m → K. For j=n-1,n+1, let Q_j be a C-j-gon inscribed in K such that area(Q_j)=â_j^C(K). Then, as K_m → K and C_m → C, there are sequences { Q_n-1^m } and { Q_n+1^m } such that for j=n-1,n+1, Q_j^m is a C_m-j-gon inscribed in K_m, and Q_j^m → Q_j. By the properties of Hausdorff distance, the C_m-diameter of K_m is strictly less than 2 if m is sufficiently large. Then we can apply Remark <ref>, and obtain that area(Q_j^m) →area(Q_j) for j=n-1,n+1. From this, we have area(Q_n-1^m)+area(Q_n+1^m) →â_n-1^C(K) + â_n+1^C(K). On the other hand, since C_m ∈^n_a, there is a sequence { Q_n^m } such that Q_n^m is a C_m-n-gon inscribed in K_m, and 2 area(Q_n^m) ≥area(Q_n-1^m)+area(Q_n+1^m). By compactness, we may assume that { Q_n^m } converges to a C-n-gon Q_n. Clearly, Q_n is contained in K, and by Remark <ref>, area(Q_n^m) →area(Q_n). Thus, â_n-1^C(K) + â_n+1^C(K) ≤ 2 area(Q_n) ≤ 2 â_n^C(K); a contradiction.
Lemma <ref> readily yields Corollary <ref>, since the intersection of arbitrarily many closed sets is closed.
The family ⋃_n=4^∞_a^n of the elements C of _o satisfying â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K) for all n ≥ 4 and all C-convex disks K is closed in _o. Similar statements hold for the families ⋃_n=4^∞_p^n, ⋃_n=4^∞_A^n and ⋃_n=4^∞_P^n.
Let C ∈_o, and let x,y be points with ||x-y||_C ≤ 2.
Then the arc-distance ρ_C (x,y) of x,y with respect to C (or shortly, C-arc-distance of x and y) is the minimum of the C-length of the arcs, with endpoints x,y, that are contained in z+(C) for some y ∈^2.
For any x,y ∈^2 with ||x-y||_C ≤ 2, if [x,y]_C ≠ [x,y], then ρ_C (x,y) = 1/2_C ([p,q]_C). Furthermore, if [x,y]_C = [x,y], then ρ_C(x,y)=||x-y||_C.
We recall the following version of the triangle inequality from <cit.>.
[Lángi, Naszódi, Talata]
Let C ∈_o, and let x,y,z be points such that each pair has a C-arc-distance.
(a) If y ∈ [x,z]_C, then ρ_C(x,y)+ρ_C(y,z) ≤ρ_C(x,z).
(b) If y lies in the boundary of [x,z]_C, then ρ_C(x,y)+ρ_C(y,z) = ρ_C(x,z).
(c) If y ∉ [x,z]_C and C is smooth, then ρ_C(x,y)+ρ_C(y,z) ≥ρ_C(x,z).
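When C is the Euclidean unit disk, ρ_C has the closed form 2 arcsin(|x−y|/2), and part (a) can be checked numerically for points y on the segment [x,z], which lies in [x,z]_C (an illustration only):

import numpy as np

# rho(x, y) = 2 * arcsin(|x - y| / 2) for the Euclidean unit disk; for y on
# the segment [x, z] we must have rho(x, y) + rho(y, z) <= rho(x, z).
rho = lambda x, y: 2.0 * np.arcsin(np.linalg.norm(x - y) / 2.0)
x, z = np.array([0.0, 0.0]), np.array([1.5, 0.0])
for t in np.linspace(0.0, 1.0, 11):
    y = (1.0 - t) * x + t * z
    assert rho(x, y) + rho(y, z) <= rho(x, z) + 1e-12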
We start with a consequence of this inequality.
Let p,q,r,s ∈^2 be distinct points contained in a translate of the smooth o-symmetric convex disk C, and assume that conv_C{p,q,r,s} contains all of them and in this counterclockwise order. Then
ρ_C(p,q)+ρ_C(r,s) ≤ρ_C(p,r)+ρ_C(q,s).
Note that according to our conditions, the two C-arcs in the boundary of [p,r]_C intersect both C-arcs constituting the boundary of [q,s]_C. Let s' denote the intersection point of one of the C-arcs in [p,r]_C and one of the C-arcs in [q,s]_C, where the arcs are chosen to satisfy s' ∈conv_C { p,q,r } and s' ∈conv_C {p,r,s}. Then s' ∉ [p,q]_C and s' ∉ [r,s]_C. Since [s,s']_C, [q,s']_C ⊂ [q,s]_C, it is easy to see that p,q,s', and also r,s,s', are in C-convex position.
Thus, by Lemma <ref>, we have ρ_C(p,q) ≤ρ_C(p,s')+ρ_C(q,s') and ρ_C(r,s) ≤ρ_C(r,s') + ρ_C(s,s'), implying the assertion.
In the following lemma, let ^1 denote the Euclidean unit circle centered at the origin. For simplicity, if x,y ∈^1, we denote by xy the Euclidean closed circle arc obtained as the orbit of x when it is rotated around o in counterclockwise direction until it reaches y. Let 𝒮 denote the family of closed circle arcs xy of S. Furthermore, we say that a function f : 𝒮→ has a k-fold rotational symmetry for some positive integer k, if for any S,S' ∈𝒮, where S' is a rotated copy of S in counterclockwise direction with angle 2π/k, we have f(S)=f(S'). Lemma <ref> can be regarded as a functional form of Dowker's theorems.
Let f : 𝒮→ be a bounded function with f(xx)=0 for all x ∈^1. For any integer n ≥ 3, let
M_n = sup{∑_S ∈ X f( S ) : X ⊂𝒮 is a tiling of ^1 with |X| = n }.
If for any x_2x_3⊂x_1x_4, we have
f(x_1x_3)+f(x_2x_4) ≥ f(x_1x_4)+f(x_2x_3),
then the sequence { M_n } is concave.
Furthermore, if there is some positive integer k such that k | n and f has k-fold rotational symmetry, and there is an n-element tiling X of ^1 such that M_n = ∑_S ∈ X f(S), then there is an n-element tiling X' of ^1 with k-fold rotational symmetry such that M_n = ∑_S ∈ X' f(S).
Before the proof, we remark that X ⊂𝒮 is called an m-tiling of ^1 for some positive integer m if every point of ^1 belongs to at least m members of X, and to the interiors of at most m members of X.
To prove the assertion for { M_n }, we need to show that M_n-1+M_n+1≤ 2M_n is satisfied for any n ≥ 4. In other words, we need to show that for any tilings X={x_0x_1, …x_n-2x_n-1}, Y={y_0y_1, …y_n y_n+1} of ^1, there are tilings Z={z_0z_1, …z_n-1z_n} and W={w_0w_1, …w_n-1w_n} of ^1 such that
∑_i=1^n-1 f(x_i-1x_i) + ∑_i=1^n+1 f(y_i-1y_i) ≤∑_i=1^n f(z_i-1z_i) + ∑_i=1^n f(w_i-1w_i).
Note that the union A_0 of the two tilings is a 2-tiling of ^1.
Assume that x_1, x_2, …, x_n-1, and y_1,y_2, …, y_n+1 are in this counterclockwise order in ^1, and that y_1 ∈x_1x_2.
Due to the possible existence of coinciding points in the above two sequences, we unite these sequences as a single sequence v_1, v_2, …, v_2n in such a way that the points are in this counterclockwise order in ^1, v_1=x_1, and removing the x_i (resp. y_j) from this sequence we obtain the sequence y_1, …, y_n+1 (resp. x_1, …, x_n-1). In the proof we regard this sequence as a cyclic sequence, where the indices are determined mod 2n, and, with a little abuse of notation, we say that v_iv_j covers v_kv_l only if v_kv_l⊆v_iv_j and i < k < l < j < i+2n.
Our main goal will be to modify the 2-tiling A_0 in such a way that the value of f does not decrease but the number of covering pairs strictly decreases.
Note that since A_0 is the union of two tilings consisting of (n-1) and (n-1) arcs, respectively, A_0 contains covering pairs.
Assume that v_iv_j covers v_kv_l. Then let A_1 denote the 2-tiling of ^1 in which v_iv_j and v_kv_l
are replaced by v_iv_l and v_kv_j. According to our conditions, ∑_S ∈ A_0 f(S) ≤∑_S ∈ A_1 f(S), and the number of covering pairs in A_1 is strictly less than in A_0. Repeating this procedure we obtain a 2-tiling A_t of ^1 for which ∑_S ∈ A_0 f(S) ≤∑_S ∈ A_t f(S) and which does not contain covering pairs. Then, A_t decomposes into the two tilings {v_1v_3, v_3v_5, …, v_2n-1v_1} and {v_2v_4, v_4v_6, …, v_2nv_2}, each of which contains exactly n arcs. This proves the assertion for { M_n }.
Now we prove the second part. Let X be an n-element tiling of ^1 such that M_n = ∑_S ∈ X f(S). Assume that X does not have k-fold rotational symmetries. For i=1,2,…, k, let X_i denote the rotated copy of X by 2iπ/k in counterclockwise direction. Then Y= ⋃_i=1^k X_i is a k-fold tiling of ^1 with k-fold rotational symmetry, and ∑_S ∈ Y f(S) = k ∑_S ∈ X f(S). Since X has no k-fold rotational symmetry, Y contains covering pairs, and we may apply the argument in the previous paragraph.
We remark that an analogous proof yields Lemma <ref>, the proof of which we leave to the reader.
Let f : 𝒮→ be a bounded function with f(pp)=0 for all p ∈^1. For any integer n ≥ 3, let
m_n = inf{∑_S ∈ X f( S ) : X ⊂𝒮 is a tiling of ^1 with |X| = n }.
If for any x_2x_3⊂x_1x_4, we have
f(x_1x_3)+f(x_2x_4) ≤ f(x_1x_4)+f(x_2x_3),
then the sequence { m_n } is convex.
Furthermore, if there is some positive integer k such that k | n, f has k-fold rotational symmetry, and there is an n-element tiling X of ^1 such that m_n = ∑_S ∈ X f(S), then there is an n-element tiling X' of ^1 with k-fold rotational symmetry such that m_n = ∑_S ∈ X' f(S).
In the next lemma, by the partial derivatives (∂_p f) (p_0q_0) (resp. (∂_q f) (p_0q_0)) of the function f(pq) at p_0q_0, we mean the derivative of the function f(p(t)q_0) (resp. f(q(t)p_0)) at t=0, where p(t) (resp. q(t)) is the rotated copy of p_0 (resp. q_0) around o by angle t in counterclockwise direction.
Let f : 𝒮→ be a bounded function with f(pp) = 0 for all p ∈^1. Assume that for any p_0q_0∈^1, where p_0 ≠ q_0, (∂_p ∂_q f)(p_0q_0) is a continuous function of p_0q_0 in both variables.
Then, for any x_1, x_2, x_3, x_4 ∈^1 in this counterclockwise order, we have
f(x_1x_3)+f(x_2x_4) ≥ f(x_1x_4)+f(x_2x_3)
if and only if (∂_p ∂_q f)(p_0q_0) ≥ 0 for all p_0 ≠ q_0.
Similarly, for any x_1, x_2, x_3, x_4 ∈^1 in this counterclockwise order, we have
f(x_1x_3)+f(x_2x_4) ≤ f(x_1x_4)+f(x_2x_3)
if and only if (∂_p ∂_q f)(p_0q_0) ≤ 0 for all p_0 ≠ q_0.
We prove only the first part. Assume that (∂_p ∂_q f)(p_0q_0) ≥ 0 for all p_0 ≠ q_0. Let x_2x_3⊂x_1x_4.
Then, by the Newton-Leibniz Theorem we have
0 ≤∫_x_3^x_4∫_x_1^x_2 (∂_p ∂_q f)(p_0q_0) d p_0 d q_0 = f(x_2x_4)-f(x_2x_3)-f(x_1x_4)+f(x_1x_3).
Furthermore, if we have (∂_p ∂_q f)(p_0q_0) < 0 for some p_0 ≠ q_0, then, by continuity and the same argument, there are some points x_1,x_2 and x_3,x_4 sufficiently close to p_0 and q_0, respectively, such that x_2x_3⊂x_1x_4, and 0 > f(x_2x_4)-f(x_2x_3)-f(x_1x_4)+f(x_1x_3).
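As a concrete instance of the first case, take f(pq) to be the Euclidean chord length 2 sin((q−p)/2) of an arc with angular endpoints p < q: its mixed partial (1/2) sin((q−p)/2) is nonnegative, so the four-point inequality must hold, as the random check below confirms (an illustration only):

import numpy as np

# f(p, q) = 2 sin((q - p)/2) has d_p d_q f = (1/2) sin((q - p)/2) >= 0 on
# 0 <= q - p <= 2*pi, so f(x1x3) + f(x2x4) >= f(x1x4) + f(x2x3).
f = lambda p, q: 2.0 * np.sin((q - p) / 2.0)
rng = np.random.default_rng(1)
for _ in range(1000):
    x1, x2, x3, x4 = np.sort(rng.uniform(0.0, 2.0 * np.pi, 4))
    assert f(x1, x3) + f(x2, x4) >= f(x1, x4) + f(x2, x3) - 1e-12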
§ PROOF OF THEOREMS <REF> AND <REF>
Note that by Lemma <ref> and Corollary <ref>, it is sufficient to prove
Theorem <ref> for any everywhere dense subset of _o, and applying a similar consideration, we have the same for Theorem <ref>. Thus, we may assume that C has C^∞-class boundary and strictly positive curvature. Under this condition, the quantities defined in Definition <ref> are continuous functions of K for any fixed value of n, and thus, we may assume that K has C^∞-class boundary, and that the curvature of the boundary of K at any point p is strictly greater than the curvature of the boundary of C at the point q with the same outer unit normal as p.
Under the above conditions, for any points p,q in the boundary of K, [p,q]_C ∖{ p,q } is contained in the interior of K.
In the proof we identify ^1 with the set / { 2kπ : k ∈ℤ}. Let us parametrize (K) as the curve Γ : ^1 →^2, where the outer unit normal vector at Γ(φ) is (cosφ, sinφ).
Then, for any two points Γ(φ_1), Γ(φ_2) with φ_1 < φ_2 < φ_1+2π, let us denote the arc of Γ connecting them in counterclockwise direction by Γ|_[φ_1,φ_2]. Furthermore, recall <cit.>, stating that K is the intersection of the translates of C containing it. Thus, for any φ∈ [0,2π], there is a unique translate x+C of C containing K with Γ(φ) in the boundary of x+C.
We denote this translate by C(φ)=x(φ)+C, and call it the supporting C-disk of K at Γ(φ) (see Figure <ref>).
We define the following regions:
(i) r(φ_1,φ_2) is the closure of the connected component of K ∖ [Γ(φ_1), Γ(φ_2)]_C containing Γ|_[φ_1,φ_2];
(ii) R(φ_1,φ_2) is the closure of the connected component of (C(φ_1) ∩ C(φ_2) ∖ K) containing Γ|_[φ_1,φ_2];
(iii) p(φ_1,φ_2) = _C(r(φ_1,φ_2)) - _C(Γ|_[φ_1,φ_2]);
(iv) A(φ_1,φ_2) = (R(φ_1,φ_2));
(v) P(φ_1,φ_2) = _C(R(φ_1,φ_2)) - _C(Γ|_[φ_1,φ_2]).
§.§ The proof of Theorems <ref> and <ref> for Â_n^C(K)
Let I[X] : ^2 → denote the indicator function of X ⊂^2. Then it can be seen directly that for any φ_1 < φ_2 < φ_3 < φ_4 < φ_1+2π, the function
I[R(φ_1,φ_4)] + I[R(φ_2,φ_3)] - I[R(φ_1,φ_3)]- I[R(φ_2,φ_4)]
has nonnegative values at every point. Thus, the conditions of Lemma <ref> are satisfied, implying the statement.
§.§ The proof of Theorems <ref> and <ref> for p̂_n^C(K)
Let φ_1 < φ_2 < φ_3 < φ_4 < φ_1+2π. Then, by Lemma <ref>,
ρ_C(Γ(φ_1),Γ(φ_4))+ρ_C(Γ(φ_2),Γ(φ_3)) ≤ρ_C(Γ(φ_1),Γ(φ_3))+ρ_C(Γ(φ_2),Γ(φ_4)).
Thus, the conditions of Lemma <ref> are satisfied, implying our statement.
§.§ The proof of Theorems <ref> and <ref> for P̂_n^C(K)
By Lemmas <ref> and <ref>, it is sufficient to prove that for any φ_1 < φ_2 < φ_1+π, the function ∂_φ_1∂_φ_2 P is a continuous nonpositive function.
In the remaining part of the subsection we prove this property.
For brevity, for any α < β < α +2π, we define z(α,β) as the intersection point of (C(α)) and (C(β)) contained in the boundary of R(α,β).
First, observe that P(φ_1,φ_2) = ρ_C(Γ(φ_1),z(φ_1,φ_2))+ ρ_C(z(φ_1,φ_2),Γ(φ_2)). Clearly, since C has C^∞-class boundary, ρ_C(·,·) is a C^∞-class function, implying that P(φ_1,φ_2) is C^∞-class, and ∂_φ_1∂_φ_2 P is continuous.
Now, let 0 < | Δ_1| , |Δ_2 | ≤ε for some sufficiently small ε > 0, and set p=z(φ_1,φ_2), q_1=z(φ_1,φ_2+Δ_2), q_2 = z(φ_1 + Δ_1,φ_2) and q=z(φ_1+Δ_1,φ_2+Δ_2).
To prove the assertion, it is sufficient to prove that
0 ≥1/Δ_1( P(φ_1+Δ_1,φ_2+Δ_2)-P(φ_1+Δ_1,φ_2)/Δ_2 - P(φ_1,φ_2+Δ_2)-P(φ_1,φ_2)/Δ_2) =
= 1/Δ_1 Δ_2( P(φ_1+Δ_1,φ_2+Δ_2) - P(φ_1+Δ_1,φ_2) - P(φ_1,φ_2+Δ_2) + P(φ_1,φ_2) ).
We do it in the case that Δ_1 < 0 and Δ_2 > 0, in the other cases a straightforward modification yields the assertion.
Note that in this case it is sufficient to show that
ρ_C(p,q_1)+ρ_C(p,q_2) ≤ρ_C(q,q_1)+ρ_C(q,q_2).
For i=1,2, let v_i denote the tangent vector of C(φ_i) at p pointing `towards' q_i in its boundary, and let w_i denote the tangent vector of K at Γ(φ_i) pointing towards p in (C(φ_i)).
Let C(φ)= x(φ)+C. Then lim_Δ→ 0^± (x(φ+Δ)-x(φ))/|x(φ+Δ)-x(φ)| = ± v for any value of φ, where v is the unit tangent vector of (K) at Γ(φ) pointing in the positive direction.
Let Θ(φ) denote the point of (C) with outer unit normal vector (cosφ, sinφ). Then x(φ)=Γ(φ)-Θ(φ) and
more generally,
x(φ+Δ)-x(φ) = ( Γ(φ+Δ)- Γ(φ) ) - ( Θ(φ+Δ)- Θ(φ) ).
Note that lim_Δ→ 0^± (Γ(φ+Δ)- Γ(φ))/|Γ(φ+Δ)- Γ(φ)| = lim_Δ→ 0^± (Θ(φ+Δ)- Θ(φ))/|Θ(φ+Δ)- Θ(φ)| = ± v, and, by the choice of the parametrizations of Γ and Θ, lim_Δ→ 0|Θ(φ+Δ)- Θ(φ)|/|Γ(φ+Δ)- Γ(φ)| = κ_Γ(φ)/κ_Θ(φ), where κ_Γ(φ) and κ_Θ(φ) denote the curvatures of Γ and Θ at Γ(φ) and Θ(φ), respectively. Thus, the assertion follows from our assumption that κ_Θ(φ) ≠κ_Γ(φ).
By Remark <ref>, C(φ_1) ∩ C(φ_2) is the C-spindle of p and another point, which we denote by p'.
By convexity, the tangent vectors of (C(φ_1)) pointing in counterclockwise direction, turn in counterclockwise direction from p to p'. Thus, the directions of the vectors v_2, w_1, v_1 are in this order in counterclockwise orientation, and the same holds for the vectors v_2, w_2, v_1.
For i=1,2, let C(φ_i+Δ_i)=y_i + C(φ_i). Then, by Lemma <ref>, if Δ_i is sufficiently small, we have that the vectors y_1,y_2 are between v_1 and v_2 according to counterclockwise orientation.
Consider the translate C_i' of C(φ_i) by q_i-p. The boundary of this translate contains q_i, and v_i is a tangent vector of C_i' at q_i. Thus, if q' = q_1+q_2-p (i.e. q' is the unique point for which p,q_1,q',q_2 are the vertices of a parallelogram in this counterclockwise order), then q' lies in the boundary of both C_1' and C_2'. On the other hand, by our observation about the tangent lines, if the Δ_i are sufficiently small, then q' is contained in Q. By symmetry, ρ_C(p,q_1) = ρ_C(q',q_1) and ρ_C(p,q_2) =ρ_C(q',q_2), and thus, the required inequality follows from the remark after Definition <ref>.
§ PROOF OF THEOREM <REF>
We prove the statement in several steps. For brevity, for any points z_1,z_2, …, z_k ∈^2, we set [z_1,z_2,…,z_k] = conv{ z_1,z_2,…, z_k } and
[z_1,z_2,…,z_k]_C = conv_C { z_1,z_2,…, z_k }.
Step 1.
Let us fix a Cartesian coordinate system, and consider the points p_1=(0,-1-t), p_2=(2.1,-0.9-t), p_3=(t+2,-1), p_4=(t+2,1), p_5=(2.1, 0.9+t), p_6=(0,1+t), q_1=(t,-1), q_2=(t,1), q_3=(-t,1) and q_4=(-t,-1) (see Figure <ref>). In the construction we assume that t is a sufficiently large positive value.
We define the hexagon H= [p_1,q_1,q_2,p_6,q_3,q_4] and the octagon K_1 = [p_1,p_2,…,p_6,q_3,q_4]. Note that H ⊂ K_1, and set G = (K_1) ∖(H), and G'=(K_1) ∩(H). In the following, D_1 denotes the Euclidean diameter of K_1.
We define C_1 as an o-symmetric convex 14-gon with vertices x_1,x_2,…,x_14 in counterclockwise order such that
(a) x_1 and x_8 are on the negative and the positive half of the y-axis, respectively;
(b) C_1 is symmetric to both coordinate axes;
(c) the sides [x_1,x_2], [x_2,x_3], [x_3,x_4], [x_4,x_5] are parallel to [p_1,p_2], [p_1,p_3], [p_2,p_3] and [p_3,p_4], respectively;
(d) we have ||x_2-x_1||, ||x_3-x_2||, ||x_4-x_3|| > D_1, and ||x_5-x_4||=2, i.e. [x_4,x_5] is a translate of [p_3,p_4].
Note that by our conditions, for any two points u,v ∈ G, each of the two C_1-arcs in the boundary of [u,v]_C_1 consists of translates of subsets of at most two consecutive sides of C_1, or they contain translates of [x_4,x_5] and possibly translates of subsets of the sides [x_3,x_4] and [x_5,x_6].
In particular, [p_1,p_6]_C_1 = H.
We estimate ([p_1,q,p_6]_C_1) for any q ∈ G with nonnegative y-coordinate. In the following p̅=(0,t+2) denotes the midpoint of [p_3,p_4].
Case 1: q ∈ [p̅,p_4]. Then ([p_1,q,p_6]_C_1) consists of G', parts of the segments [p_1,p_3] and [p_4,p_6], and two segments with q as an endpoint, parallel to [p_2,p_3] and [p_4,p_5], respectively. Thus, ([p_1,q,p_6]_C_1) is maximal if q=p̅, implying that
([p_1,q,p_6]_C_1) ≤([p_1,p̅,p_6]_C_1) = (H)+3/2t + 3
Case 2: q ∈ [p_4,p_5]. Assume that the x-coordinate of q is at least t+1. Then the curve ([p_1,q,p_6]_C_1) consists of G', a segment containing [p_1,q_1], a segment parallel to [p_3,p_4] and ending at q, and segment parallel to [p_4,p_6] and ending at q, and a subset of [p_5,p_6]. Observe that if t is sufficiently large, in this case ([p_1,q,p_6]_C_1) is maximal if the x-coordinate of q is equal to t+1. A similar consideration shows that if the x-coordinate of q is at most t+1, then ([p_1,q,p_6]_C_1) is maximal if q=p_5. Thus, in Case 2 we have
([p_1,q,p_6]_C_1) ≤([p_1,p_5,p_6]_C_1) =
= (H)+ ([q_2,p_4,p_5,p_6])=1/2( (H)+(K_1) ) - 2
Case 3: q ∈ [p_5,p_6]. Then ([p_1,q,p_6]_C_1) consists of G', a segment parallel to [q_2,p_6] and ending at q, a segment containing [p_1,q_1] as a subset, and a translate of [p_3,p_4]. Thus, in this case ([p_1,q,p_6]_C_1) is maximal if q=p_5, and we have
([p_1,q,p_6]_C_1) ≤([p_1,p_5,p_6]_C_1) = 1/2( (H)+(K_1) ) - 2.
Combining our results, if t is sufficiently large, for any q,q' ∈ G
([p_1,q,p_6]_C_1) + ([p_1,q',p_6]_C_1) ≤(H)+(K_1) - 4 <
< (H)+(K_1) = ([p_1,p_6]_C_1)+([p_1,p_2,p_5,p_6]_C_1),
where we used the observation that [p_1,p_2,p_5,p_6]_C_1 = K_1.
In the remaining part of the construction, we fix t in such a way that (<ref>) is satisfied.
Step 2.
In the next step, based on Step 1, we construct some C_2 ∈_o and a C_2-convex disk K_2 such that
â_3^C_2(K_2) + â_5^C_2(K_2) > 2 â_4^C_2(K_2).
Let p_7 = (-s,0), where s is sufficiently large, and set K_2 = (K_1 ∪{ p_7 }) (see Figure <ref>). Let D_2 denote the Euclidean diameter of K_2, and let C^+_1 (resp. C^-_1) denote the set of points of (C_1) with nonnegative (resp. nonpositive) x-coordinates.
We define C_2 as follows:
(a) C_2 is symmetric to both coordinate axes.
(b) (C_2) contains some translates u+ C^+_1 and -u+C^-_1, where u points in the direction of the positive half of the x-axis. We set w_3=u+x_1.
(c) In addition to the above two translates, (C_2) consists of segments [w_1,w_2], [w_2,w_3] and their reflections about one or both of the coordinate axes, such that [w_1,w_2], [w_2,w_3] are parallel to [p_6,p_7] and [p_5,p_7], respectively, and |w_1-w_2|, |w_2-w_3| > D_2.
We remark that if s is sufficiently large, then there is some C_2 ∈_o satisfying the above conditions, and K_2 is C_2-convex.
In the following, let Q_4 = [z_1,z_2,z_3,z_4]_C_2 denote a maximal area C-4-gon inscribed in K_2.
Let H'= (H ∪{ p_7 }) =[p_1,p_6,p_7]_C_2 and observe that K_2 = [p_1,p_2,p_5,p_6,p_7]_C_2. Then, to show the inequality in (<ref>), it is sufficient to show that (H')+(K_2) > 2 (Q_4).
Let Q = [p_1,p_5,p_6,p_7]_C_2. By the consideration in Step 1, we have that (Q) = 1/2 ((H')+(K_2))-2. Thus, we have (Q_4) ≥1/2 ((H')+(K_2))-2.
Let us define the points v_1 and v_6 as the images of p_1 and p_6, respectively, under the homothety with center p_7 and homothety ratio 1/√(s). An elementary computation shows that then v_1 = ( -(1-1/√(s))s, -(1+t)/√(s)) ∈ [p_1,p_7] and v_6 = ( -(1-1/√(s))s, (1+t)/√(s)) ∈ [p_6,p_7]. Note that since |v_6-v_1| = 2(1+t)/√(s) < 2 if s is sufficiently large, and (C_2) contains two vertical segments of length 2, we may assume that [v_1,v_6]_C_2 = [v_1,v_6]. In other words, we may assume that there is a translate of C_2 that contains K_2 ∖ [v_1,p_7,v_6] and does not overlap [v_1,p_7,v_6]. Thus, if z_i ∉ [v_1,p_7,v_6] for all 1 ≤ i ≤ 4, then Q_4 ⊆ K_2 ∖ [v_1,p_7,v_6], implying that in this case (Q_4) ≤(K_2) - ([v_1,p_7,v_6]) = (K_2) - 2 √(s)(1+t) < 1/2 ((H')+(K_2))-2; a contradiction. Consequently, in the following we may assume that z_4 ∈ [v_1,p_7,v_6].
Let v'_5 and v'_7 be the images of p_5 and p_7, respectively, under the homothety with center p_6 and ratio 1/√(s). Note that since there is a side of C_2 parallel to [v_5',v_7'], we have [v_5',v_7']_C_2= [v_5',v_7'], and, as in the previous paragraph, if z_i ∉ [v_5',p_6,v_7'] for all 1 ≤ i ≤ 4, then (Q_4) ≤(K_2) - ([v_5',v_7',p_6]). On the other hand, we have |p_6-p_7| > s, and the length of the corresponding height of [p_5,p_6,p_7] is greater than 0.1 by the definition of p_5. Thus, ([v_5',v_7',p_6])=([p_5,p_6,p_7])/√(s)^2 > 0.1 √(s), implying that the inequality (Q_4) ≥(Q) cannot hold if s is sufficiently large; a contradiction. Consequently, we may assume that some z_i, say z_3, is an element of [v_5',p_6,v_7'].
We obtain similarly that if s is sufficiently large, some z_i, say z_1, is contained in the triangle [v_7”,p_1,v_2”], where v_7” and v_2” are the images of p_7 and p_2, respectively, under the homothety with center p_1 and ratio 1/√(s). These observations, the consideration in Step 1, and the inequality (Q_4) ≥(Q) yield that as s →∞, we have z_1 → p_1, z_3 → p_6 and z_4 ∈ [v_1,p_7,v_6], and min{ | z_2 - p_2|, |z_2-p_5| }→ 0, implying that in this case (Q_4) →(Q). This shows that if s is sufficiently large, then (H')+(K_2) > 2 (Q_4).
Before proceeding to the final step, we make two important observations that we are going to use. Here, by C^+_2 and C^-_2, we denote the parts of (C_2) contained in the closed half planes { x ≥ 0} and { x ≤ 0}, respectively.
(1) A straightforward modification of the construction in Step 2 yields, for any n ≥ 4, the existence of some C_n ∈_o and a C_n-convex disk K_n such that â_n-1^C_n(K_n) + â_n+1^C_n(K_n) > 2 â_n^C_n(K_n).
(2) To guarantee the required inequalities in Steps 1 and 2, we used the properties of the arcs of C_2 entirely contained in C^+_2 or C^-_2. Thus, if C_2' is an o-symmetric plane convex body containing C^+_2 and C^-_2 in its boundary, then we have
â_3^C_2'(K_2) + â_5^C_2'(K_2) > 2 â_4^C_2'(K_2).
We combine these two observations in the following remark.
For any n ≥ 4, there is some C_n ∈_o and a C_n-convex disk K_n such that if any C_n' ∈_o contains C_n^+ and C_n^- in its boundary, where by C^+_n and C^-_n, we denote the parts of (C_n) contained in the closed half planes { x ≥ 0} and { x ≤ 0}, respectively, then K_n is C_n'-convex, and
â_n-1^C_n'(K_n) + â_n+1^C_n'(K_n) > 2 â_n^C_n'(K_n).
Step 3.
Now we prove Theorem <ref>. Let n ≥ 4. Recall that ^n_a denotes the set of those elements C of _o such that for any C-convex disk K, we have â_n-1^C(K) + â_n+1^C(K) ≤ 2 â_n^C(K), and let 𝒰^n_a = _o ∖^n_a denote its complement.
Observe that by Lemma <ref>, 𝒰^n_a is open. We show that it is everywhere dense in _o.
Let C be an arbitrary element of _o and let ε > 0. Note that for any nondegenerate linear transformation h : ^2 →^2, K is C-convex if and only if h(K) is h(C)-convex, and for any n ≥ 4, if K is C-convex, then â_n^C(K) = â_n^h(C)(h(K)).
Thus, without loss of generality, we may assume that there are vertical supporting lines of C meeting (C) at some points ± p of the x-axis. We choose our notation such that p is on the positive half of the axis.
Consider the convex disk C_n ∈_o in Remark <ref>. Let us define the nondegenerate linear transformation h_λ, μ : ^2 →^2 by h_λ,μ(x,y)=(λ x, μ y). Then, choosing suitable sufficiently small values μ, λ > 0, there is a translate C^+ of h_λ,μ(C^+_n) and an o-symmetric convex disk C' containing C^+ in its boundary such that C^+ ⊂ (C+ ε B^2) ∖ C and C ⊂ C'. Then C' ∩ (C+ ε B^2) ∈_o contains translates of h_λ,μ(C^+_n) and h_λ,μ(C^-_n) in its boundary, the Hausdorff distance of C and C' is at most ε, and, setting K'=h_λ,μ(K_n), by Remark <ref> we have
â_n-1^C'(K') + â_n+1^C'(K') > 2 â_n^C'(K').
Thus, 𝒰^n_a is everywhere dense, which immediately yields that ⋂_n=4^∞𝒰^n_a is residual, implying Theorem <ref>.
§ REMARKS AND QUESTIONS
For C ∈_o, K ∈ and positive integer n ≥ 3, let
P̅_n^C(K) = inf{_C(Q) : Q is a convex n- gon circumscribed about K };
p̅_n^C(K) = sup{_C(Q) : Q is a convex n- gon inscribed in K }.
As we have observed in the introduction, it is known <cit.> that for any C ∈_o and K ∈, the sequences {P̅_n^C(K) } and {p̅_n^C(K) } are convex and concave, respectively. Our approach yields a new proof of these statements by applying Theorem <ref> for λ C, where λ→∞.
Applying Theorem <ref> for λ C with λ→∞, we obtain the following.
Let C ∈_o, K ∈, n ≥ 3 and k ≥ 2. Assume that k is a divisor of n and both K and C have k-fold rotational symmetry. Then there is a convex n-gon Q^P circumscribed about K with _C(Q^P)= P̅_n^C(K) such that Q^P has k-fold rotational symmetry. Similarly, there is a convex n-gon Q^p inscribed in K which has k-fold rotational symmetry, and _C(Q^p)= p̅_n^C(K).
In the remaining part of the paper, we denote the set [1,∞) ∪{∞} by [1,∞].
Let p,q ∈ [1,∞] satisfy the equation 1/p + 1/q = 1. For any K, L ∈, G. Fejes Tóth <cit.> introduced
the weighted area deviation of K,L with weights p,q as the quantity ^p,q(K,L)=p (K ∖ L) + q (L ∖ K).
He proved that if for any K ∈, a̅_K^C(n,p,q) denotes the minimal weighted area deviation of K and an arbitrary convex n-gon, then the sequence {a̅_K^C(n,p,q) } is convex. Based on this idea, we introduce the following quantity.
Let p,q ∈ [1,∞] satisfy the equation 1/p + 1/q = 1, and let C ∈_0, K ∈_0.
We call the quantity
_C^p,q(K,L) = p ( _C((K) ∖(L))- _C((L) ∩ K) ) +
+ q ( _C((L) ∖(K)) - _C ((K) ∩ L) )
the weighted C-perimeter deviation of K,L with weights p,q. Here we note that by convexity,
_C((K) ∖(L)) ≥_C((L) ∩ K) and _C((L) ∖(K)) ≥_C ((K) ∩ L), with equality if and only if K ⊆ L and L ⊆ K, respectively. Let p̅_K^C(n,p,q) denote the minimal C-perimeter deviation of K and an arbitrary convex n-gon. We remark that if K is C-convex, by replacing the convex n-gons in the definitions of a̅_K^C(n,p,q) and p̅_K^C(n,p,q) with C-n-gons, we may analogously define the quantities â_K^C(n,p,q) and p̂_K^C(n,p,q), respectively.
This leads to the following problems.
Prove or disprove that for any p,q ∈ [1,∞ ] with 1/p + 1/q = 1, C ∈_o and K ∈, the sequence {p̅_K^C(n,p,q) } is convex.
Prove or disprove that for any p,q ∈ [1,∞ ] with 1/p + 1/q = 1, C ∈_o and C-convex disk K ∈, the sequence {p̂_K^C(n,p,q) } is convex. Does the same hold for {â_K^C(n,p,q) } if C is the Euclidean unit disk?
Before our last problem, we remark that â_K^C(n,1, ∞) = (K) - â_K^C(n) and â_K^C(n,∞,1) = Â_K^C(n)-(K).
Is there a value p_0 ∈ (1,∞) such that for any p with p_0 < p ≤∞ and q satisfying 1/p + 1/q = 1, for any C ∈_o and C-convex disk K ∈, the sequence {â_K^C(n,p,q) } is convex?
Bambah R.P. Bambah and C.A. Rogers, Covering the plane with convex sets, J. London Math. Soc. 27 (1952), 304-314.
BCC2006 K. Bezdek, R. Connelly and B. Csikós, On the perimeter of the intersection of congruent disks, Beiträge Algebra Geom. 47 (2006), 53-62.
BL23 K. Bezdek and Z. Lángi, From the separable Tammes problem to extremal distributions of great circles in the unit sphere, Discrete Comput. Geom., DOI: 10.1007/s00454-023-00509-w
BLNP K. Bezdek, Z. Lángi, M. Naszódi and P. Papez, Ball-polyhedra, Discrete Comput. Geom. 38 (2007), 201-230.
ChDT R. Chernov, K, Drach and K. Tatarko, A sausage body is a unique solution for a reverse isoperimetric problem, Adv. Math. 353 (2019), 431-445.
Dowker C.H. Dowker, On minimum circumscribed polygons, Bull. Amer. Math. Soc. 50 (1944), 120-122.
Eggleston H.G. Eggleston, Approximation to plane convex curves. (I) Dowker-type theorems, Proc. London Math. Soc. (3) 7 (1957), 351-377.
GFT G. Fejes Tóth, On a Dowker-type theorem of Eggleston, Acta Math. Sci. Hungar. 29 (1977), 131-148.
GFTandLFT G. Fejes Tóth and L. Fejes Tóth, Remark on a paper of C. H. Dowker, Periodica Math. Hungar. 3 (1973), 271-274.
TF2015 G. Fejes Tóth and F. Fodor, Dowker-type theorems for hyperconvex discs, Period. Math. Hungar. 70 (2015), 131-144.
LFTSzeged L. Fejes Tóth, Some packing and covering theorems, Acta Sci. Math. (Szeged) 12/A (1950), 62-67.
LFTperim L. Fejes Tóth, Remarks on polygon theorems of Dowker, Mat. Lapok 6 (1955), 176-179 (Hungarian).
regfig L. Fejes Tóth, Regular Figures, Macmillan, New York, 1964.
HSTV H. Huang, B.A. Slomka, T. Tkocz and B. Vritsiou, Improved bounds for Hadwiger’s covering problem via thin-shell estimates, J. European Math. Soc. 24 (2022), 1431–1448.
JMR T. Jahn, H. Martini, and C. Richter, Ball convex bodies in Minkowski spaces, Pacific J. Math. 289(2) (2017), 287–316.
LNT2013 Z. Lángi, M. Naszódi and I. Talata, Ball and spindle convexity with respect to a convex body, Aequationes Math. 85 (2013), 41-67.
MM22 A. Marynych and I. Molchanov, Facial structure of strongly convex sets generated by random samples, Adv. Math. 395 (2022), 108086.
Mayer A.E. Mayer, Eine Überkonvexität, Math. Z. 39 (1935), 511-531.
MSW H. Martini, K. Swanepoel and G. Weiss, The geometry of Minkowski spaces - a survey. Part I, Expo. Math. 19 (2001), 97-142 .
Molnar J. Molnár, On inscribed and circumscribed polygons of convex regions, Mat. Lapok 6 (1955), 210-218 (Hungarian).
Prosanov R. Prosanov, On a relation between packing and covering densities of convex bodies, Discrete Comput. Geom. 65 (2021), 1028–1037.
Thompson A.C. Thompson, Minkowski geometry, Encyclopedia of Mathematics and Its Applications 63, Cambridge University Press, New York, USA, 1996.
Vincensini P. Vincensini, Sur les figures superconvexes planes, Bull. Soc. Math. France 64 (1936), 197-208.
|
http://arxiv.org/abs/2307.04433v1 | 20230710091541 | Holographic Gubser-Rocha model does not capture all the transport anomalies of strange metals | [
"Yongjun Ahn",
"Matteo Baggioli",
"Hyun-Sik Jeong",
"Keun-Young Kim"
] | cond-mat.str-el | [
"cond-mat.str-el",
"hep-th"
] |
|
http://arxiv.org/abs/2307.05181v1 | 20230711113351 | Signatures of Ultralight Bosons in Compact Binary Inspiral and Outspiral | [
"Yan Cao",
"Yong Tang"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-ph"
] |
^aSchool of Astronomy and Space Sciences, University of Chinese Academy of Sciences (UCAS), Beijing 100049, China
^bSchool of Fundamental Physics and Mathematical Sciences,
Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China
^cInternational Center for Theoretical Physics Asia-Pacific, Beijing/Hangzhou, China
^dNational Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
Ultralight bosons are well-motivated particles from various physical and cosmological theories, and can be spontaneously produced during the superradiant process, forming a dense hydrogen-like cloud around the spinning black hole. After the growth saturates, the cloud slowly depletes its mass through gravitational-wave emission. In this work we study the orbit dynamics of a binary system containing such a gravitational atom saturated in various spin-0,1,2 superradiant states, taking into account both the effects of dynamical friction and the cloud mass depletion. We estimate the significance of mass depletion, finding that although dynamical friction could dominate the inspiral phase, it typically does not affect the outspiral phase driven by the mass depletion. Focusing on the large orbit radius, we investigate the condition to observe the outspiral, and the detectability of the cloud via pulsar-timing signal in the case of black hole-pulsar binary.
Signatures of Ultralight Bosons in Compact Binary Inspiral & Outspiral
Yong Tang^a,b,c,d
August 12, 2023
======================================================================
§ INTRODUCTION
Ultralight bosons are well-motivated particles in various theories beyond the Standard Model and can be good candidates for dark matter (DM) <cit.>. Their non-gravitational couplings to normal matter are generally predicted to be extremely weak, so that experimental and astrophysical searches for these direct couplings can be rather difficult, and they typically rely on the assumption of the bosons' background abundance. The black hole (BH) superradiance (SR) <cit.>, however, provides a natural astrophysical mechanism to produce these bosons solely from their minimal coupling to gravity. Due to the rotational superradiant instability in the Kerr background, macroscopic condensates of free spin-0,1,2 bosons can spontaneously develop around their host BHs by extracting their energy and angular momentum. The observational signatures of the resulting cloud-BH systems, so-called gravitational atoms (GAs), provide promising ways of detecting these ultralight degrees of freedom <cit.>.
If a GA is part of a binary system, further interesting phenomenology arises already in the perturbative regime, such as the orbit-cloud resonances <cit.>, dynamical friction (DF) or ionization <cit.>, cloud-induced suppression of the SR instabilities <cit.> and the cloud-induced periastron precession <cit.>. In discussions of these effects the gravity of the cloud is usually neglected (an exception being <cit.>, which studies the orbital dynamical friction on a small companion at relatively small distance from the cloud in the scalar SR ground state), and the cloud mass is usually assumed to be not much smaller than its initial saturated value (an exception being <cit.>). However, as first pointed out in <cit.> for the scalar GA, if the cloud mass is included in the orbit dynamics, the intrinsic mass depletion of the cloud (DC) due to its gravitational-wave (GW) emission affects the orbit evolution in an opposite manner to the other dissipative effects, i.e., it tends to make the binary outspiral, and this effect is actually important at large radius. Recently, the cloud mass depletion has also been considered in <cit.> for the scalar cloud (albeit using a different sign for the effect of mass depletion), and in <cit.> implicitly for the case of a relativistic vector cloud in the SR ground state, but neglecting all other cloud-induced dissipations. A binary system with time-varying mass was treated in <cit.>. Besides the SR clouds, there are proposals to search for the anomalous orbit evolution due to mass variation arising, e.g., from enhanced BH evaporation <cit.> and the accretion of background DM onto the BH <cit.>.
In this work, we present a systematic model for the GA+companion system, describing various spin-0,1,2 SR states in a unified way. The focus is to study the interplay between the binary GW emission, the dynamical friction and the cloud mass depletion, and to compare the situations for GAs saturated in different SR states. The possibilities of outspiral also have implications for the secular evolution of such systems, and for whether the binary could undergo fine and hyperfine resonances (taking place also at large radius). Finally, the cloud depletion may already leave imprints on the orbit evolution that are directly observable via GW and pulsar timing measurements, even in the absence of resonance events and dramatic mode-mixings. Indeed, we find that for scalar and vector atoms in their SR ground states, there is observable parameter space that is largely independent of the dynamical friction, though the inclusion of DF enlarges the observable regions.
The structure of this paper is as follows: in Sect. <ref> we review the properties of an isolated GA and formulate the binary model; in Sect. <ref> we discuss the orbit evolutions quantitatively; and in Sect. <ref> we investigate the detectability of the cloud from orbit phase measurements. Finally, we summarize our results and discuss some possible future directions in Sect. <ref>. Throughout our discussion, unless stated otherwise, we use natural units with ħ=G=c=1.
§ GRAVITATIONAL ATOM IN BINARY
§.§ Gravitational Atom
First we briefly summarize the main properties of non-relativistic superradiant clouds around a Kerr BH. For a real bosonic field Φ, far away from a central mass M, the wave function Ψ defined by Φ=1/√(2μ)(Ψ e^-iμ t+c.c.) (and Φ=1/√(μ)Ψ e^-iμ t if Φ is complex, but we shall focus on real fields) satisfies the Schrödinger equation
i∂_tΨ=-1/2μ∇^2Ψ-α/rΨ ,
where μ is the mass of the boson and α≡GMμ/ħ c=Mμ the gravitational fine structure constant. For scalar fields Ψ=ψ, for Proca fields [Ψ]_i=ψ_i and for spin-2 tensor fields [Ψ]_ij=ψ_ij. This is the same as the Schrödinger equation for the hydrogen atom (for each field component), with well-known bound-state solutions; if the central body is a Kerr BH, these hydrogenic states can be spontaneously populated by rotational superradiance and a GA is formed. In this work we focus on GAs with α≪ 1 (hence the Bohr radius r_c=M/α^2 ≫ M), for which this non-relativistic Newtonian description is appropriate. The mass density of the cloud (same for both real and complex Φ fields) is given by ρ=M_cTr(Ψ^†Ψ), where we choose the normalisation ∫ d^3x ρ=M_c and M_c is the total mass of the cloud. For convenience we also define β≡ M_c/M.
The cloud is generally a superposition of all bound atomic levels, |Ψ⟩=∑_ic_i|Ψ_i⟩. Then, using an orthonormal basis ⟨Ψ_i|Ψ_i'⟩=δ_ii', we have ∑_i|c_i|^2=1. However, the modes are not static; in the case of a single occupied mode this time dependence can be absorbed into M_c(t). For multiple modes this is not feasible, and it is more convenient to track the evolution of the individual c_i.
The eigenstates are labeled by the quantum numbers n,l,j,m: the principal, orbital angular momentum, total angular momentum and azimuthal quantum numbers, respectively (for the scalar GA, j=l, so we write n,l,m), corresponding to an orthonormalized wave function |nljm⟩ = Ψ_nljm(t,𝐫) (the detailed forms are listed in Appendix <ref>). For a spin-s field, the quantum numbers satisfy n≥ 1, l∈ [0,n-1], j∈ [l-s,l+s], m∈[-j,j]. The real part of the energy level ω≡ω_R+iω_I is given by μ(1-α^2/2n^2+𝒪(α^4)). Crucially, ω also contains an imaginary part, and superradiant growth can occur only if ω_I>0, i.e., ω_R/m<Ω_H≡χ/[2M(1+√(1-χ^2))], which demands a large enough BH spin χ. Starting with a sufficiently fast-spinning bare black hole, the superradiant growth is expected to be dominated by the most unstable mode, which is the 211 state for the scalar GA, the 1011 state for the vector GA and the 1022 state for the spin-2 GA [The situation is a little more complicated for the spin-2 atom <cit.>: the 1022 and 2111 states grow simultaneously, yet by the time the 1022 state saturates, with Ω_H spinning down to ω_R/2, the 2111 state has been completely reabsorbed. For comparison, we shall also include the possibility of a saturated 2111 state. Also, for α∼𝒪(0.1), the fastest growing spin-2 state is a non-hydrogenic dipole mode with m=1 <cit.>, but here we are interested in the regime α≪ 1.]. After the instability saturates, the cloud slowly decays through its intrinsic gravitational-wave emission, until the growth of the next unstable mode <cit.>, which is the 322 state for the scalar GA and the 2122 state for the vector GA.
ρ(𝐫)=M_c/r_c^3g(x,θ) ,
with x≡ r/r_c, and
P_gw,c≈β^2 p(α) ,
where the dimensionless function g(x,θ) and p(α) are state-dependent. We use the latest accurate polynomial fits of p(α) provided in <cit.> for scalar and vector states, and the analytical approximation for tensor states with α≪ 1 calculated in <cit.>. In physical units,
ρ(x)=g(x,θ)β/0.1(α/0.1)^6(M_⊙/M)^2× 3.46× 10^34 GeV/cm^3.
The function g(x,π/2) for the various SR states is plotted in Fig. <ref>, and the complete expressions are listed in Appendix <ref>.
μ= 1.3× 10^-11 eV×α/0.1M_⊙/M.
Neglecting the change of the black hole mass, we have Ṁ_c=2ω_I M_c=M_c/τ_I during the superradiant growth. Thus the cloud mass grows exponentially on an instability timescale τ_I≡ 1/(2ω_I) (note this is a decelerating process since ω_I∝χ in the Newtonian limit). When the mass change of the cloud is dominated by GW emission, -Ṁ_c=P_gw,c∝ M_c^2, the cloud mass decays according to
M_c(t)=M_c,0/1+t-t_0/τ_gw ,
where τ_gw≡M_c,0/P_gw,c=M/(β_0 p(α)) is the mass depletion timescale. For α≪ 1 we have τ_gw≫τ_I, so the GW emission can be neglected during the superradiant growth, and the cloud mass after the superradiant growth (of a mode with azimuthal quantum number m) is <cit.>
M_f=m^3-√(m^6-16 m^2 ω_R^2(m M_i-ω_R J_i)^2)/8 ω_R^2(m M_i-ω_R J_i) ,
and
M_c,0=M_i- M_f, J_f=J_i-m/ω_R(M_i-M_f) ,
where M_i, J_i=M_i^2χ_i are the initial BH mass and spin, respectively. For χ_i≈ 1, M_c≈α/m M_i and the saturated BH spin a_f≈4α/m. The spin can be further extracted by the next growing mode m'. Then the saturated values become M_c≈4(m'-m)/mm'^2α^2 M_i and a_f≈4α/m'.
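For orientation, the saturated values above are easily evaluated numerically (a short Python sketch of our own, using the leading-order ω_R≈μ(1-α^2/2n^2) and units G=c=M_i=1):

import numpy as np

def saturated_cloud(alpha, chi_i, m, n, M_i=1.0):
    # cloud mass and final BH spin after saturation of an m-mode, Eqs. above
    mu = alpha / M_i
    wR = mu * (1.0 - alpha**2 / (2.0 * n**2))
    J_i = chi_i * M_i**2
    D = m * M_i - wR * J_i
    M_f = (m**3 - np.sqrt(m**6 - 16.0 * m**2 * wR**2 * D**2)) / (8.0 * wR**2 * D)
    M_c0 = M_i - M_f
    J_f = J_i - (m / wR) * M_c0
    return M_c0, J_f / M_f**2

M_c0, chi_f = saturated_cloud(alpha=0.1, chi_i=0.99, m=1, n=2)   # scalar 211
print(M_c0, chi_f)   # M_c0 ~ 0.07 = O(alpha), chi_f ~ 0.36, roughly 4*alpha/m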
§.§ Binary Orbit Dynamics
Now we consider the situation when the saturated GA belongs to a binary system[In principle, the companion can also carry its own environment, but we neglect this since the companion is assumed to be small.], and contrast the various possible effects induced by the cloud. We mainly focus on Keplerian equatorial circular orbits (for a brief discussion of inclined and elliptical orbits see Appendix <ref>) and assume a large orbit radius so that the cloud's tidal distortion is completely negligible. As we shall see, the binary motion can still receive significant modifications due to the presence of the cloud.
We take the BH mass M and the companion's mass M_*≡ qM to be constant; the Newtonian orbit energy and angular momentum are given by
E_orb=-(M+M̃_c)M_*/2r ,
L_orb=√(((M+M̃_c)M_*)^2 r/(M+M̃_c+M_*)) ,
where M̃_c(𝐫) is the effective cloud mass experienced by the companion (the detailed definition is given in Appendix <ref>). In the following we shall take the limit M+M̃_c≈ M; to restore M̃_c one needs only the replacements M→ M+M̃_c and q→ qM/(M+M̃_c). However, such corrections remain small and do not affect the main results. The orbit evolves according to
-Ė_orb=P_gw+P_others ,
where P_gw is the binary GW radiation power:
P_gw(x)=32/5α^10 (1+q)q^2/x^5 ,
(the correction to this power due to cloud depletion is negligible, see Appendix <ref>) and P_others is the contribution from extra dissipation channels. In the Newtonian order, the orbit evolution can be written as
ẋ=-2/qMα^2P(x) x^2 ,
where P is the net effective power, which crucially also includes a contribution
P_DC=qα^2/[2(1+q)x] dM̃_c/dt ,
from the mass change of the system due to the cloud's GW emission (which can be approximated as an isotropic mass loss of the host BH, see Appendix. <ref>). Hence in the present case,
P_DC=-(M̃_c/M_c) qα^2/[2(1+q)x] P_gw,c ,
and
P=P_gw+P_others+P_DC .
For P_others, we examine the following effects.
§.§.§ Mode-Mixing and Dynamical Friction
In the presence of a companion body, there can be “global” exchange of angular momentum between the cloud and the orbit mediated by the companion's gravitational potential. In the perturbative regime, the companion's gravitational influence is fully captured by a potential V_*(t,𝐫) in the Schrödinger equation of the bosonic field <cit.>. The resulting cloud evolution due to the non-zero level-mixing H_ab=⟨Ψ_a|V_*|Ψ_b⟩ backreacts on the orbit dynamics and leads to rich phenomenology. There are two types of mixing: the mixing between bound states, and the mixing of a bound state with the infinite continuum states. The former is responsible for the resonance effects occurring at a discrete set of orbit frequencies Ω=Δω_R/Δ m <cit.> and for the modifications of the cloud's superradiant instabilities <cit.>, while the latter leads to a continuous process, the so-called “ionization" of the bound state <cit.>.
When the binary orbit frequency is off-resonance, the effect from bound-state mixing is expected to be unimportant <cit.>. In <cit.> it has been argued that ionization is actually the manifestation of dynamical friction in the GA system. Since the ionization for higher-spin fields has not yet been calculated, in this work we still use the model of <cit.> to estimate the consequences of DF. In this DF model, a test body traveling in a uniform non-relativistic scalar field background with relative velocity V experiences a gravitational drag force:
𝐅_DF=-4π M_*^2ρ(𝐫)/V^3C_Λ(ξ,μ Vr_Λ)𝐕 .
In the present case, 𝐕 is the companion's velocity relative to the cloud, V=|v ∓m/μ r|=
α| √((1+q)x)∓ m|x^-1 (the plus/minus sign corresponds to a counter-rotating/co-rotating orbit); for large radius it is dominated by the orbit velocity v, and ξ≡GM_*μ/ħ V≈q√(x)/√(1+q). The uncertainty of DF lies mainly in the estimation of C_Λ, which in the present problem depends solely on the orbit radius (for a circular orbit the DF force is also expected to have a radial component <cit.>; it is however irrelevant to the orbit dissipation). In this model it is given by <cit.>
C_Λ(y)=Cin(2 y)+sin(2 y)/(2 y)-1 ,
for ξ≪ 1. Following <cit.> we choose the IR regularization scale r_Λ to be the cloud size measured by r_97=x_97r_c for orbit radius x>x_97, hence y=μ vr_Λ=√((1+q)/x) x_97.
P_DF=-𝐅_𝐃𝐅·𝐯
=4π q^2M^2/α√(1+q)ρ(x)C_Λ√(x)≡q^2α^5β/√(1+q)𝒫(x) .
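For concreteness, a small Python sketch of this friction model (our own illustration) for the scalar |211⟩ profile, with g(x,π/2)=x^2e^-x/(64π) from the Appendix, Cin(z)=γ+ln z-Ci(z), and x_97 obtained from the enclosed-mass condition of the n=2 radial density:

import numpy as np
from scipy.special import sici, gammainc
from scipy.optimize import brentq

def C_Lambda(y):
    # C_Lambda(y) = Cin(2y) + sin(2y)/(2y) - 1, Cin(z) = gamma + ln z - Ci(z)
    _, ci = sici(2.0 * y)
    return (np.euler_gamma + np.log(2.0 * y) - ci) + np.sin(2.0 * y) / (2.0 * y) - 1.0

# enclosed mass fraction of r^2 R_21^2 ~ x^4 e^{-x} is the regularized
# lower incomplete gamma P(5, x); solve P(5, x97) = 0.97
x97 = brentq(lambda x: gammainc(5, x) - 0.97, 1.0, 50.0)

def P_DF_scaled(x, q=1e-3):
    # dimensionless P_DF * sqrt(1+q)/(q^2 alpha^5 beta) for the 211 cloud,
    # i.e. 4*pi*g(x, pi/2)*C_Lambda*sqrt(x) = x^2.5 e^{-x} C_Lambda/16
    y = np.sqrt((1.0 + q) / x) * x97
    return x**2.5 * np.exp(-x) * C_Lambda(y) / 16.0

print("x97 =", x97)              # ~ 10 in units of the Bohr radius
for x in (5.0, 10.0, 20.0):
    print(x, P_DF_scaled(x))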
Since the non-relativistic Coulomb scattering problem (based on which the DF above is derived <cit.>) is the same for each component of the wave function of a higher-spin field, the result can be generalized simply with ρ=M_cTr(Ψ^†Ψ).
Strikingly, we find that for the scalar cloud this estimation (with V≈ v) agrees well (in overall trend and magnitude) with the ionization power calculated in <cit.>, see Fig. <ref>. Indeed, the scaling form of the ionization power is the same as Eq. (<ref>) for V≈ v if q≪ 1 (the result only changes slightly at small radius after including the cloud velocity in the DF model), but the exact reason for this matching is unclear (in particular why the regularization scales x and x_97 match the counter- and co-rotating ionization, respectively). Nevertheless, this demonstrates that the DF and ionization models are indeed compatible, though the DF model tends to overestimate the orbit dissipation (especially for the co-rotating orbit) at smaller radius and lacks the discontinuous features. For higher-spin fields, the ionization power has not yet been calculated. Since the difference lies mainly in the angular mixing, we expect a similar result.
§.§.§ Accretion
If the companion is a BH, besides friction, an additional drag force arises due to its accretion of the ambient cloud. In a uniform background of an ultralight scalar field, the force due to accretion is 𝐅_acc=-Ṁ𝐕 with Ṁ≡σρ V. For V>2π M_*μ the absorption cross-section can be approximated <cit.> as σ=A/V, where A∼ 16 π M_*^2 is the area of the BH horizon (see also <cit.>), while for V<2π M_*μ the result is σ=32π^2M_*^3μ/V^2. The effective powers due to accretion in the two regimes are (assuming a non-spinning BH)
P_acc=16π q^2M^2α^2ρ(x) x^-1 ,
and
P_acc=32π^2q^3M^2α^2 ρ(x)x^-1/2 ,
Both estimates of the accretion power are strongly suppressed relative to the dynamical friction, as
P_acc/P_DF<α^3/C_Λ .
and the effect from the cloud mass loss due to accretion is even smaller (suppressed relative to P_acc by q^2). Although there are currently no quantitative computations of the accretion rate for higher-spin massive bosonic fields, we expect the result to be of the same order of magnitude. Hence we shall neglect the companion's mass accretion in the following.
§ BINARY EVOLUTION
In this section we analyse the binary orbit evolution under DC and DF. The GA is approximately rigid provided that it is off-resonance and the companion's perturbation is small, V_*/(-α/r)∼ q(x_97/x)^3≪ 1, so besides extreme-mass-ratio systems with q≪ 1, this model can be applied to binaries with larger mass ratios so long as x is sufficiently large <cit.>. Since the innermost stable circular orbit (ISCO) radius x_isco=6α^2 of the host BH is deep inside the cloud, where the binary may be subject to strong mode-mixing or even non-perturbative effects, our discussion is restricted to the phase of orbit evolution at large radius; specifically we take x>10.
The evolution of a circular orbit is given by Eq. (<ref>), where the power function P(x) depends solely on β, α and q, so the BH mass M only affects the overall time scale[This is the case even if the time-dependent depleted value of β is used, since the depletion time is also proportional to M.].
f =Ω/π=[(1+q)^1/2α^3/π M]x^-3/2≡κ x^-3/2
=(1+q)^1/2M_⊙/M(α/0.1)^3(10/x)^3/2× 2 Hz.
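The numerical prefactor in the last line can be checked directly (a two-line Python sketch; GM_⊙/c^3 ≈ 4.925×10^-6 s):

import numpy as np

GMsun_c3 = 4.925e-6   # GM_sun/c^3 in seconds
f = lambda M_Msun, alpha, x, q=0.0: \
    np.sqrt(1.0 + q) * alpha**3 / (np.pi * M_Msun * GMsun_c3) * x**-1.5
print(f(1.0, 0.1, 10.0))   # ~ 2 Hz, as quoted above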
From Eq. (<ref>),
ḟ=M^-5/3[3(1+q)^1 / 3/π^2 / 3 q] P(x) f^1 / 3 .
The deviation of P from P_gw could then be observed in the binary GW signal or through pulsar timing, if the companion happens to be a pulsar. A characteristic measure of the frequency change is the braking index, which in the case of a circular orbit can be written directly in terms of the effective power:
n_b≡ΩΩ̈/Ω̇^2=5/3-2/3 xẍ/ẋ^2=5/3-2/3 (x P'+2 P)/P .
For P=P_gw,0, n_b=11/3, while for P=P_DC and assuming an approximately constant mass depletion rate, n_b=1. Another useful measure is the overall GW dephasing
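Both limits follow from the power-law form of P; a one-line symbolic check (our own sympy illustration) for P∝ x^k:

import sympy as sp

k = sp.symbols('k')
x = sp.symbols('x', positive=True)
P = x**k
n_b = sp.simplify(sp.Rational(5, 3)
                  - sp.Rational(2, 3) * (x * sp.diff(P, x) + 2 * P) / P)
print(n_b)               # 1/3 - 2*k/3
print(n_b.subs(k, -5))   # 11/3: pure binary GW emission, P ~ x^-5
print(n_b.subs(k, -1))   # 1:    pure cloud depletion, P_DC ~ 1/x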
ΔΦ (t)=Φ(t)-Φ_GR(t)=2π∫_0^t dt' [f(t')-f_GR(t')] ,
where [0,t] is the time span of an observation, and f_GR(t) is given by the vacuum evolution.
§.§ Early Inspiral
For the companion to inspiral, the combined dissipation power from binary GW emission and the DF should overwhelm the negative power of cloud mass depletion, which is the case for a sufficiently small orbit radius or cloud mass. Typically the DF power (although weakened by P_DC) strongly dominate over P_GW for small radius with x>1 if the cloud mass is not extremely small, which could lead to a considerable amount of GW dephasing and a shorter merger time. The situation at larger radius is however state-dependent (see Fig. <ref> for examples of P(x)), among the six states we considered, for scalar 211, 322 and vector 2122 states there is an intermediate P_GW-domination spanning a broad range of radius (during which the DF is negligible), followed by a transition to P_DC-domination at even larger radius, while for the other states of vector and tensor atom (having much larger depletion rates) the region of P_GW-domination is negligible. Also, the power ratio (for q≪ 1):
P_DF/P_gw∼β x^5 𝒫(x)/α^5 , P_DC/P_gw∼β^2 p(α) x^4/(α^8 q) ,
shows that P_DC is enhanced for smaller α and q, but suppressed for smaller β, relative to DF, while P_DC and P_DF are both β- and α-suppressed compared with P_gw.
§.§ Outspiral
Since P_DC always dominates at sufficiently large radius, there is a critical radius beyond which the companion outspirals. The critical radii around GAs in various SR states, computed with the full power model, are shown in Fig. <ref>. It is seen that for the scalar 211, 322 and vector 2122 states, the critical radius is completely fixed by the balance between P_gw and P_DC, such that
x_crit=[64/5q(1+q)^2α^8/p(α)β^2]^1/4 .
For states with large p(α) (vector 1011, tensor 1022 and tensor 2111), the critical radius is enlarged compared with the result without DF, since Eq. (<ref>) would place it at smaller radius where the orbit dissipation is stronger. For the same reason, this enlargement is stronger for smaller mass ratio q and larger cloud mass β. For small enough β, the critical radius is still given by Eq. (<ref>).
The transition to P_DC-domination turns out to be rather sharp: for initially x>x_crit (or even for x<x_crit before P_DF becomes important, e.g., for the scalar GA in the 211 and 322 states), the power is well approximated by P_gw+P_DC, and the resulting orbit evolution is
ẋ = -Ax^-3+B/(1+t/τ_gw)^2x ,
A ≡ 64/5 α^8 (1+q)q/M , B ≡β^2 p(α)/[M(1 + q)].
With the initial condition x(0)=x_0, this equation admits an analytical solution :
x^4(t)= e^-4 B τ_gw^2/t+τ_gw{16 A B τ_gw^2 [Ei(4 B τ_gw^2/t+τ_gw)-Ei(4 B τ_gw)].
+e^4 B τ_gw(4 A τ_gw+x_0^4)}-4 A (t+τ_gw) ,
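This closed form is straightforward to verify against a direct integration of Eq. (<ref>) (a Python sketch with illustrative dimensionless parameters of our own choosing):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import expi

A, B, tau, x0 = 1.0e-4, 2.0e-3, 5.0e3, 3.0    # illustrative values only

def x_analytic(t):
    u = 4.0 * B * tau**2 / (t + tau)
    x4 = np.exp(-u) * (16.0 * A * B * tau**2 * (expi(u) - expi(4.0 * B * tau))
                       + np.exp(4.0 * B * tau) * (4.0 * A * tau + x0**4)) \
         - 4.0 * A * (t + tau)
    return x4**0.25

sol = solve_ivp(lambda t, x: -A * x**-3.0 + B * x / (1.0 + t / tau)**2,
                (0.0, 2.0e3), [x0], rtol=1e-10, atol=1e-12, dense_output=True)
for t in (0.0, 5.0e2, 1.0e3, 2.0e3):
    print(t, x_analytic(t), sol.sol(t)[0])    # the two agree closely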
describing both inspiral and outspiral, where Ei(x)=-∫_-x^∞ dz e^-z/z is the exponential integral. On timescales much shorter than τ_gw, β is approximately constant and the solution simplifies to
x(t)=[(x_0^4-A/B)e^4Bt+A/B]^1/4 .
The corresponding GW phase is given by
Φ(t)=2π∫_0^t dt'f(t')= -4πκ/(3B) e^-3Bt'/2 (x_0^4-A/B)^-3/8 _2F_1(3/8,3/8;11/8;-A e^-4 B t'/(B x_0^4-A)) |^t_0 ,
where κ≡(1+q)^1/2α^3/π M. If the orbit evolution is purely driven by cloud depletion, this simplifies to
Φ(t)= -4πκ/(3 B x_0^3/2)(e^-3 B t/2-1) ,
while for an ordinary binary inspiral with B=0, the phase is Φ_GR(t)=4πκ[ x_0^5/2-(x_0^4-4 A t)^5/8]/(5A).
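The hypergeometric expression can be checked against a direct quadrature of 2π∫ f dt' (an mpmath sketch, again with illustrative parameters of our own):

import mpmath as mp

A, B, x0, kappa = mp.mpf('1e-4'), mp.mpf('2e-3'), mp.mpf('3'), mp.mpf('1')

def F(s):
    # antiderivative appearing in the expression above
    z = -A * mp.exp(-4 * B * s) / (B * x0**4 - A)
    return (-4 * mp.pi * kappa / (3 * B)
            * mp.hyp2f1(mp.mpf(3) / 8, mp.mpf(3) / 8, mp.mpf(11) / 8, z)
            * mp.exp(-3 * B * s / 2) * (x0**4 - A / B)**(-mp.mpf(3) / 8))

def Phi_quad(t):
    x = lambda s: ((x0**4 - A / B) * mp.exp(4 * B * s) + A / B)**(mp.mpf(1) / 4)
    return 2 * mp.pi * kappa * mp.quad(lambda s: x(s)**(-mp.mpf(3) / 2), [0, t])

t = mp.mpf(200)
print(F(t) - F(0))    # closed form
print(Phi_quad(t))    # direct quadrature; the two agree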
Constraint on Resonances. For outspiral, we see that x(t)<x_0e^Bt, so the cumulative orbit radius change satisfies Δ x<x_0 (e^Bt-1). Since B≈β/τ_gw, this means roughly that the fractional orbit radius change after a time τ_gw cannot be larger than β (using the full solution, we find this is actually a good estimate for the maximum value of Δ x/x attainable during the outspiral), so a circular binary during such an outspiral would undergo little frequency change. This also implies that a resonance event is unlikely to take place during the outspiral. For example, the resonance between the scalar 211 and 21-1 states, a hyperfine transition, is at radius
x_*=[144(1+q)/χ^2]^1/3α^-2.
For this transition to happen we require at least that x_*<x_crit, which translates into a maximum cloud mass before the resonance
β_max=(64/5)^1/2 (144)^-2/3χ^4/3(1+q)^1/3q^1/2α^8 p^-1/2.
For q=10^-3 and a=4α, β_max=1.3× 10^-3 if α=0.1 while β_max=5× 10^-6 if α=0.01.
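These numbers are easy to reproduce at the order-of-magnitude level (a Python sketch; for p(α) we insert the leading-order flat-space estimate p ≈ (484+9π^2)/23040 α^14 for the scalar 211 mode, which is only our stand-in for the accurate fits used in the text, so modest offsets from the quoted values are expected):

import numpy as np

def p211(alpha):
    # leading-order GW depletion power of the scalar 211 cloud (P_gw,c = beta^2 p);
    # a stand-in for the accurate polynomial fits cited in the text
    return (484.0 + 9.0 * np.pi**2) / 23040.0 * alpha**14

def beta_max(alpha, q, chi):
    return ((64.0 / 5.0)**0.5 * 144.0**(-2.0 / 3.0) * chi**(4.0 / 3.0)
            * (1.0 + q)**(1.0 / 3.0) * np.sqrt(q) * alpha**8
            / np.sqrt(p211(alpha)))

for alpha in (0.1, 0.01):
    print(alpha, beta_max(alpha, q=1e-3, chi=4.0 * alpha))
# ~ 8e-4 and ~ 4e-6, versus the 1.3e-3 and 5e-6 quoted with the accurate p(alpha)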
Similar constraints can be put on the other possible transitions. The leading quadrupole transitions for the scalar 322 state are the Bohr transitions (defined by n'≠ n) to 200 and 100, the fine transition (defined by n'=n, l'≠ l) to 300, and the hyperfine transition (defined by n'=n, l'=l, m'≠ m) to 320; for the vector 1011 state there are only Bohr transitions, to 321-1, 322-1, 323-1 and 3233; similarly for the tensor 1022 state, to 3200, 3210, 3220, 3230, 3240 and 3244; for the vector 2122 state, the Bohr transitions to 3100, 4320, 3110, 4330, 3120, 4340, 4344 and the hyperfine transitions to 2100, 2110, 2120 are possible. As pointed out in <cit.>, the resonant orbit frequency for a Bohr transition is ∼𝒪(μα^2), corresponding to the orbit radius x∼𝒪((1+q)^1/3), which might invalidate the perturbative model of the GA, so we consider only the fine and hyperfine transitions at larger radius, which are depicted in Fig. <ref>. β_max decreases with decreasing mass ratio q, and for transitions with Δ n=Δ l=Δ j=0 also with decreasing α.
In this discussion we have neglected the possible corrections to ω_I of the SR state due to the companion. It was shown in <cit.> that below a certain orbit radius the effective value of ω_I changes sign, hence the cloud gets re-absorbed. If this happens outside the resonance radius, the cloud may largely deplete well before the resonance. Therefore, such effects lead to additional constraints on the resonant events, independent of the cloud mass (but state-dependent, and generally depending on α, q and M). For outspiral, the radius should be large enough so that this does not happen; otherwise the backreaction on the orbit from the cloud re-absorption might prevent the outspiral in the first place.
Secular Evolution
Once the companion outspirals, it will not return until a sufficient depletion of the cloud. To track the long-term orbit evolution we must inspect the full solution Eq. (<ref>). For concreteness we consider the GA in the scalar 211 state, with results shown in Fig. <ref>. Indeed, for small mass ratio the effect of cloud mass depletion is enhanced, slowing down the inspiral and, for large enough cloud mass, driving the companion to outspiral. For such large orbit radius the timescale of orbit evolution can be long enough that it is even possible for the 322 state to have a considerable growth, which roughly happens within the instability timescale τ_I^(322). However, since the critical outspiral radius for the 322 state is much larger than that of the 211 state (see Fig. <ref>), the growth of the 322 state would not help to continue the outspiral.
§ OBSERVABILITY
We proceed to assess the detectability of the cloud-induced DC and DF effects (namely P-P_gw) on the orbit phase evolution. We focus on the direct observation of outspiral and on the detection threshold in the case of a BH-pulsar binary. From Eq. (<ref>) it can be seen that for a BH with M>10^4M_⊙, the signal frequency for radius x>10 and α∼𝒪(0.1) typically falls below the observation window (10^-4,10^-1) Hz of the space-borne gravitational wave interferometers. If the host BH of the cloud is lighter, however, the effects of DF and DC may be observed directly in the sensitive band of GW detectors, once such an event is detected.
For a very small mass ratio, q≪ 1, such as an intermediate mass ratio (10^-4<q<10^-2) or extreme mass ratio (q<10^-4) binary system, P_DF/P_gw is insensitive to q, while P_DC is enhanced by q^-1. For mass ratio as large as q∼ 1, our model is valid only for x ≫ 1, where DF is expected to be unimportant. In an ordinary binary, the companion can be a stellar-mass BH with M_*=10∼ 100M_⊙, but we can also consider the possibility of the companion being a very light primordial black hole (PBH). Note that if the mass of the PBH is too small, its own mass loss through Hawking evaporation may also need to be taken into account <cit.>.
There are no fundamental restrictions on the values of α and β except that β should be smaller than its initial saturated value, but α cannot be too small if we require a finite formation time of the cloud, perhaps within the binary lifetime. For that we shall impose τ_I<10^6 yr, which does not lead to substantive constraints, since both the DF and DC effects are more significant for larger α. On the other hand, the maximum value of α gets constrained by requiring a sufficiently long cloud depletion time: τ_gw should be much larger than τ_I. Moreover, in <cit.> the bound τ_gw(β_0=α/m)>10^8 yr was adopted to guarantee the stability of the cloud on astrophysical time scales. But the mass depletion could continue for a time much longer than τ_gw, just leading to a smaller existing cloud mass. In the following we consider a relaxed bound with τ_gw(β_0=β)>10^4 yr, demonstrating how the detectable parameter space is squeezed by the requirement of a minimum depletion time scale.
§.§ Observing the Outspiral
As shown in the last section, the critical radius of outspiral is typically given by Eq. (<ref>). Requiring that f(x_crit)>10^-4 Hz, we can estimate the mass range of the bosonic particle for which the outspiral can be directly observed by space-borne GW detectors such as LISA and Taiji. The results are presented in Fig. <ref>, where it is seen that the allowed range of μ shrinks for larger BH mass M; the lower bound corresponds to f(x_crit)=10^-4 Hz and the upper bound comes from the constraints on x_crit and τ_gw, the latter being more stringent. The parameter spaces of the vector 1011 and tensor 1022 states largely coincide, since they have similar depletion rates. Actually,
x_crit^4∝α^8 τ_gw^2 p(α)/M^2 = β^-2p^-1α^8.
Assuming a fixed value of the BH mass: for given τ_gw and α, the critical radius increases with p; but for given x_crit and β, α decreases with p. Then, since the depletion rate of the vector and tensor SR ground states at the same α is larger than that of the scalar, the minimum mass of a vector or tensor boson supporting outspiral at given orbit frequency and host BH mass is lighter. On the other hand, for given τ_gw and β, the mass of the vector or tensor boson is also lighter than that of the scalar boson.
§.§ Pulsar Timing Detection
If the companion is a pulsar, pulsar timing provides an accurate way to measure the orbit evolution <cit.>. For definiteness, we choose the benchmark value M_*=1.6M_⊙.
|Φ(t)-Φ_GR(t)| > 4πσ(t)
with the uncertainty of phase measurement approximately given by
σ(t)=1/√(⌈ t / 1 day⌉)T_p/min(t_obs, t)
where ⌈⌉ is the ceiling function, t_obs the observation time per day and T_p the pulse period of the pulsar. The time span of observation t cannot exceed the duty time of the radio telescope, and it would also be shorter than the merger time if the companion undergoes an inspiral.
Assuming a large enough orbit evolution time scale, we then only expect to observe the quadratic phase change ΔΦ≈ 2π[f_0t+1/2(ḟ)_0 t^2], where for a short enough time interval the frequency evolution can be approximated as
f(t)-f_0≈ (ḟ)_0t=
M^-5/3[3(1+q)^1 / 3/π^2 / 3 q] P(x_0) f_0^1 / 3t .
The detectable regions in (M,μ) for the scalar 211 and vector 1011 states are presented in Fig. <ref> for a fiducial set of parameters, where we show the results with and without DF. The upper bound comes from the constraint on τ_gw. The DF turns out to be relevant also in this low-frequency regime, and it enhances the inspiral signal, since for outspiral the DF only reduces ΔΦ. The other part of the detectable region does not depend on the DF, and could correspond to either outspiral or inspiral (for the scalar GA). For P≈ P_DC+P_gw, the condition of detection is explicitly given by
β^2 p/[M(1+q)]>8σ/(3t^2f_0) .
Then, for outspiral, again the minimum detectable boson mass in the vector case is lighter than in the scalar case for given β and BH mass, and a more massive BH can probe lighter bosons. For higher orbit frequency this threshold is lower, but the minimum boson mass is more severely constrained by the assumption x_0>10.
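As a rough illustration of this threshold (a Python sketch with benchmark numbers of our own choosing; GM_⊙/c^3 ≈ 4.925×10^-6 s, and p(α) again approximated by the leading-order scalar 211 value):

import numpy as np

GMsun_c3 = 4.925e-6            # seconds
day, yr = 86400.0, 3.156e7

def sigma(t, T_p, t_obs):
    # phase-measurement uncertainty model of Eq. above
    return T_p / min(t_obs, t) / np.sqrt(np.ceil(t / day))

def beta_min(M_Msun, alpha, f0, t, T_p=0.01, t_obs=3600.0, q=1.6e-3):
    # minimum detectable beta from beta^2 p/[M(1+q)] > 8 sigma/(3 t^2 f0)
    p = (484.0 + 9.0 * np.pi**2) / 23040.0 * alpha**14   # scalar 211, leading order
    M = M_Msun * GMsun_c3
    return np.sqrt(8.0 * sigma(t, T_p, t_obs) / (3.0 * t**2 * f0)
                   * M * (1.0 + q) / p)

# a 10 ms pulsar (M_* = 1.6 M_sun) observed 1 h per day for 1 yr around a
# 10^3 M_sun BH, at f0 ~ 4e-4 Hz (corresponding to x ~ 30 for alpha = 0.1)
print(beta_min(M_Msun=1e3, alpha=0.1, f0=3.9e-4, t=yr))   # ~ 4e-3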
§ CONCLUSIONS AND OUTLOOKS
We have investigated the orbit evolution of a binary system containing a gravitational atom, based on a more comprehensive model for the binary's off-resonant orbit dynamics taking into account both the dynamical friction and the intrinsic mass depletion of the boson cloud. In this modeling, GAs of spin-0,1,2 bosons are treated on an equal footing, which enables us to contrast the differences in the binary's effective dissipative power for the different SR states.
One of our main motivations is to quantify the importance of the cloud mass depletion at large radius, which is relevant for the early phase of binary evolution and also for the resonance events. We find that DF typically dominates at small radius, but unless the GA is in the vector or tensor SR ground state with large enough cloud mass, the critical radius of outspiral is determined by the balance between cloud mass depletion and binary GW emission. By requiring the fine and hyperfine resonances to happen only during inspiral, upper limits are imposed on the cloud mass before the resonance, which can already be small for typical model parameters of the scalar 211 to 21-1 transition. We present an exact solution for the orbit evolution under P_DC+P_gw, showing that for a binary system with a small mass ratio, even a small cloud mass could significantly slow down the inspiral process or even make the companion outspiral. This implies that not only the cloud mass depletion itself, but also the DC effects on the orbit evolution, should be considered for a reasonable estimation of the cloud mass.
Compared with the scalar SR state, the depletion rates of the vector and tensor SR states are considerably larger for given α, leading to a smaller critical radius for outspiral. But the maximum value of α is also more severely constrained by the depletion time consideration. We estimate the detectability of outspiral in the sensitive band of space-borne GW detectors and also the pulsar timing detection threshold for a general process, finding that the minimum detectable boson mass is lighter for a heavier host BH, and that without DF the detectable vector/tensor boson mass is lighter than in the scalar case. The inclusion of DF leads to additional detectable parameter space corresponding to inspiral.
Our discussion has focused on circular orbits in the GA's equatorial plane, but as elaborated in Appendix <ref>, this model can be extended straightforwardly to more general orbits, and the results are expected to be similar, especially when the DF is unimportant. For an elliptical orbit, the critical semi-major axis of outspiral is given by Eq. (<ref>) with the circular orbit radius r replaced by a and enlarged by f^1/4(e), where f(e) is the Peters-Mathews factor, as we have checked that the secular evolution of the eccentricity remains very close to the vacuum case. We have also neglected the possible matter accretion of the cloud's host BH from the background environment <cit.>, which would compete with the mass loss due to cloud mass depletion but may also lead to a larger cloud mass <cit.>.
Besides the GW emitted by the binary orbital motion, the GW emitted by the cloud, typically at higher frequency and even larger amplitude (see Fig. <ref>), may also be detectable <cit.>. A joint detection of the binary GW and the cloud GW would be a distinctive signature of such systems, and would help to break the degeneracy with the mass changes predicted in other scenarios.
Finally, even for the mass change within the GA there are still other possibilities, e.g., for complex bosonic fields the cloud may have negligible GW emission <cit.>, while self-interaction could lead to additional mass loss <cit.>, which could enrich the phenomenology discussed above.
In view of these, we hope to return to this subject in the future with a systematic investigation of the general orbits taking into account the environmental accretion effects, the cloud ionization and possibly other mass loss mechanisms, and the effects on the GW waveforms <cit.>.
§ ACKNOWLEDGEMENT
This work is supported by National Key Research and Development Program of China (Grant No.2021YFC2201901), and National Natural Science Foundation of China under Grants No.12147103 and 11851302.
§ WAVE FUNCTION OF GRAVITATIONAL ATOM
The orthonormalized wave functions of states |nljm⟩ are given by
Ψ_nljm(t,𝐫)=R_nl(r)𝐘_ljm(θ,ϕ)e^-i(ω_nljm-μ)t ,
where R_nl(r)≡ r_c^3/2R_nl(x) (with x≡ r/r_c) is the hydrogenic radial function:
R_n l(x)= √((2/n)^3 (n-l-1) !/2 n(n+l) !)(2 x/n)^l
e^-x/n L_n-l-1^2 l+1(2 x/n) ,
For scalar fields, the angular function of mode |nlm⟩ is given by the spherical harmonics Y_lm(θ,ϕ). For vector field, the angular function of mode |nljm⟩ is given by the purely-orbital vector spherical harmonics <cit.>:
Y_ljm^i=∑_m_l=-l^l∑_m_s=-1^1⟨(1, m_s),(l, m_l) | j, m⟩ξ_i^m_s Y_l m_l(θ, ϕ) ,
where ξ^0=ẑ, ξ^± 1=∓1/√(2)(x̂± i ŷ) is a set of orthonormal polarization basis, and ⟨(1, m_s),(l, m_l) | j, m⟩ are the Clebsch-Gordan coefficients. For tensor fields, the angular function of mode |nljm⟩ is given by the purely-orbital spin-2 tensor spherical harmonics <cit.>:
Y_ljm^ik
=∑_m_l=-l^l∑_m_s=-2^2⟨(2, m_s),(l, m_l) | j, m⟩ t_ik^m_s Y_l m_l(θ, ϕ) ,
with t_ik^m_s=∑_m_1,m_2=-1^1⟨(1, m_1),(1, m_2) | 2, m_s⟩ξ_i^m_1ξ_k^m_2. The analytical results for the spectra ω_nl(j)m of the spin-0,1,2 GA (which are accurate for small α) can be found in <cit.>. The velocity of a Schrödinger field Ψ=|Ψ|e^is is given by 𝐮=1/μ∇ s=i/2μ |Ψ|^2(Ψ∇Ψ^*-Ψ^*∇Ψ). For a scalar state, s∝ mϕ, so 𝐮=m/(μ r sinθ) in the azimuthal direction. For vector and tensor states with azimuthal quantum number m, the field components Ψ_i∝ e^i m_iϕ generally have distinct values of m_i.
In the nonrelativistic Newtonian limit, the system can be described by the Lagrangian (neglecting self-gravity)
ℒ=
M_c/μTr[(i Ψ^* Ψ̇+c.c.)-1/2μ∇Ψ^* ·∇Ψ+α/r|Ψ|^2],
which leads to the Schrödinger equation for each field component. But the energy-momentum tensor should be obtained from the relativistic Lagrangian. The equation of motion for massive scalar, vector and spin-2 field read
□ϕ =μ^2ϕ ,
□ A_b-R_cbA^c =μ^2 A_b ,
□ H_ab+2R_acbdH^cd =μ^2 H_ab ,
with A^b_;b=0 and H^ab_;a=H^a_a=0; in the non-relativistic limit A^0,H^0b≈ 0. Each field component satisfies the Klein-Gordon equation in the flat background, hence the system behaves as a set of independent scalar fields. The nonrelativistic limit of the energy-momentum tensor is
T_ij=M_c/μTr[1/2μΨ_, iΨ_, j^*+c.c.] ,
T_i0=M_c/μTr[-i/2ΨΨ_, i^*+c.c.] .
The density profile is ρ=T_00=M_c Tr(ΨΨ^*)=M_c/r_c^3g, with g=R_nl^2(x)Tr(Y_ljmY^*_ljm). The cloud's scalar mass quadrupole moment is given by
Q_c=∫ d^3 r ρ(𝐫) r^2 P_2(cosθ)
For the scalar 211 state (the same as the vector 2122 and tensor 2133 states), g=x^2e^-xsin^2θ/(64π) and Q_c=-6M_cr_c^2. For the scalar 322 state, g=x^4 e^-2x/3sin^4θ/(26244π) and Q_c=-36M_cr_c^2. For the scalar 433 state, g=x^6 e^-x/2sin^6θ/(37748736π) and Q_c=-120M_cr_c^2. For the vector 1011 state (the same as the tensor 1022 state), g=e^-2x/π and Q_c=0. For the tensor 2111 state, g=(13-cos 2θ)x^2e^-x/(1280π) and Q_c=-(3/5)M_cr_c^2.
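The quadrupole moments quoted above can be reproduced numerically from the densities. The short sketch below (our own check, in units of M_c r_c^2) integrates Q_c/(M_c r_c^2) = ∫ g(x,θ) x^4 P_2(cosθ) dx dΩ for three representative states.

```python
import numpy as np
from scipy.integrate import dblquad

def Q_over_Mc_rc2(g):
    # outer variable: x in [0, inf); inner variable: theta in [0, pi]
    P2 = lambda th: 0.5 * (3.0 * np.cos(th)**2 - 1.0)
    val, _ = dblquad(lambda th, x: g(x, th) * x**4 * P2(th)
                     * 2.0 * np.pi * np.sin(th),
                     0.0, np.inf, 0.0, np.pi)
    return val

g_211   = lambda x, th: x**2 * np.exp(-x) * np.sin(th)**2 / (64*np.pi)
g_322   = lambda x, th: x**4 * np.exp(-2*x/3) * np.sin(th)**4 / (26244*np.pi)
g_t2111 = lambda x, th: (13 - np.cos(2*th)) * x**2 * np.exp(-x) / (1280*np.pi)

print(Q_over_Mc_rc2(g_211))    # -> -6.0
print(Q_over_Mc_rc2(g_322))    # -> -36.0
print(Q_over_Mc_rc2(g_t2111))  # -> -0.6
```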
§ CORRECTION TO M_C AND P_GW
The effect of mass depletion on the binary dynamics originates from the cloud's gravitational force on the companion. As a leading-order approximation we consider the cloud on a flat background. Due to axisymmetry, the Newtonian potential sourced by the cloud can be expanded in Legendre polynomials:
Φ(r, θ)=∑_n=0^∞Φ_n(r) P_n(cosθ) ≡M_c/r_cΦ(x,θ) ,
where
Φ_n(r)= -2π/((n+1/2) r^n+1)∫_0^r (r')^n+2ρ_n(r') d r' -2π r^n/(n+1/2)∫_r^∞ (r')^1-nρ_n(r') d r' ,
and
ρ_n(r)=(n+1 / 2) ∫_0^πρ(r, θ) P_n(cosθ) sinθ d θ .
Including the cloud's gravity, the binary orbit energy is
E=(1+q)M v^2/2+M_*Φ-M_*M/r ,
so we can define an effective cloud mass
M̃_c=-Φ r .
For θ=π/2 we write Φ(r,θ)=Φ(r); the results for several states are depicted in Fig. <ref>. It can be seen that the deviation of M̃_c from M_c is small. Interestingly, for non-spherical states M̃_c/M_c is not a monotonic function of radius.
Since M̃_c depends on radius, d/dt M̃_c=(M̃_c/M_c)Ṁ_c+(M̃_c/M_c)_,xẋ, but the latter contribution is even smaller since it is a second-order effect due to orbit evolution. Through the mass change of the cloud, the binary GW radiation power P_gw=1/5⃛I̅_ij⃛I̅_ij also receives a correction via the binary mass quadrupole I_ij=M_*/(1+q)x_ix_j with q=M_*/(M+M_c), but we have checked that it is completely negligible for any reasonable parameters.
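A hypothetical numerical sketch of the effective-mass construction for the 211 density: since sin^2θ=(2/3)(1-P_2(cosθ)), only the n=0 and n=2 multipoles contribute, so truncating the sum at n=2 is exact here. The code evaluates Φ(x,π/2) from the expansion above (in units G=M_c=r_c=1) and prints M̃_c/M_c=-Φ x, which approaches 1 at large radius.

```python
import numpy as np
from scipy.integrate import quad
from numpy.polynomial.legendre import Legendre

g_211 = lambda x, th: x**2 * np.exp(-x) * np.sin(th)**2 / (64*np.pi)

def rho_n(n, x):
    Pn = Legendre.basis(n)
    val, _ = quad(lambda th: g_211(x, th) * Pn(np.cos(th)) * np.sin(th),
                  0.0, np.pi)
    return (n + 0.5) * val

def Phi_n(n, x):
    inner, _ = quad(lambda xp: xp**(n + 2) * rho_n(n, xp), 0.0, x)
    outer, _ = quad(lambda xp: xp**(1 - n) * rho_n(n, xp), x, np.inf)
    return (-2*np.pi / ((n + 0.5) * x**(n + 1)) * inner
            - 2*np.pi * x**n / (n + 0.5) * outer)

for x in (2.0, 5.0, 10.0, 40.0):
    Phi_eq = sum(Phi_n(n, x) * Legendre.basis(n)(0.0) for n in (0, 2))
    print(x, -Phi_eq * x)   # effective mass M~_c/M_c; -> 1 for x >> 1
```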
§ GENERAL ORBIT
In this appendix we outline the Newtonian analysis for a general inclined elliptical orbit. In the untilted BH-centered frame (x,y,z), with the z axis parallel to the BH spin, the companion's coordinate position is (r,θ,ϕ); we denote its position in the tilted BH-centered frame (X,Y,Z), with the Z axis parallel to the orbit normal, by (r,i,φ), where i is the inclination of the orbit plane relative to the BH's equatorial plane and φ is the true anomaly. The two sets of coordinates are related by cosθ=sin i cosφ, sinθ=√(sin^2φ+cos^2 i cos^2φ), and tanϕ=tanφ/cos i. The cloud's density distribution along the orbit is then given by ρ(r,θ(φ)).
For elliptical orbit, the convenient parameterization is
r=a(1-e^2)/(1+ecosφ)=a(1-ecos z),
z-esin z=Ω t,
where a is the semi-major axis, e the eccentricity, and z∈ [0,2π] the eccentric anomaly. The orbital velocity, energy and angular momentum are given by (M_tot=M_1+M_2)
v =aΩ√((1+ecos z)/(1-ecos z)) ,
E =-M_1M_2/(2a) ,
L =√((M_1M_2)^2 a(1-e^2)/M_tot) ,
with the Kepler relation Ω=√(M_tot/a^3); the radial and azimuthal velocities are
v_r=√(M_tot/(a(1-e^2))) e sinφ,
v_φ=√(M_tot a(1-e^2))/r.
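For concreteness, here is a small sketch (ours) that solves Kepler's equation by Newton iteration and cross-checks the velocity decomposition above against the closed form v=aΩ√((1+e cos z)/(1-e cos z)).

```python
import numpy as np

def kepler(mean_anom, e, tol=1e-14):
    """Solve z - e sin z = mean_anom for the eccentric anomaly z (Newton)."""
    z = mean_anom + e * np.sin(mean_anom)
    for _ in range(50):
        dz = (z - e*np.sin(z) - mean_anom) / (1.0 - e*np.cos(z))
        z -= dz
        if abs(dz) < tol:
            break
    return z

Mtot, a, e = 1.0, 1.0, 0.6                  # G = 1 units
Omega = np.sqrt(Mtot / a**3)
t = 0.37 * 2*np.pi / Omega                  # an arbitrary epoch
z = kepler(Omega * t, e)
r = a * (1.0 - e*np.cos(z))
phi = 2*np.arctan2(np.sqrt(1+e)*np.sin(z/2), np.sqrt(1-e)*np.cos(z/2))
v_r   = np.sqrt(Mtot / (a*(1 - e**2))) * e*np.sin(phi)
v_phi = np.sqrt(Mtot * a*(1 - e**2)) / r
v     = a*Omega*np.sqrt((1 + e*np.cos(z)) / (1 - e*np.cos(z)))
print(np.hypot(v_r, v_phi), v)              # the two values agree
```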
The orbit evolution is given by
-Ė =P_gw+P_DF+P_DC ,
L̇ =(L̇)_gw+(L̇)_DF+(L̇)_DC ,
leading to the evolution of a and e. For a perturbing force (per unit mass) 𝐅=F_φ𝐞_φ+F_r𝐞_r+F_z𝐞_z (where 𝐞_z is perpendicular to the orbit plane) acting on the system,
Ė/M_tot =∫_0^Tdt/T 𝐯·𝐅 ,
L̇/M_tot =∫_0^Tdt/T 𝐫×𝐅 .
The secular evolution under 𝐅 is given by <cit.>
ȧ/a =2/Ω{e sinφ/(a √(1-e^2)) F_r+√(1-e^2)/r F_φ},
ė =√(1-e^2)/(a Ω){(cosφ+cos z) F_φ+sinφ F_r},
χ̇ =√(1-e^2)/(a e Ω){[1+r/(a (1-e^2))] sinφ F_φ-cosφ F_r}.
To see the effect of mass depletion, consider the average loss of binary potential energy due to Ṁ_1 <cit.>,
Ė = -∫_0^Tdt/T Ṁ_1 M_2/r.
If the mass depletion rate is constant, Ė = -Ṁ_1 M_2/(2a)+M_1M_2ȧ/(2a^2)=-Ṁ_1 M_2/a, which leads to ȧ=-(Ṁ_1/M_1)a. In this treatment, if both masses vary slowly with time, ȧ=-(Ṁ_1/M_1+Ṁ_2/M_2)a. A more careful treatment <cit.>, assuming the mass loss is isotropic (i.e., the mass change itself does not carry away linear momentum), shows that the mass change manifests as a force (per unit mass) 𝐅_DC=-(1/2)(Ṁ_tot/M_tot)𝐯, and leads instead to ȧ=-(Ṁ_tot/M_tot)a. For a circular orbit with Ṁ_2=0 this corresponds to the average loss rate of orbital angular momentum due to mass depletion (with M_2=M_*, M_1=M+M̃_c and q=M_2/M_1):
(L̇_orb)_DC= M_2^2/M_tot^3/2Ṁ_1√(r) ,
which is negligible only for q≪ 1. In comparison, if the orbit angular momentum is conserved during the process of mass change, the resulted effective power is
P(x)=qα^2/2(1+q)xṀ_1(1+2q)+Ṁ_2(1+2q^-1)/M_tot .
As seen from Eq. (<ref>), the isotropic mass change (averaged over time) affects neither the eccentricity evolution nor the periastron precession. Note that M̃_c has a (albeit weak) radius dependence; this should be taken into account for orbits with high eccentricity, but it is negligible for circular orbits.
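Both statements can be checked by orbit-averaging the secular equations above with the isotropic mass-change force 𝐅_DC=-(1/2)(Ṁ_tot/M_tot)𝐯. The sketch below (our own check; dt=(1-e cos z)dz/Ω is used as the time weight) recovers ⟨ȧ⟩/a=-Ṁ_tot/M_tot and ⟨ė⟩=0.

```python
import numpy as np

Mtot, a, e = 1.0, 1.0, 0.5
gamma = 1e-6                                # gamma = (1/2) Mdot_tot/M_tot
Omega = np.sqrt(Mtot / a**3)

z = np.linspace(0.0, 2*np.pi, 200001)
dz = z[1] - z[0]
w = (1.0 - e*np.cos(z)) / (2*np.pi)         # time weight dt/(T dz)

r = a * (1.0 - e*np.cos(z))
phi = 2*np.arctan2(np.sqrt(1+e)*np.sin(z/2), np.sqrt(1-e)*np.cos(z/2))
v_r   = np.sqrt(Mtot / (a*(1 - e**2))) * e*np.sin(phi)
v_phi = np.sqrt(Mtot * a*(1 - e**2)) / r
F_r, F_phi = -gamma*v_r, -gamma*v_phi       # F_DC = -(1/2)(Mdot/M) v

adot_a = (2/Omega) * (e*np.sin(phi) / (a*np.sqrt(1 - e**2)) * F_r
                      + np.sqrt(1 - e**2) / r * F_phi)
edot = (np.sqrt(1 - e**2) / (a*Omega)
        * ((np.cos(phi) + np.cos(z))*F_phi + np.sin(phi)*F_r))

print(np.sum(adot_a*w)*dz / (-2*gamma))     # -> 1: <adot>/a = -Mdot/M
print(np.sum(edot*w)*dz / (2*gamma))        # -> 0: <edot> = 0
```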
The dynamical friction also does not contribute to the periastron shift χ̇, while a non-zero mass quadrupole leads to precession:
χ̇_Q=3 Ω/4 a^2(1-e^2)^2Q_c/M_tot(1-3 cos ^2 i).
This is relevant, e.g., for the scalar 211/322, vector 2122 and tensor 2111 states; such precession can be used to search for GAs in elliptical binaries <cit.>, though for small radii the non-resonant mode-mixing effects should be taken into account simultaneously.
Finally, we discuss how to incorporate the mass depletion effects into the ionization and resonance processes. If we identify the DF with ionization, the orbit dynamics can be set up using the ionization model of <cit.>, but eccentricity and inclination introduce additional complexities <cit.>. For simplicity, we consider only an equatorial-plane circular orbit and the ionization of a single bound mode. The cloud mass changes as Ṁ_c=-P_gw,c-Ṁ_*-M_c∑_g[μ |η|^2/kΘ(k)]_g, where η is the mixing matrix element between the bound state being ionized and the continuum state k^(g), while the orbit angular momentum changes due to ionization according to (L̇_orb)_ion=-∑_g(m+g)[μ |η|^2/kΘ(k)]_g. Together with S_c=m M_c/μ and the angular momentum balance L̇_orb+ (Ṡ_c+m/μP_gw,c)=-P_gw/Ω+(L̇_orb)_ion+(L̇_orb)_DC (assuming the accretion process giving rise to Ṁ_* does not change the total angular momentum), the contribution to the effective power from ionization and Ṁ_* is then <cit.>
P_ion(x) = M_cΩ/μ∑_g g[μ |η|^2/kΘ(k)]_g
+MΩ/αṀ_* [(2+q)√(x)/2(1+q)^3/2∓ m] ,
where ± stands for co-rotating/counter-rotating orbits. The first term in the second line is the same as in Eq. (<ref>), while the second term comes from ionization. For small mass ratio and large orbit radius, the second line is nothing but Ṁ_* v^2; see also Sect. <ref>.
For the orbit evolution during a resonance (again restricting to an equatorial-plane circular orbit and neglecting accretion), angular momentum balance gives
ṙ=(ṙ)_others∓ 2(1+q)^1 / 2 q^-1 r^1 / 2 M^-3/2([Ṡ_c]_eff+J̇) .
Here (ṙ)_others includes all effects other than the cloud-orbit angular momentum exchange, J is the BH spin, and S_c=M_c/μ∑_i m_i|c_i|^2 is the cloud's angular momentum, evolving according to the mixing Hamiltonian i ċ_i=H_i j c_j. [Ṡ_c]_eff is Ṡ_c with the contribution from cloud depletion removed. Mass and angular momentum conservation of the SR process yield Ṁ+∑_i 2ω_I^(i)M_c|c_i|^2=0 and J̇+∑_i 2ω_I^(i)M_c/μm_i|c_i|^2=0. For hyperfine transitions the variation of the BH mass and spin can be important <cit.>. Due to its long time scale, the cloud mass depletion is not expected to play any role in the "quantum dynamics" of the c_i (which is mainly driven by the oscillating gravitational perturbation), but its orbital effect through P_DC may still be relevant, e.g., for floating orbits.
|
http://arxiv.org/abs/2307.04107v1 | 20230709062020 | Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time | [
"Chi-Yeh Chen"
] | cs.DS | [
"cs.DS"
] |
Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time
Chi-Yeh Chen
=========================================================================================================================================================
This paper focuses on the problem of coflow scheduling with precedence constraints in identical parallel networks, which is a well-known 𝒩𝒫-hard problem. Coflow is a relatively new network abstraction used to characterize communication patterns in data centers. Both flow-level scheduling and coflow-level scheduling problems are examined, with the key distinction being the scheduling granularity. The proposed algorithm effectively determines the scheduling order of coflows by employing the primal-dual method. When considering workload sizes and weights that are dependent on the network topology in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG). Additionally, when taking into account workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight. For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent. Moreover, when considering workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Scheduling algorithms, approximation algorithms, coflow, precedence constraints, datacenter network, identical parallel network.
§ INTRODUCTION
With the evolution of technology, a large volume of computational demands has become the norm. As personal computing resources are no longer sufficient, cloud computing has emerged as a solution for accessing significant computational resources. With the increasing demand, large-scale data centers have become essential components of cloud computing. In these data centers, the benefits of application-aware network scheduling have been proven, particularly for distributed applications with structured traffic patterns <cit.>. The widespread use of data-parallel computing applications such as MapReduce <cit.>, Hadoop <cit.>, Dryad <cit.>, and Spark <cit.> has led to a proliferation of related applications <cit.>.
In these data-parallel applications, tasks can be divided into multiple computational stages and communication stages, which are executed alternately. The computational stages generate a substantial amount of intermediate data (flows) that needs to be transmitted across various machines for further processing during the communication stages. Due to the large number of applications generating significant data transmission requirements, robust data transmission and scheduling capabilities are crucial for data centers. The overall communication pattern within the data center can be abstracted by coflow traffic, representing the interaction of flows between two sets of machines <cit.>.
A coflow refers to a set of interconnected flows, where the completion time of the entire group depends on the completion time of the last flow within the set <cit.>. Previous studies related to coflows <cit.> have primarily focused on the single-core model <cit.>. However, technological advancements have led to the emergence of data centers that operate on multiple parallel networks in order to improve efficiency <cit.>. One such architecture is the identical or heterogeneous parallel network, where multiple network cores function in parallel, providing combined bandwidth by simultaneously serving traffic.
This study addresses the problem of coflow scheduling with precedence constraints in identical parallel networks. The objective is to schedule these coflows in the parallel networks in a way that minimizes the weighted total completion time of coflows. We consider both flow-level scheduling and coflow-level scheduling. In the flow-level scheduling problem, flows within a coflow can be distributed across different network cores. Conversely, in the coflow-level scheduling problem, all flows within a coflow are required to be transmitted in the same network core. The key difference between these two problems lies in their scheduling granularity. The coflow-level scheduling problem, being a coarse-grained scheduling, can be quickly solved but yields relatively poorer results. On the other hand, the flow-level scheduling problem, being a fine-grained scheduling, takes more time to solve but produces superior scheduling results. It is worth noting that, although these two problems exhibit differences in time complexity when solved using linear programming, in the case of the flow-level scheduling problem using the primal-dual method, the decision of scheduling flows is transformed into the decision of scheduling coflows. This transformation leads to the solving time being equivalent to that of the coflow-level scheduling problem.
§.§ Related Work
The concept of coflow abstraction was initially introduced by Chowdhury and Stoica <cit.> to characterize communication patterns within data centers. The scheduling problem for coflows has been proven to be strongly 𝒩𝒫-hard, indicating the need for efficient approximation algorithms rather than exact solutions. The concurrent open shop problem reduces easily to coflow scheduling (it corresponds to instances in which only the diagonal elements of the demand matrix have values), and since it is 𝒩𝒫-hard to approximate the concurrent open shop problem within a factor of 2-ϵ <cit.>, the same hardness carries over to the coflow scheduling problem.
Since the proposal of the coflow abstraction, extensive research has been conducted on coflow scheduling <cit.>. Qiu et al. <cit.> presented the first deterministic polynomial-time approximation algorithm, with a ratio of 67/3. Subsequently, Ahmadi et al. <cit.> proved that the technique proposed by Qiu et al. <cit.> actually yields only a deterministic 76/3-approximation algorithm for coflow scheduling with release times.
Khuller et al. <cit.> also proposed an approximation algorithm for coflow scheduling with arbitrary release times, achieving a ratio of 12.
Recent research by Shafiee and Ghaderi <cit.> has resulted in an impressive approximation algorithm for the coflow scheduling problem, achieving an approximation ratio of 5. Additionally, Ahmadi et al. <cit.> have made significant contributions to this field by proposing a primal-dual algorithm that enhances the computational efficiency of coflow scheduling.
In the coflow scheduling problem within a heterogeneous parallel network, Huang et al. <cit.> introduced an O(m)-approximation algorithm, where m represents the number of network cores. On the other hand, Tian et al. <cit.> were the first to propose the problem of scheduling coflows of multi-stage jobs, and they provided an O(N)-approximation algorithm, where N represents the number of servers in the network. Furthermore, Shafiee and Ghaderi <cit.> proposed a polynomial-time algorithm that achieves an approximation ratio of O(χ̃log(N)/log(log(N))), where χ̃ denotes the maximum number of coflows in a job.
§.§ Our Contributions
This paper focuses on addressing the problem of coflow scheduling with precedence constraints in identical parallel networks and presents a range of algorithms and corresponding results. The specific contributions of this study are outlined below:
* When considering workload sizes and weights that are dependent on the network topology in the input instances, the proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG).
* When taking into account workload sizes that are topology-dependent, the proposed algorithm for flow-level scheduling problem achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight.
* For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent.
* When considering workload sizes that are topology-dependent, the algorithm for the coflow-level scheduling problem achieves an approximation ratio of O(Rmχ).
* In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ).
A summary of our theoretical findings is provided in Table <ref> where TDWS stands for topology-dependent workload sizes, while TDW stands for topology-dependent weights.
§.§ Organization
The structure of this paper is outlined as follows. In Section <ref>, an introduction is provided, covering fundamental notations and preliminary concepts that will be referenced in subsequent sections. Following that, the primary algorithms are presented in the following sections: Section <ref> provides an overview of the algorithm addressing the flow-level scheduling problem, while Section <ref> elaborates on the algorithm designed for the coflow-level scheduling problem. To address the scheduling problem for the coflows of multi-stage jobs, our algorithm is discussed in Section <ref>. In Section <ref>, a comparative analysis is conducted to evaluate the performance of our proposed algorithms in comparison to the previous algorithm. Lastly, in Section <ref>, our findings are summarized and meaningful conclusions are drawn.
§ NOTATION AND PRELIMINARIES
The identical parallel network consists of a collection of m non-blocking switches, each with dimensions of N × N. These switches form the infrastructure of the network, where N input links are connected to N source servers, and N output links are connected to N destination servers. These switches serve as practical and intuitive models for the network core. Network architectures such as Fat-tree or Clos <cit.> can be employed to construct networks that provide complete bisection bandwidth. In this configuration, each switch's i-th input port is connected to the i-th source server, and the j-th output port is connected to the j-th destination server. Consequently, each source server (or destination server) has m simultaneous uplinks (or downlinks), where each link may consist of multiple physical connections in the actual network topology <cit.>. Let ℐ denote the set of source servers, and 𝒥 denote the set of destination servers. The network core can be visualized as a bipartite graph, with ℐ on one side and 𝒥 on the other. For simplicity, we assume that all network cores are identical, and the links within each core have the same capacity or speed.
A coflow is a collection of independent flows, and the completion time of a coflow is determined by the completion time of the last flow in the set, making it a critical metric for evaluating the efficiency of data transfers. The demand matrix D^(k)=(d_i,j,k)_i,j=1^N represents the specific data transfer requirements within coflow k. Each entry d_i,j,k in the matrix corresponds to the size of the flow that needs to be transmitted from input i to output j within the coflow. In the context of identical network cores, the flow size can be interpreted as the transmission time, as all cores possess the same capacity or speed. This simplification allows for easier analysis and optimization of coflow scheduling algorithms. To facilitate efficient management and routing of flows, each flow is identified by a triple (i, j, k), where i represents the source node, j represents the destination node, and k corresponds to the coflow. This identification scheme enables precise tracking and control of individual flows within the parallel network.
Furthermore, we assume that flows are composed of discrete data units, resulting in integer sizes. For simplicity, we assume that all flows within a coflow are simultaneously initiated, as demonstrated in <cit.>.
This paper investigates the problem of coflow scheduling with release times and precedence constraints. The problem involves a set of coflows denoted by 𝒦, where coflow k is released into the system at time r_k. The completion time of coflow k, denoted as C_k, represents the time required for all its flows to finish processing. Each coflow k∈𝒦 is assigned a positive weight w_k. Let R be the ratio between the maximum weight and the minimum weight. The relationships between coflows can be modeled using a directed acyclic graph (DAG) G=(𝒦, E), where an arc (k', k)∈ E and k', k∈𝒦 indicate that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. This relationship is denoted as k'≺ k. The DAG has a coflow number of χ, which represents the length of the longest path in the DAG. The objective is to schedule coflows in an identical parallel network, considering the precedence constraints, in order to minimize the total weighted completion time of the coflows, denoted as ∑_k∈𝒦 w_kC_k. For clarity, different subscript symbols are used to represent different meanings of the same variables. Subscript i represents the index of the source (or input port), subscript j represents the index of the destination (or output port), and subscript k represents the index of the coflow. For instance, ℱ_i denotes the set of flows with source i, and ℱ_j represents the set of flows with destination j. The symbols and terminology used in this paper are summarized in Table <ref>.
§ APPROXIMATION ALGORITHM FOR THE FLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the flow-level scheduling problem, which allows for the transmission of different flows within a coflow through distinct network cores. We assume that coflows are transmitted at the flow level, ensuring that the data within a flow is allocated to the same core. We define ℱ_i as the collection of flows with source i, represented by ℱ_i={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ j∈𝒥}, and ℱ_j as the set of flows with destination j, given by ℱ_j={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ i∈ℐ}. For any subset S⊆ℱ_i (or S⊆ℱ_j), we define d(S)=∑_(i, j, k)∈ S d_i,j,k as the sum of data size over all flows in S and d^2(S)=∑_(i, j, k)∈ S d_i,j,k^2 as the sum of squares of data size over all flows in S. Additionally, we introduce the function f(S) as follows:
f(S) = (d(S)^2+ d^2(S))/(2m).
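In code, f(S) and the bound d(S)^2 ≤ 2m·f(S) used later in the analysis look as follows (a trivial sketch; the names are ours):

```python
def f(sizes, m):
    """f(S) = (d(S)^2 + d^2(S)) / (2m) for the flow sizes in S."""
    d1 = sum(sizes)                  # d(S): total size of the flows in S
    d2 = sum(s * s for s in sizes)   # d^2(S): sum of squared sizes
    return (d1*d1 + d2) / (2.0*m)

S, m = [3, 1, 4, 1, 5], 4
assert sum(S)**2 <= 2*m*f(S, m)      # d(S)^2 <= 2m f(S), used in the analysis
print(f(S, m))
```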
The flow-level scheduling problem can be formulated as a linear programming relaxation, which is expressed as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ C_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ r_k+d_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ C_k'+d_i,j,k, ∀ k, k'∈𝒦:k'≺ k,
∀ i∈ℐ, ∀ j∈𝒥
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ i∈ℐ, ∀ S⊆ℱ_i
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ j∈𝒥, ∀ S⊆ℱ_j
In the linear program (<ref>), the variable C_k represents the completion time of coflow k in the schedule, and C_i,j,k denotes the completion time of flow (i, j, k). Constraint (<ref>) specifies that the completion time of coflow k is bounded by the completion times of all its flows, ensuring that no flow finishes after the coflow. Constraint (<ref>) guarantees that the completion time of any flow (i, j, k) is at least its release time r_k plus the time required for its transmission. To capture the precedence constraints among coflows, constraint (<ref>) indicates that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. Constraints (<ref>) and (<ref>) introduce lower bounds on the completion time variables at the input and output ports, respectively.
We define L_i,S,k as the sum of the loads on input port i for coflow k in the set S. Similarly, L_j,S,k represents the sum of the loads on output port j for coflow k in the set S. To formulate the dual linear program, we have the following expressions:
L_i,S,k =∑_(i',j',k')∈ S|i'=i,k'=kd_i',j',k',
L_j,S,k =∑_(i',j',k')∈ S|j'=j,k'=kd_i',j',k'.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_k, ∀ k∈𝒦
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ,
∀ j∈𝒥
It is important to note that each flow (i, j, k) is associated with a dual variable α_i, j, k, and for every coflow k, there exists a corresponding constraint. Additionally, for any subset S ⊆ℱ_i (or S ⊆ℱ_j) of flows, there exists a dual variable β_i, S (or β_j, S). To facilitate the analysis and design of algorithms, we define γ_k', k as the sum of γ_k', i, j, k over all input ports i and output ports j in their respective sets ℐ and 𝒥:
γ_k', k=∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k.
Significantly, the cost of any feasible dual solution provides a lower bound on OPT, the cost of an optimal solution: if we obtain a feasible dual solution with a certain cost, the optimal total weighted completion time cannot be lower than that cost.
The primal-dual algorithm, depicted in Appendix <ref>, Algorithm <ref>, is inspired by the work of Davis et al. <cit.> and Ahmadi et al. <cit.>. It constructs a feasible schedule iteratively, progressing from right to left to determine the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration decides whether to increase the dual variables α, β or γ, guided by the dual linear programming (LP) formulation. The algorithm has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if Lμ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
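The full primal-dual procedure (including the γ updates that walk down chains of unscheduled successors) is given in the appendix. The following Python sketch is our own simplified reading of the rule just described, with precedence constraints omitted, to make the α/β decision concrete. Here load[p][k] aggregates coflow k's flow sizes on port p, for p ranging over all input and output ports, and kappa=1/2 matches the value used in the analysis.

```python
def primal_dual_order(w, r, load, m, kappa=0.5):
    """Simplified primal-dual ordering sketch (no precedence / gamma).
    w[k], r[k]: weight and release time of coflow k;
    load[p][k]: total size of coflow k's flows touching port p."""
    K = set(range(len(w)))
    wres = list(w)                 # residual dual slack of each coflow
    order = []                     # built from the last position backwards
    while K:
        # bottleneck port: largest remaining load over unscheduled coflows
        p = max(load, key=lambda q: sum(load[q][k] for k in K))
        Lp = sum(load[p][k] for k in K)
        kmax = max(K, key=lambda k: r[k])
        if r[kmax] > kappa * Lp / m:
            pick = kmax            # raise alpha: latest release goes last
        else:
            # raise beta on the bottleneck port: the coflow whose dual
            # constraint becomes tight first is scheduled last
            pick = min((k for k in K if load[p][k] > 0),
                       key=lambda k: wres[k] / load[p][k])
            theta = wres[pick] / load[p][pick]
            for k in K:
                wres[k] -= theta * load[p][k]
        order.append(pick)
        K.remove(pick)
    return order[::-1]             # earliest coflow first
```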
The flow-driven-list-scheduling algorithm, as depicted in Algorithm <ref>, leverages a list scheduling rule to determine the order of coflows to be scheduled. In order to provide a clear and consistent framework, we assume that the coflows have been pre-ordered based on the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. Thus, the coflows are scheduled sequentially in this predetermined order.
Within each coflow, the flows are scheduled based on a non-increasing order of their sizes, breaking ties arbitrarily. Specifically, for every flow (i, j, k), the algorithm identifies the least loaded network core, denoted as h^*, and assigns the flow (i, j, k) to this core.
The algorithm's steps involved in this assignment process are outlined in lines <ref>-<ref>.
A flow is deemed "ready" for scheduling only when all of its predecessors have been fully transmitted. The algorithm then proceeds to schedule all the flows that are both ready and have been released but remain incomplete. These scheduling steps, encapsulated in lines <ref>-<ref>, have been adapted from the work of Shafiee and Ghaderi <cit.>.
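A simplified, non-preemptive sketch of this rule (ours; the actual algorithm time-shares ports among released, ready flows, which we do not model) illustrates the least-loaded-core assignment:

```python
from collections import defaultdict

def flow_level_schedule(order, flows, r, preds, m):
    """flows[k]: list of (i, j, size); preds[k]: coflows that must finish
    before k; r[k]: release time.  Returns coflow completion times C[k]."""
    free_in = defaultdict(float)    # (h, i): time input port i on core h frees up
    free_out = defaultdict(float)   # (h, j): likewise for output ports
    C = {}
    for k in order:
        ready = max([r[k]] + [C[p] for p in preds[k]])
        C[k] = ready
        # within a coflow: flows in non-increasing order of size
        for (i, j, size) in sorted(flows[k], key=lambda fl: -fl[2]):
            # pick the core h* on which this flow finishes earliest
            h = min(range(m), key=lambda h: max(free_in[(h, i)],
                                                free_out[(h, j)], ready) + size)
            start = max(free_in[(h, i)], free_out[(h, j)], ready)
            finish = start + size
            free_in[(h, i)] = free_out[(h, j)] = finish
            C[k] = max(C[k], finish)
    return C
```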
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(χ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
Let S_k={1, 2, …, k} denote the set of the first k coflows. Furthermore, we define S_i,k as the set of flows from the first k coflows at input port i. Formally, S_i,k is defined as follows:
S_i,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ j∈𝒥}.
Similarly, S_j,k represents the set of flows from the first k coflows at output port j, defined as:
S_j,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ i∈ℐ}.
Let β_i,k=β_i,S_i,k and β_j,k=β_j,S_j,k. These variables capture the dual variables associated with the sets S_i,k and S_j,k.
Moreover, we introduce the notation μ_1(k) to denote the input port with the highest load in S_k, and μ_2(k) to represent the output port with the highest load in S_k. Recall that d(S) represents the sum of loads for all flows in a subset S. Therefore, d(S_i,k) corresponds to the total load of flows from the first k coflows at input port i, and d(S_j,k) corresponds to the total load of flows from the first k coflows at output port j.
Finally, let L_i,k=∑_j∈𝒥 d_i,j,k denote the total load of flows from coflow k at input port i, and L_j,k=∑_i∈ℐ d_i,j,k denote the total load of flows from coflow k at output port j.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_μ_1(k),k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_1(k),k)/m.
* For every set S_μ_2(k),k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k, r_k>κ· d(S_μ_1(k),k)/m.
* For every coflow k that has a nonzero α_1, μ_2(k), k, r_k>κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k or a nonzero α_1, μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that d(S)^2≤ 2m· f(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m+(1-2/m)C_k^*, where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ 1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k)+(1-2/m) max_i, j d_i,j,k
Now, let v_1v_2⋯ v_f be the longest path in the DAG ending at coflow k, where v_f=k. Every v_q precedes k in the permutation, so d(S_μ_1(v_q),v_q)≤ d(S_μ_1(k),k) and d(S_μ_2(v_q),v_q)≤ d(S_μ_2(k),k). Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f1/md(S_μ_1(v_q), v_q) + 1/md(S_μ_2(v_q),v_q) +(1-2/m) max_i, j d_i,j,v_q
≤ f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k) +∑_q=1^f(1-2/m) max_i, j d_i,j,v_q
≤ f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k)+ (1-2/m) C_k^*,
where the last step uses that the coflows on the path must be processed sequentially in any schedule, so ∑_q=1^fmax_i, j d_i,j,v_q≤ C_k^*.
When release times are taken into account, every coflow in S_k has been released by time max_k'≤ kr_k', which contributes the first term. This proves the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to order the coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n w_k· A +(1-2/m) ∑_k=1^n w_kC_k^*
where A=a·max_k'≤ kr_k'+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m. We have ∑_k=1^n w_k C_k^*=OPT. Now we focus on the first term ∑_k=1^n w_k· A. By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_k· A = ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k(a·r_k+2χ·r_k/κ)
≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_ℓ≤kr_ℓ+χ(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m)
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·d(S_μ_1(k'),k')/m + 2χ·d(S_μ_1(k),k)/m)
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kd(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kd(S_μ_1(k'),k')/m
= (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'd(S_i,k')d(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'(d(S_μ_1(k'),k'))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ)∑_i ∈ℐ∑_k=1^nβ_i,kf(S_μ_1(k),k)
= 2(a·κ+2χ)∑_k=1^nβ_μ_1(k),kf(S_μ_1(k),k)
≤ 2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+2-2/m for the flow-level scheduling problem with release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ+1)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+(4χ+1)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+(4χ+1)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+2-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+1-2/m for the flow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2· 2χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2· 2χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ 4χ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+4χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+4χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+1-2/m)· OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
We demonstrate the case of L_μ_1(r)>L_μ_2(r), while the other case of L_μ_1(r)≤ L_μ_2(r) can be obtained using the same approach, yielding the same result. If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0.
Suppose coflow p is replaced by coflow k through the adjustment of γ_k',k.
Let
B=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k,
B_p=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,p+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,p,
H=∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k',
H_p=∑_k'∈𝒦|(k',p)∈ Eγ_k',p-∑_k'∈𝒦|(p,k')∈ Eγ_p,k',
R'=w_k/w_p.
If coflow k undergoes the adjustment of the order by setting γ_k',k, then
H = w_k-B-(L_i,k/L_i,p)(w_p-B_p-H_p)
≤ w_k-B-w_p+B_p+H_p
≤ w_k-w_p+H_p
≤ w_k-w_p
= ((R'-1)/R')w_k
≤ ((R-1)/R)w_k
The inequalities (<ref>) and (<ref>) are due to L_i,p≤ L_i,k for all i ∈ℐ. The inequality (<ref>) is due to
H_p≤ 0. Based on Lemma <ref>, we know that ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
H ≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
According to lemma <ref>, we have
∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to lemma <ref>, we can derive result
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By employing analogous proof techniques to theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+R+1-2/m for the flow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+1-2/m for the flow-level scheduling problem without release times.
§ APPROXIMATION ALGORITHM FOR THE COFLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the coflow-level scheduling problem, in which all flows of a coflow must be transmitted through a single core. Recall that L_i,k=∑_j=1^Nd_i,j,k and L_j,k=∑_i=1^Nd_i,j,k, where L_i,k denotes the total load at source i for coflow k, and L_j,k the total load at destination j for coflow k.
Let
f_i(S) = (∑_k∈ S L_i,k^2+(∑_k∈ S L_i,k)^2)/(2m)
and
f_j(S) = (∑_k∈ S L_j,k^2+(∑_k∈ S L_j,k)^2)/(2m)
for any subset S⊆𝒦.
To address this problem, we propose a linear programming relaxation formulation as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ r_k+L_i,k, ∀ k∈𝒦, ∀ i∈ℐ
C_k≥ r_k+L_j,k, ∀ k∈𝒦, ∀ j∈𝒥
C_k≥ C_k'+L_ik, ∀ k, k'∈𝒦, ∀ i∈ℐ:
k'≺ k
C_k≥ C_k'+L_jk, ∀ k, k'∈𝒦, ∀ j∈𝒥:
k'≺ k
∑_k∈ SL_i,kC_k≥ f_i(S) ∀ i∈ℐ, ∀ S⊆𝒦
∑_k∈ SL_j,kC_k≥ f_j(S) ∀ j∈𝒥, ∀ S⊆𝒦
In the linear program (<ref>), the completion time C_k is defined for each coflow k in the schedule. Constraints (<ref>) and (<ref>) ensure that the completion time of any coflow k is greater than or equal to its release time r_k plus its load. To account for the precedence constraints among coflows, constraints (<ref>) and (<ref>) indicate that all flows of coflow k' must be completed before coflow k can be scheduled. Additionally, constraints (<ref>) and (<ref>) establish lower bounds for the completion time variable at the input and output ports, respectively.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k+L_i,k)
+∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k+L_j,k)
+∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐγ_k', i, k L_i,k
+ ∑_(k', k) ∈ E∑_j ∈𝒥γ_k', j, k L_j,k <ref>
s.t. ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k
+∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k
+∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k
+∑_(k',k)∈ E∑_i ∈ℐγ_k', i, k
+∑_(k',k)∈ E∑_j ∈𝒥γ_k', j, k
-∑_(k,k')∈ E∑_i ∈ℐγ_k, i, k'
-∑_(k,k')∈ E∑_j ∈𝒥γ_k, j, k'≤ w_k, ∀ k∈𝒦
α_i, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ
α_j, k≥ 0, ∀ k∈𝒦, ∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆𝒦
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆𝒦
γ_k', i, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ
γ_k', j, k≥ 0, ∀ (k', k)∈ E, ∀ j∈𝒥
Let γ_k', k=∑_i ∈ℐγ_k', i, k+∑_j ∈𝒥γ_k', j, k. Notice that for every coflow k, there exists two dual variables α_i, k and α_j, k, and there is a corresponding constraint. Additionally, for every subset of coflows S, there are two dual variables β_i, S and β_j, S. For the precedence constraints, there are two dual variables γ_k', k and γ_k, k'. Algorithm <ref> in Appendix <ref> presents the primal-dual algorithm which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
The coflow-driven-list-scheduling, as outlined in Algorithm <ref>, operates as follows. To ensure clarity and generality, we assume that the coflows are arranged in an order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. We schedule all the flows within each coflow iteratively, following the sequence provided by this list.
For each coflow k, we identify the network core h^* that can transmit coflow k in a manner that minimizes its completion time (lines <ref>-<ref>). Subsequently, we transmit all the flows allocated to network core h (lines <ref>-<ref>).
In summary, the coflow-driven-list-scheduling algorithm works by iteratively scheduling the flows within each coflow, following a predetermined order. It determines the optimal network core for transmitting each coflow to minimize their completion times, and then transmits the allocated flows for each core accordingly.
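A matching sketch of the coflow-level rule (ours, with the same caveats as the flow-level sketch above): all flows of coflow k are placed on the single core h^* that lets k finish earliest, given the per-core, per-port loads accumulated so far.

```python
from collections import defaultdict

def coflow_level_schedule(order, flows, r, preds, m):
    """flows[k]: list of (i, j, size); preds[k]: predecessor coflows of k."""
    free_in, free_out = defaultdict(float), defaultdict(float)
    C = {}
    def finish_on(h, li, lo, ready):
        # each port must serve its share of coflow k's data serially
        return max([max(free_in[(h, i)], ready) + li[i] for i in li]
                   + [max(free_out[(h, j)], ready) + lo[j] for j in lo])
    for k in order:
        ready = max([r[k]] + [C[p] for p in preds[k]])
        li, lo = defaultdict(float), defaultdict(float)
        for (i, j, s) in flows[k]:
            li[i] += s
            lo[j] += s
        h = min(range(m), key=lambda h: finish_on(h, li, lo, ready))
        C[k] = finish_on(h, li, lo, ready)
        for i in li:
            free_in[(h, i)] = max(free_in[(h, i)], ready) + li[i]
        for j in lo:
            free_out[(h, j)] = max(free_out[(h, j)], ready) + lo[j]
    return C
```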
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
We would like to emphasize that S_k={1, 2, …, k} represents the set of the first k coflows. We define β_i,k=β_i,S_k and β_j,k=β_j,S_k for convenience. Moreover, we define L_i(S_k)=∑_k'≤ k L_i, k' and L_j(S_k)=∑_k'≤ k L_j, k' to simplify the notation. Furthermore, let μ_1(k) denote the input port with the highest load among the coflows in S_k, and μ_2(k) denote the output port with the highest load among the coflows in S_k. Hence, we have L_μ_1(k)(S_k)=∑_k'≤ k L_μ_1(k), k' and L_μ_2(k)(S_k)=∑_k'≤ k L_μ_2(k), k'.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_1(k)(S_k)/m.
* For every set S_k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k, r_k>κ· L_μ_1(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_2(k), k, r_k>κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k or a nonzero α_μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that (∑_k∈ S L_i,k)^2≤ 2m· f_i(S) and (∑_k∈ S L_j,k)^2≤ 2m· f_j(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)), where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
Now, let v_1v_2⋯ v_f be the longest path in the DAG ending at coflow k, where v_f=k. Every v_q precedes k in the permutation, so L_μ_1(v_q)(S_v_q)≤ L_μ_1(k)(S_k) and L_μ_2(v_q)(S_v_q)≤ L_μ_2(k)(S_k). Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f L_μ_1(v_q)(S_v_q)+L_μ_2(v_q)(S_v_q)
≤ ∑_q=1^f L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
= f(L_μ_1(k)(S_k)+L_μ_2(k)(S_k))
When release times are taken into account, every coflow in S_k has been released by time max_k'≤ kr_k', which contributes the first term. This proves the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to order the coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k +∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k +∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k
≤∑_k=1^n w_k·(a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
Let A=a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)). By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n(∑_i ∈ℐα_i, k+∑_k=1^n∑_j ∈𝒥α_j, k)· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐα_i, k· A+∑_k=1^n∑_j ∈𝒥α_j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·A
≤ ∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)(a·r_k+2χ·m·r_k/κ)
≤ (a+2χ·m/κ)∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k ·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_ℓ≤kr_ℓ+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·L_μ_1(k')(S_k')/m + 2χ·L_μ_1(k)(S_k))
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kL_μ_1(k')(S_k')/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kL_μ_1(k')(S_k')/m
= (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'L_i(S_k')L_μ_1(k')(S_k')/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'(L_μ_1(k')(S_k'))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ·m)∑_i ∈ℐ∑_k=1^nβ_i,kf_i(S_μ_1(k),k)
= 2(a·κ+2χ·m)∑_k=1^nβ_μ_1(k),kf_i(S_μ_1(k),k)
≤ 2(a·κ+2χ·m)∑_i ∈ℐ∑_S⊆𝒦β_i,Sf_i(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ·m)∑_j ∈𝒥∑_S⊆𝒦β_j,Sf_j(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m+1 for the coflow-level scheduling problem with release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(1+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m+1)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m+1)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m+1)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m+1)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ (4χ· m+1) · OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m for the coflow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ 4χ· m · OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0. If coflow k undergoes the adjustment of the order by setting γ_k',k, then we have ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ ((R-1)/R)w_k. Based on Lemma <ref>, we know that ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
According to lemma <ref>, we have
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to lemma <ref>, we can derive result
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By employing analogous proof techniques to theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m+R for the coflow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m for the coflow-level scheduling problem without release times.
§ COFLOWS OF MULTI-STAGE JOBS SCHEDULING PROBLEM
In this section, we address the coflows of multi-stage job scheduling problem. We modify the linear program (<ref>) by introducing a set 𝒯 of jobs and, for each job t, a set 𝒯_t of the coflows that belong to it. We also add constraint (<ref>), which ensures that the completion time of a job is bounded by the completion times of its coflows. The objective is to minimize the total weighted completion time of a given set of multi-stage jobs, assuming that all coflows within the same job share the same release time. The resulting problem can be expressed as the following linear programming relaxation:
min ∑_t ∈𝒯 w_t C_t <ref>
s.t. (<ref>)-(<ref>)
C_t≥ C_k, ∀ t∈𝒯, ∀ k∈𝒯_t
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_k∈𝒯_t∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_k∈𝒯_t∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_k∈𝒯_t∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_k∈𝒯_t∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_k∈𝒯_t∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_t, ∀ t∈𝒯
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E,
∀ i∈ℐ, ∀ j∈𝒥
Let α_i, j, t = ∑_k∈𝒯_tα_i, j, k, L_i,S,t=∑_k∈𝒯_t L_i,S,k and L_j,S,t=∑_k∈𝒯_t L_j,S,k for all t∈𝒯.
Algorithm <ref> in Appendix <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints. We transmit the jobs sequentially, and within each job, the coflows are transmitted in topological-sorting order. As the values of γ are all zero, similar to the proof of Theorem <ref>, we can obtain the following theorem. Unlike Theorem <ref>, this result is not limited to the workload sizes and weights that are topology-dependent in the input instances.
The proposed algorithm achieves an approximation ratio of O(χ) for minimizing the total weighted completion time of a given set of multi-stage jobs.
§ EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of the proposed algorithm, this section conducts simulations comparing its performance to that of a previous algorithm. Both synthetic and real traffic traces are used for these simulations, without considering release time. The subsequent sections present and analyze the results obtained from these simulations.
§.§ Comparison Metrics
Since the cost of the feasible dual solution provides a lower bound on the optimal value of the coflow scheduling problem, we calculate the approximation ratio by dividing the total weighted completion time achieved by the algorithms by the cost of the feasible dual solution.
§.§ Randomly Generated Graphs
In this section, we examine a collection of randomly generated graphs that are created based on a predefined set of fundamental characteristics.
* DAG size, n: The number of coflows in the DAG.
* Out degree, deg: Out degree of a node.
* Parallelism factor, (p) <cit.>: The calculation of the levels in the DAG involves randomly generating a number from a uniform distribution. The mean value of this distribution is √(n)/p. The generated number is then rounded up to the nearest integer, determining the number of levels. Additionally, the width of each level is calculated by randomly generating a number from a uniform distribution. The mean value for this distribution is p ×√(n), and it is also rounded up to the nearest integer <cit.>. Graphs with a larger value of p tend to have a smaller χ, while those with a smaller value of p have a larger χ.
* Workload, (W_min, W_max, L_min, L_max) <cit.>:
Each coflow is accompanied by a description (W_min, W_max, L_min, L_max) that provides information about its characteristics. To determine the number of non-zero flows within a coflow, two values, w_1 and w_2, are randomly selected from the interval [W_min, W_max]. These values are then assigned to the input and output links of the coflow in a random manner. The size of each flow is randomly chosen from the interval [L_min, L_max]. The construction of all coflows by default follows a predefined distribution based on the coflow descriptions. This distribution consists of four configurations: (1, 4, 1, 10), (1, 4, 10, 1000), (4, N, 1, 10), and (4, N, 10, 1000), with proportions of 41%, 29%, 9%, and 21%, respectively. Here, N represents the number of ports in the core.
Let level_k denote the level of coflow k, and let Lv(k)={k'∈𝒦 | level_k < level_k'} represent the set of coflows that have a higher level than k. When constructing a DAG, only a subset of Lv(k) can be selected as successors for each coflow k. For coflow k, a set of successors is randomly chosen with a probability of deg/|Lv(k)|. To assign weights to each coflow, positive integers are randomly and uniformly selected from the interval [1, 100].
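For concreteness, the construction can be summarized in the following sketch. The text fixes only the means of the uniform distributions, so drawing the level count and level widths from uniform distributions on [0, 2·mean], as well as the truncation to exactly n coflows, are our assumptions.

```python
import math
import random

def generate_dag(n, deg, p, seed=0):
    """Randomly generate a coflow DAG following the description above.

    Returns (level, edges, weight): level[k] is the level of coflow k,
    edges contains pairs (k, k2) meaning k precedes k2, and weight[k] is a
    positive integer drawn uniformly from [1, 100].
    """
    rng = random.Random(seed)
    # Number of levels ~ uniform with mean sqrt(n)/p, rounded up.
    num_levels = max(1, math.ceil(rng.uniform(0, 2 * math.sqrt(n) / p)))
    level = []
    for lv in range(num_levels):
        # Width of each level ~ uniform with mean p * sqrt(n), rounded up.
        width = max(1, math.ceil(rng.uniform(0, 2 * p * math.sqrt(n))))
        level.extend([lv] * width)
    level = (level + [num_levels - 1] * n)[:n]  # pad/truncate to n coflows
    # Each coflow in Lv(k) becomes a successor with probability deg/|Lv(k)|.
    edges = set()
    for k in range(n):
        higher = [k2 for k2 in range(n) if level[k2] > level[k]]
        for k2 in higher:
            if rng.random() < min(1.0, deg / len(higher)):
                edges.add((k, k2))
    weight = [rng.randint(1, 100) for _ in range(n)]
    return level, edges, weight
```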
§.§ Results
Figure <ref> illustrates the approximation ratio of the proposed algorithm compared to the previous algorithm for synthetic traces. The problem size ranges from 5 to 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. The proposed algorithms demonstrate significantly smaller approximation ratios than 4χ+2-2/m. Furthermore, FDLS outperforms Weaver by approximately 4.7% to 7.5% within this problem size range. Although there are no restrictions on the workload's load and weights being topology-dependent for each instance, we still obtain results lower than 4χ+2-2/m. This demonstrates the excellent performance of the algorithm in general scenarios.
The effects of flow density were compared by categorizing the coflows into three instances: dense, sparse, and combined. For each instance, the number of flows was randomly selected from either the range [N, N^2] or [1, N], depending on the specific instance. In the combined instance, each coflow has a 50% probability of being set to sparse and a 50% probability of being set to dense. Figure <ref> illustrates the approximation ratio of synthetic traces for 100 randomly chosen dense and combined instances, comparing the previous algorithm with the proposed algorithm. The problem size consisted of 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. In the dense case, Weaver achieved an approximation ratio of 2.80, while FDLS achieved an approximation ratio of 2.66, resulting in a 5.12% improvement over Weaver. In the combined case, FDLS outperformed Weaver by 2.52%. Importantly, the proposed algorithm demonstrated a greater improvement in the dense case compared to the combined case.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying numbers of network cores, comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 to 25 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. Remarkably, the proposed algorithm consistently achieves significantly smaller approximation ratios compared to the theoretical bound of 4χ+2-2/m. As the number of network cores increases, the approximation ratio also tends to increase. This observation can be attributed to the widening gap between the cost of the feasible dual solution and the cost of the optimal integer solution as the number of network cores grows. Consequently, this leads to a notable discrepancy between the experimental approximation ratio and the actual approximation ratio. Importantly, across different numbers of network cores, FDLS outperforms Weaver by approximately 1.79% to 5.30%.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying parallelism factor (p), comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 network cores, with input and output links set to N=10. For each instance, we set deg=3 and χ≥ 2. According to our settings, the number of coflows on the longest path in the DAG (χ) exhibits an increasing trend as the parallelism factor p decreases. Correspondingly, the approximation ratio also shows an upward trend with a decrease in the parallelism factor p. This empirical finding aligns with the theoretical analysis, demonstrating a linear relationship between the approximation ratio and χ.
We present the simulation results of the real traffic trace obtained from Hive/MapReduce traces captured from Facebook's 3000-machine cluster, consisting of 150 racks. This real traffic trace has been widely used in previous research simulations <cit.>. The trace dataset comprises a total of 526 coflows. In Figure <ref>, we depict the approximation ratio of the real traces for different thresholds of the number of flows. That is, we apply a filter to the set of coflows based on the condition that the number of flows is equal to or greater than the threshold value. For each instance, we set deg=3, p=1, and χ≥ 2. Notably, the proposed FDLS algorithm outperforms the Weaver algorithm by approximately 4.84% to 3.11% across various thresholds. Furthermore, as the number of flows increases, the approximation ratio decreases. This observation is consistent with our previous findings, suggesting a decreasing trend in the approximation ratio as the number of coflows increases.
§ CONCLUDING REMARKS
This paper studies the problem of coflow scheduling with release times and precedence constraints in identical parallel networks. The algorithm we propose effectively solves the scheduling order of coflows using the primal-dual method. The primal-dual algorithm has a space complexity of O(Nn) and a time complexity of O(n^2). When considering workload sizes and weights that are topology-dependent in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ). Furthermore, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ). For the coflow-level scheduling problem, the proposed algorithm attains an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Moreover, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Agarwal2018
S. Agarwal, S. Rajakrishnan, A. Narayan, R. Agarwal, D. Shmoys, and A. Vahdat, "Sincronia: Near-optimal network design for coflows," in Proceedings of the 2018 ACM Conference on SIGCOMM (SIGCOMM '18), New York, NY, USA: Association for Computing Machinery, 2018, pp. 16–29.
ahmadi2020scheduling
S. Ahmadi, S. Khuller, M. Purohit, and S. Yang, "On scheduling coflows," Algorithmica, vol. 82, no. 12, pp. 3604–3629, 2020.
al2008scalable
M. Al-Fares, A. Loukissas, and A. Vahdat, "A scalable, commodity data center network architecture," ACM SIGCOMM Computer Communication Review, vol. 38, no. 4, pp. 63–74, 2008.
Bansal2010
N. Bansal and S. Khot, "Inapproximability of hypergraph vertex cover and applications to scheduling problems," in Automata, Languages and Programming, S. Abramsky, C. Gavoille, C. Kirchner, F. Meyer auf der Heide, and P. G. Spirakis, Eds., Berlin, Heidelberg: Springer, 2010, pp. 250–261.
borthakur2007hadoop
D. Borthakur, "The Hadoop distributed file system: Architecture and design," Hadoop Project Website, vol. 11, no. 2007, p. 21, 2007.
Chowdhury2012
M. Chowdhury and I. Stoica, "Coflow: A networking abstraction for cluster applications," in Proceedings of the 11th ACM Workshop on Hot Topics in Networks (HotNets-XI), New York, NY, USA: Association for Computing Machinery, 2012, pp. 31–36.
Chowdhury2015
M. Chowdhury and I. Stoica, "Efficient coflow scheduling without prior knowledge," in Proceedings of the 2015 ACM Conference on SIGCOMM (SIGCOMM '15), New York, NY, USA: Association for Computing Machinery, 2015, pp. 393–406.
chowdhury2011managing
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, "Managing data transfers in computer clusters with Orchestra," ACM SIGCOMM Computer Communication Review, vol. 41, no. 4, pp. 98–109, 2011.
Chowdhury2014
M. Chowdhury, Y. Zhong, and I. Stoica, "Efficient coflow scheduling with Varys," in Proceedings of the 2014 ACM Conference on SIGCOMM (SIGCOMM '14), New York, NY, USA: Association for Computing Machinery, 2014, pp. 443–454.
Daoud08
M. I. Daoud and N. Kharma, "A high performance algorithm for static task scheduling in heterogeneous distributed computing systems," Journal of Parallel and Distributed Computing, vol. 68, no. 4, pp. 399–409, 2008.
DAVIS2013121
J. M. Davis, R. Gandhi, and V. H. Kothari, "Combinatorial algorithms for minimizing the weighted sum of completion times on a single machine," Operations Research Letters, vol. 41, no. 2, pp. 121–125, 2013.
Dean2008
J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107–113, Jan. 2008.
dogar2014decentralized
F. R. Dogar, T. Karagiannis, H. Ballani, and A. Rowstron, "Decentralized task-aware scheduling for data center networks," ACM SIGCOMM Computer Communication Review, vol. 44, no. 4, pp. 431–442, 2014.
greenberg2009vl2
A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta, "VL2: A scalable and flexible data center network," in Proceedings of the ACM SIGCOMM 2009 Conference on Data Communication, 2009, pp. 51–62.
huang2016
X. S. Huang, X. S. Sun, and T. E. Ng, "Sunflow: Efficient optical circuit scheduling for coflows," in Proceedings of the 12th International Conference on emerging Networking EXperiments and Technologies, 2016, pp. 297–311.
Huang2020
X. S. Huang, Y. Xia, and T. S. E. Ng, "Weaver: Efficient coflow scheduling in heterogeneous parallel networks," in 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2020, pp. 1071–1081.
isard2007dryad
M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly, "Dryad: Distributed data-parallel programs from sequential building blocks," in Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, 2007, pp. 59–72.
khuller2016brief
S. Khuller and M. Purohit, "Brief announcement: Improved approximation algorithms for scheduling co-flows," in Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures, 2016, pp. 239–240.
Qiu2015
Z. Qiu, C. Stein, and Y. Zhong, "Minimizing the total weighted completion time of coflows in datacenter networks," in Proceedings of the 27th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA '15), New York, NY, USA: Association for Computing Machinery, 2015, pp. 294–303.
Sachdeva2013
S. Sachdeva and R. Saket, "Optimal inapproximability for scheduling problems via structural hardness for hypergraph vertex cover," in 2013 IEEE Conference on Computational Complexity, 2013, pp. 219–229.
shafiee2018improved
M. Shafiee and J. Ghaderi, "An improved bound for minimizing the total weighted completion time of coflows in datacenters," IEEE/ACM Transactions on Networking, vol. 26, no. 4, pp. 1674–1687, 2018.
shafiee2021scheduling
M. Shafiee and J. Ghaderi, "Scheduling coflows with dependency graph," IEEE/ACM Transactions on Networking, 2021.
Shvachko2010
K. Shvachko, H. Kuang, S. Radia, and R. Chansler, "The Hadoop distributed file system," in 2010 IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), 2010, pp. 1–10.
Singh2015
A. Singh, J. Ong, A. Agarwal, G. Anderson, A. Armistead, R. Bannon, S. Boving, G. Desai, B. Felderman, P. Germano, A. Kanagala, J. Provost, J. Simmons, E. Tanda, J. Wanderer, U. Hölzle, S. Stuart, and A. Vahdat, "Jupiter rising: A decade of Clos topologies and centralized control in Google's datacenter network," in Proceedings of the 2015 ACM Conference on SIGCOMM (SIGCOMM '15), New York, NY, USA: Association for Computing Machinery, 2015, pp. 183–197.
Tian18
B. Tian, C. Tian, H. Dai, and B. Wang, "Scheduling coflows of multi-stage jobs to minimize the total weighted job completion time," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, 2018, pp. 864–872.
Topcuoglu02
H. Topcuoglu, S. Hariri, and M.-Y. Wu, "Performance-effective and low-complexity task scheduling for heterogeneous computing," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp. 260–274, Mar. 2002.
zaharia2010spark
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, "Spark: Cluster computing with working sets," in 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 10), 2010.
Zhang2016
H. Zhang, L. Chen, B. Yi, K. Chen, M. Chowdhury, and Y. Geng, "CODA: Toward automatically identifying and scheduling coflows in the dark," in Proceedings of the 2016 ACM Conference on SIGCOMM (SIGCOMM '16), New York, NY, USA: Association for Computing Machinery, 2016, pp. 160–173.
zhao2015rapier
Y. Zhao, K. Chen, W. Bai, M. Yu, C. Tian, Y. Geng, Y. Zhang, D. Li, and S. Wang, "RAPIER: Integrating routing and scheduling for coflow-aware data center networks," in 2015 IEEE Conference on Computer Communications (INFOCOM), IEEE, 2015, pp. 424–432.
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
The primal-dual algorithm, presented in Algorithm <ref>, draws inspiration from the works of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left, determining the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration makes crucial decisions in terms of increasing dual variables α, β or γ. The guidance for these decisions is provided by the dual linear programming (LP) formulation. The algorithm offers a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports, and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is large, increasing the α dual variable yields a large gain in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) when L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable yields a large gain in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if L_μ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
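A condensed sketch of this right-to-left loop is given below. The data layout and, in particular, the weight-to-load ratio used to approximate the candidate whose dual constraint becomes tight first are our simplifications; the full Algorithm <ref> maintains the α, β, and γ dual variables explicitly.

```python
def permute_coflows(K, r, w, L_in, L_out, succ, kappa, m):
    """Sketch of the right-to-left primal-dual ordering.

    K: coflow ids; r[k]: release time; w[k]: weight; L_in[i][k] / L_out[j][k]:
    load of coflow k on input port i / output port j; succ[k]: successors.
    Returns a precedence-feasible processing order (first to last).
    """
    remaining = set(K)
    reversed_order = []

    def sink_of(c):
        # Walk down the chain of unscheduled successors; the full algorithm
        # instead adjusts the gamma duals, which this sketch omits.
        while any(s in remaining for s in succ.get(c, ())):
            c = next(s for s in succ[c] if s in remaining)
        return c

    while remaining:
        k = max(remaining, key=lambda c: r[c])  # largest release time
        # Bottleneck port and its total remaining load.
        loads = [(L_in[i], sum(L_in[i][c] for c in remaining)) for i in L_in]
        loads += [(L_out[j], sum(L_out[j][c] for c in remaining)) for j in L_out]
        port_load, L = max(loads, key=lambda t: t[1])
        if r[k] > kappa * L / m:
            chosen = sink_of(k)  # raise alpha: coflow k goes last
        else:
            # Raise beta on the bottleneck port: approximate the coflow whose
            # dual constraint becomes tight first by the smallest
            # weight-to-load ratio on that port (our simplification).
            cand = min((c for c in remaining if port_load[c] > 0),
                       key=lambda c: w[c] / port_load[c],
                       default=min(remaining, key=lambda c: w[c]))
            chosen = sink_of(cand)
        reversed_order.append(chosen)
        remaining.remove(chosen)
    return list(reversed(reversed_order))
```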
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> presents the primal-dual algorithm which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints.
Permuting Jobs
|
http://arxiv.org/abs/2307.05254v1 | 20230711133607 | OpenAL: An Efficient Deep Active Learning Framework for Open-Set Pathology Image Classification | [
"Linhao Qu",
"Yingfan Ma",
"Zhiwei Yang",
"Manning Wang",
"Zhijian Song"
] | cs.CV | [
"cs.CV"
] |
Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention , Shanghai 200032, China Academy for Engineering & Technology, Fudan University, Shanghai 200433, China.
{lhqu20, mnwang, zjsong}@fudan.edu.cn
^*Linhao Qu and Yingfan Ma contributed equally.
OpenAL: An Efficient Deep Active Learning Framework for Open-Set Pathology Image Classification
Linhao Qu^*1,2 Yingfan Ma^*1,2 Zhiwei Yang2,3 Manning Wang^()1,2 Zhijian Song^()1,2
August 12, 2023
===============================================================================================
Active learning (AL) is an effective approach to select the most informative samples to label so as to reduce the annotation cost.
Existing AL methods typically work under the closed-set assumption, i.e., all classes existing in the unlabeled sample pool need to be classified by the target model.
However, in some practical clinical tasks, the unlabeled pool may contain not only the target classes that need to be fine-grainedly classified, but also non-target classes
that are irrelevant to the clinical tasks. Existing AL methods cannot work well in this scenario because they tend to select a large number of non-target samples. In this paper,
we formulate this scenario as an open-set AL problem and propose an efficient framework, OpenAL, to address the challenge of querying samples from an unlabeled pool with both
target class and non-target class samples. Experiments on fine-grained classification of pathology images show that OpenAL can significantly improve the query quality of target
class samples and achieve higher performance than current state-of-the-art AL methods. Code is available at https://github.com/miccaiif/OpenAL.
§ INTRODUCTION
Deep learning techniques have achieved unprecedented success in the field of medical image classification, but this is largely due to large amounts of annotated data <cit.>.
However, obtaining large amounts of high-quality annotated data is usually expensive and time-consuming, especially in the field of pathology image processing <cit.>. Therefore,
a very important issue is how to obtain the highest model performance with a limited annotation budget.
Active learning (AL) is an effective approach to address this issue from a data selection perspective, which selects the most informative samples from an unlabeled sample pool for
experts to label and improves the performance of the trained model with reduced labeling cost <cit.>. However, existing AL methods usually work under the closed-set assumption, i.e.,
all classes existing in the unlabeled sample pool need to be classified by the target model, which does not meet the needs of some real-world scenarios <cit.>. Fig. <ref> shows an AL scenario
for pathology image classification in an open world, which is very common in clinical practice. In this scenario, the Whole Slide Images (WSIs) are cut into many small patches that compose
the unlabeled sample pool, where each patch may belong to tumor, lymph, normal tissue, fat, stroma, debris, background, and many other categories. However, it is not necessary to perform
fine-grained annotation and classification for all categories in clinical applications. For example, in the cell classification task, only patches of tumor, lymphatic and normal cells need
to be labeled and classified by the target model. Since the non-target patches are not necessary for training the classifier, labeling them would waste a large amount of budget. We call
this scenario, in which the unlabeled pool consists of both target class and non-target class samples, the open-set AL problem. Most existing AL algorithms can only work in the closed-set setting.
Even worse, in the open-set setting, they query even more non-target samples because these samples tend to have greater uncertainty than the target class samples <cit.>. Therefore,
for real-world open-set pathology image classification scenarios, an AL method that can accurately query the most informative samples from the target classes is urgently needed.
Recently, Ning et al. <cit.> proposed the first AL algorithm for open-set annotation in the field of natural images. They first trained a network to detect target class samples using a small number of
initially labeled samples, and then modeled the maximum activation value (MAV) distribution of each sample using a Gaussian mixture model <cit.> (GMM) to actively select the most deterministic target
class samples for labeling. Although promising performance is achieved, their detection of target class samples is based on the activation layer values of the detection network which has limited accuracy
and high uncertainty with small initial training samples.
In this paper, we propose a novel AL framework under an open-set scenario, and denote it as OpenAL, which can not only query as many target class samples as possible but also query the most informative samples
from the target classes. OpenAL adopts an iterative query paradigm and uses a two-stage sample selection strategy in each query. In the first stage, we do not rely on a detection network to select target
class samples and instead, we propose a feature-based target sample selection strategy. Specifically, we first train a feature extractor using all samples in a self-supervised learning manner, and map all
samples to the feature space. There are three types of samples in the feature space, the unlabeled samples, the target class samples labeled in previous iterations, and the non-target class samples queried in
previous iterations but not being labeled. Then we select the unlabeled samples that are close to the target class samples and far from the non-target class samples to form a candidate set. In the second
stage, we select the most informative samples from the candidate set by utilizing a model-based informative sample selection strategy. In this stage, we measure the uncertainty of all unlabeled samples in
the candidate set using the classifier trained with the target class samples labeled in previous iterations, and select the samples with the highest model uncertainty as the final selected samples in this
round of query. After the second stage, the queried samples are sent for annotation, which includes distinguishing target and non-target class samples and giving a fine-grained label to every target
class sample. After that, we train the classifier again using all the fine-grained labeled target class samples.
We conducted two experiments with different matching ratios (ratio of the number of target
class samples to the total number of samples) on a public 9-class colorectal cancer pathology image dataset. The experimental results demonstrate that OpenAL can significantly improve the query quality of
target class samples and obtain higher performance with equivalent labeling cost compared with the current state-of-the-art AL methods. To the best of our knowledge, this is the first open-set AL work in
the field of pathology image analysis.
§ METHOD
We consider the AL task for pathology image classification in an open-set scenario. The unlabeled sample pool P_U consists of K classes of target samples and L classes of non-target samples
(usually, K<L). N iterative queries are performed to query a fixed number of samples in each iteration, and the objective is to select as many target class samples as possible from P_U in each query,
while selecting as many informative samples as possible from among the target class samples. Each queried sample is given to experts for labeling, and the experts will give fine-grained category labels for
target class samples, while only giving a "non-target class samples" label for non-target class samples.
§.§ Framework Overview
Fig. <ref> illustrates the workflow of the proposed method, OpenAL. OpenAL performs a total of N iterative queries, and each query is divided into two stages. In Stage 1, OpenAL uses a feature-based target
sample selection (FTSS) strategy to query the target class samples from the unlabeled sample pool to form a candidate set. Specifically, we first train a feature extractor with all samples by self-supervised
learning, and map all samples to the feature space. Then we model the distribution of all unlabeled samples, all labeled target class samples from previous iterations, and all non-target class samples
queried in previous iterations in the feature space, and select the unlabeled samples that are close to the target class samples and far from the non-target class samples. In Stage 2, OpenAL adopts a
model-based informative sample selection (MISS) strategy. Specifically, we measure the uncertainty of all unlabeled samples in the candidate set using the classifier trained in the last iteration, and select
the samples with the highest model uncertainty as the final selected samples, which are sent to experts for annotation. After obtaining new labeled samples, we train the classifier using all fine-grained
labeled target class samples with cross-entropy as the loss function. The FTSS strategy is described in Section <ref>, and the MISS strategy is described in Section <ref>.
§.§ Feature-based Target Sample Selection
Self-supervised Feature Representation. First, we use all samples to train a feature extractor by self-supervised learning and map all samples to the latent feature space. Here, we adopt DINO <cit.> as the self-supervised network because of its outstanding performance.
Sample Scoring and Selection in the Feature Space. Then we define a scoring function on the basis of the distribution of unlabeled samples, labeled target class samples and non-target class samples queried in previous iterations.
s_i=s_t_i-s_w_i
where s_i denotes the score of the unlabeled sample x_i^U . s_t_i measures the distance between x_i^U and the distribution of features derived from all the labeled target class samples.
The smaller s_t_i is, the closer x_i^U is to the known sample distribution of the target classes, and the more likely x_i^U is from a target class. Similarly, s_w_i measures the distance
between x_i^U and the distribution of features derived from all the queried non-target class samples. The smaller s_w_i is, the closer x_i^U is from the known distribution of non-target class
samples, and the less likely x_i^U is from the target class. After scoring all the unlabeled samples, we select the top ε% samples with the smallest scores to form the candidate set.
In this paper, we empirically take twice the current iterative labeling budget (number of samples submitted to experts for labeling) as the size of the candidate set. Below, we give
the definitions of s_t_i and s_w_i.
Distance-based Feature Distribution Modeling. We propose a category and Mahalanobis distance-based feature distribution modeling approach for calculating s_t_i and s_w_i.
The definitions of these two values are slightly different, and we first present the calculation of s_t_i, followed by that of s_w_i.
For all labeled target class samples from previous iterations, their fine-grained labels are known, so we represent these samples as different clusters in the feature space according to their true class
labels, where a cluster is denoted as C_t^L (t=1, …, K). Next, we calculate the score s_t_i for z_i^U using the Mahalanobis distance (MD) according to Equation <ref>.
MD is widely used to measure the distance between a point and a distribution because it takes into account the mean and variance of the distribution, which is very suitable for our scenario.
s_t_i =Nom(min _t(D(z_i^U, C_t^L)))=Nom(min _t(z_i^U-μ_t)^T Σ_t^-1(z_i^U-μ_t))
Nom(X)=X-X_min/X_max-X_min
where D(·) denotes the MD function, μ_t and Σ_t are the mean and covariance of the samples in the target class t, and Nom(·) is the normalization function.
It can be seen that s_t_i is essentially the minimum distance of the unlabeled sample x_i^U to each target class cluster.
For all the queried non-target class samples from previous iterations, since they do not have fine-grained labels, we first use the K-means algorithm to cluster their features into W clusters, where a cluster is denoted as C_w^L (w=1, …, W). W is set to 9 in this paper. Next, we calculate the score s_w_i for z_i^U using the MD according to Equation <ref>.
s_w_i=Nom(min _w(D(z_i^U, C_w^L)))=Nom(min _w(z_i^U-μ_w)^T Σ_w^-1(z_i^U-μ_w))
where μ_w and Σ_w are the mean and covariance of the non-target class sample features in the wth cluster. It can be seen that s_w_i is essentially
the minimum distance of z_i^U to each cluster of known non-target class samples.
The within-cluster selection and dynamic cluster changes between rounds significantly enhance the diversity of the selected samples and reduce redundancy.
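A NumPy sketch of this scoring step is given below; the function names, the pseudo-inverse safeguard against singular covariance estimates, and the small constant in the normalization are our assumptions rather than details of the released code.

```python
import numpy as np

def normalize(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def ftss_scores(Z_u, target_clusters, nontarget_clusters):
    """Score unlabeled features Z_u (n x d); smaller scores indicate samples
    more likely to come from the target classes. Each *_clusters argument is
    a list of (mu, Sigma) pairs: one per labeled target class, and one per
    K-means cluster of previously queried non-target samples."""
    def min_md(Z, clusters):
        dists = []
        for mu, Sigma in clusters:
            Sinv = np.linalg.pinv(Sigma)   # guards against singular Sigma
            diff = Z - mu
            dists.append(np.sqrt(np.einsum('nd,de,ne->n', diff, Sinv, diff)))
        return np.min(np.stack(dists, axis=0), axis=0)
    s_t = normalize(min_md(Z_u, target_clusters))
    s_w = normalize(min_md(Z_u, nontarget_clusters))
    return s_t - s_w

# Candidate set: the samples with the smallest scores, sized at twice the
# per-round labeling budget as described above.
# candidates = np.argsort(ftss_scores(Z_u, tc, nc))[:2 * budget]
```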
§.§ Model-based Informative Sample Selection
To select the most informative samples from the candidate set, we utilize the model-based informative sample selection strategy in Stage 2. We measure the uncertainty of
all unlabeled samples in the candidate set using the classifier trained in the last iteration and select the samples with the highest model uncertainty as the final selected samples. The entropy
of the model output is a simple and effective way to measure sample uncertainty <cit.>. Therefore, we calculate the entropy of the model for the samples in the candidate set and select
50% of them with the highest entropy as the final samples in the current iteration.
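In code, this stage reduces to an entropy ranking over the candidate set; a minimal sketch (with our own function name and a small constant guarding log(0)) is:

```python
import numpy as np

def select_by_entropy(probs, n_select):
    """probs: (n_candidates, K) class probabilities from the classifier
    trained in the last iteration; returns the indices of the n_select
    samples with the highest predictive entropy."""
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-ent)[:n_select]

# Final query of a round: keep the 50% highest-entropy candidates.
# query = candidates[select_by_entropy(model_probs[candidates], len(candidates) // 2)]
```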
§ EXPERIMENTS
§.§ Dataset, Settings, Metrics and Competitors
To validate the effectiveness of OpenAL, we conducted two experiments with different matching ratios (the ratio of the number of samples in the target class to the total number of samples)
on a 9-class public colorectal cancer pathology image classification dataset (NCT-CRC-HE-100K) <cit.>. The dataset contains a total of 100,000 patches of pathology images with fine-grained labeling,
with nine categories including Adipose (ADI 10%), background (BACK 11%), debris (DEB 11%), lymphocytes (LYM 12%), mucus (MUC 9%), smooth muscle (MUS 14%), normal colon mucosa (NORM 9%),
cancer-associated stroma (STR 10%), and colorectal adenocarcinoma epithelium (TUM, 14%). To construct the open-set datasets, we selected three classes, TUM, LYM and NORM, as the target
classes and the remaining classes as the non-target classes. We selected these target classes to simulate a possible scenario for pathological cell classification in clinical practice. Technically, target classes can be randomly chosen.
In the two experiments, we set the matching ratio to 33% (3 target classes, 6 non-target classes), and 42% (3 target classes, 4
non-target classes), respectively.
Metrics. Following <cit.>, we use three metrics, precision, recall and accuracy to compare the performance of each AL method. We use precision and recall to measure the performance of
different methods in target class sample selection. As defined in Equation <ref>, precision is the proportion of the target class samples among the total samples queried in each query and
recall is the ratio of the number of the queried target class samples to the number of all the target class samples in the unlabeled sample pool.
precision_m=k_m/k_m+l_m, recall_m=∑_j=0^m k_m/n_target
where k_m denotes the number of target class samples queried in the mth query, l_m denotes the number of non-target class samples queried in the mth query, and n_target denotes the
number of target class samples in the original unlabeled sample pool. Obviously, the higher the precision and recall are, the more target class samples are queried, and the more effective the
trained target class classifier will be. We measure the final performance of each AL method using the accuracy of the final classifier on the test set of target class samples.
Competitors. We compare the proposed OpenAL to random sampling and five AL methods, LfOSA <cit.>, Uncertainty <cit.>, Certainty <cit.>, Coreset <cit.> and RA <cit.>, of
which only LfOSA <cit.> is designed for open-set AL. For all AL methods, we randomly selected 1% of the samples to label and used them as the initial labeled set for model initialization. It is
worth noting that the initial labeled samples contain target class samples as well as non-target class samples, but the non-target class samples are not fine-grainedly labeled. After each query round,
we train a ResNet18 model of 100 epochs, using SGD as the optimizer with momentum of 0.9, weight decay of 5e-4, initial learning rate of 0.01, and batchsize of 128. The annotation budget for each
query is 5% of all samples, and the length of the candidate set is twice the budget for each query. For each method, we ran four experiments and recorded the average results for four randomly selected seeds.
§.§ Performance Comparison
Fig. <ref> A and B show the precision, recall and model accuracy of all comparing methods at 33% and 42% matching ratios, respectively. It can be seen that OpenAL outperforms the other
methods in almost all metrics and all query numbers regardless of the matching ratio. Particularly, OpenAL significantly outperforms LfOSA <cit.>, which is specifically designed for open-set AL.
The inferior performance of the AL methods based on the closed-set assumption is due to the fact that they are unable to accurately identify more target class samples, thus wasting a large amount of
annotation budget. Although LfOSA <cit.> utilizes a dedicated network for target class sample detection, the performance of the detection network is not stable when the number of training samples is
small, thus limiting its performance. In contrast, our method uses a novel feature-based target sample selection strategy and achieves the best performance.
Upon analysis, our OpenAL is capable of effectively maintaining the balance of sample numbers across different classes during active learning.
We visualize the cumulative sampling ratios of OpenAL for the target classes in each round on the original dataset with a 33% matching ratio, as shown in Fig. <ref> A.
Additionally, we visualize the cumulative sampling ratios of the LfOSA method on the same setting in Fig. <ref> B. It can be observed that in the first 4 rounds,
LYM samples are either not selected or selected very few times. This severe sample imbalance weakens the performance of LfOSA compared to random selection initially.
Conversely, our method selects target class samples with a more balanced distribution. Furthermore, we constructed a more imbalanced setting for the target classes LYM
(6000 samples), NORM (3000 samples), and TUM (9000 samples), yet the cumulative sampling ratios of our method for these three target classes remain fairly balanced, as shown in Fig. <ref> C.
§.§ Ablation Study
To further validate the effectiveness of each component of OpenAL, we conducted an ablation test at a matching ratio of 33%. Fig. <ref> C shows the results,
where w/o s_w indicates that the distance score of non-target class samples is not used in the scoring of Feature-based Target Sample Selection (FTSS), w/o s_t indicates that the
distance score of target class samples is not used, w/o MISS means no Model-based Informative Sample Selection is used, i.e., the length of the candidate set is directly set to the
annotation budget in each query, and only MISS means no FTSS strategy is used, but only uncertainty is used to select samples.
It can be seen that the distance modeling of both the target class samples and the non-target class samples is essential in the FTSS strategy, and missing either one results in a
decrease in performance. Although the MISS strategy does not significantly facilitate the selection of target class samples, it can effectively help select the most informative samples
among the samples in the candidate set, thus further improving the model performance with a limited labeling budget. In contrast, when the samples are selected based on uncertainty alone,
the performance decreases significantly due to the inability to accurately select the target class samples. The above experiments demonstrate the effectiveness of each component of OpenAL.
§ CONCLUSION
In this paper, we present a new open-set scenario of active learning for pathology image classification, which is more practical in real-world applications.
We propose a novel AL framework for this open-set scenario, OpenAL, which addresses the challenge of accurately querying the most informative target class samples
in an unlabeled sample pool containing a large number of non-target samples. OpenAL significantly outperforms state-of-the-art AL methods on real pathology image classification tasks.
More importantly, in clinical applications, on one hand, OpenAL can be used to query informative target class samples for experts to label, thus enabling better training of
target class classifiers under limited budgets. On the other hand, when applying the classifier for future testing, it is also possible to use the feature-based target sample
selection strategy in the OpenAL framework to achieve an open-set classifier. Therefore, this framework can be applied to both datasets containing only target class samples and
datasets also containing a large number of non-target class samples during testing.
§ ACKNOWLEDGMENTS
This work was supported by National Natural Science Foundation of China under Grant 82072021.
|
http://arxiv.org/abs/2307.04871v1 | 20230710194152 | LSEMINK: A Modified Newton-Krylov Method for Log-Sum-Exp Minimization | [
"Kelvin Kan",
"James G. Nagy",
"Lars Ruthotto"
] | math.OC | [
"math.OC"
] |
LSEMINK: A Modified Newton-Krylov Method for Log-Sum-Exp Minimization
Kelvin Kan, James G. Nagy, Lars Ruthotto
August 12, 2023
=====================================================================
[2]Department of Mathematics, Emory University, USA
[3]Departments of Mathematics and Computer Science, Emory University, USA
This paper introduces LSEMINK, an effective modified Newton-Krylov algorithm geared toward minimizing the log-sum-exp function for a linear model.
Problems of this kind arise commonly, for example, in geometric programming and multinomial logistic regression.
Although the log-sum-exp function is smooth and convex, standard line search Newton-type methods can become inefficient because the quadratic approximation of the objective function can be unbounded from below.
To circumvent this, LSEMINK modifies the Hessian by adding a shift in the row space of the linear model. We show that the shift renders the quadratic approximation to be bounded from below and that the overall scheme converges to a global minimizer under mild assumptions.
Our convergence proof also shows that all iterates are in the row space of the linear model, which can be attractive when the model parameters do not have an intuitive meaning, as is common in machine learning.
Since LSEMINK uses a Krylov subspace method to compute the search direction, it only requires matrix-vector products with the linear model, which is critical for large-scale problems.
Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and increases the scalability compared to geometric programming and natural gradient descent approaches. It has significantly faster initial convergence than standard Newton-Krylov methods, which is particularly attractive in applications like machine learning. In addition, LSEMINK is more robust to ill-conditioning arising from the nonsmoothness of the problem. We share our MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>.
log-sum-exp minimization, Newton-Krylov method, modified Newton method, machine learning, geometric programming
65K10
§ INTRODUCTION
We consider minimization problems of the form
min_x∈ℝ^n f(x) = ∑_k=1^N w^(k)[ g^(k)(x) - c^(k)^⊤ A^(k) x ],
where
g^(k)(x) := log( 1_m^⊤ exp(A^(k) x + b^(k)) )
is the log-sum-exp function for a linear model defined by A^(k)∈ℝ^m × n, b^(k)∈ℝ^m, and c^(k)∈ℝ^m, 1_m ∈ℝ^m is a vector of all ones, w^(k)'s are weights, and N is the number of linear models.
Problem (<ref>) arises commonly in machine learning and optimization. For example, multinomial logistic regression (MLR) in classification problems <cit.> is formulated as (<ref>). In geometric programming <cit.>, a non-convex problem can be convexified through a reformulation to the form (<ref>). The log-sum-exp function itself also has extensive applications in machine learning. For instance, it can serve as a smooth approximation to the element-wise maximum function <cit.>, where smoothness is desirable in model design since gradient-based optimizers are commonly used. Moreover, the log-sum-exp function is closely related to widely used softmax and entropy functions. For instance, the dual to an entropy maximization problem is a log-sum-exp minimization problem <cit.>, and the gradient of the log-sum-exp function is the softmax function <cit.>.
Despite the smoothness and convexity of the log-sum-exp function, a standard implementation of line search Newton-type methods can be problematic. To realize this, note that the gradient and Hessian of the log-sum-exp function are given by
∇ f(x) = ∑_k=1^N w^(k) A^(k)^⊤ (p^(k) - c^(k)), and ∇^2 f(x) = ∑_k=1^N w^(k) A^(k)^⊤ H^(k) A^(k),
with p^(k) = exp(A^(k) x + b^(k)) / (1_m^⊤ exp(A^(k) x + b^(k))), and H^(k) = diag(p^(k)) - p^(k) p^(k)^⊤.
The Hessian is positive semi-definite and rank-deficient because the null space of the H^(k)'s contains 1_m. Even more problematic is that when the p^(k)'s are close to a standard basis vector (which, for example, commonly occurs in MLR), the Hessian is close to the zero matrix even when the gradient is non-zero. In Newton's method, this means that the local quadratic approximation can be unbounded from below. To be precise, it is unbounded from below if and only if the gradient is not in the column space of the Hessian <cit.>.
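To make these quantities concrete, the following NumPy sketch (our illustration; the function name and the standard log-sum-exp shift for numerical stability are our additions, and the reference implementation is in MATLAB) evaluates f, ∇f, and a Hessian-vector product for a single model with unit weight; the N-model case simply sums these parts with the weights w^(k).

```python
import numpy as np

def lse_parts(A, b, c, x):
    """Objective, gradient, and Hessian-vector product handle for
    f(x) = log(1' exp(Ax + b)) - c' A x  (one model, unit weight)."""
    z = A @ x + b
    zmax = z.max()                      # log-sum-exp shift for stability
    e = np.exp(z - zmax)
    f = zmax + np.log(e.sum()) - c @ (A @ x)
    p = e / e.sum()                     # softmax probabilities
    g = A.T @ (p - c)                   # gradient A'(p - c)
    def Hv(v):                          # Hessian A'(diag(p) - p p')A times v
        Av = A @ v
        return A.T @ (p * Av - p * (p @ Av))
    return f, g, Hv
```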
Disciplined convex programming (DCP) packages (e.g., CVX <cit.>) can reliably solve the log-sum-exp minimization problem through a reformulation. For instance, CVX first formulates the problem using exponential cones <cit.> and applies backend solvers to solve the resulting problem directly (e.g., MOSEK <cit.>) or through successive polynomial approximation (e.g., SDPT3 <cit.> and SeDuMi <cit.>). However, this approach can be computationally demanding as the number of conic constraints scales with the product of the number of rows in the linear models and the number of linear models. For instance, CVX did not complete the image classification experiments for the whole dataset in <Ref> on a standard laptop in thirty minutes, while LSEMINK finishes on the same hardware in thirty seconds. Furthermore, the formulation relies on access to the elements of the A^(k)'s; i.e., this approach is not applicable in a matrix-free setting where the A^(k)'s are not built explicitly, and only routines for performing matrix-vector products are provided.
Tikhonov regularization <cit.>, which adds (α/2)‖x‖_2^2 with α>0 to the objective function, avoids the cost of reformulation and alleviates the convergence issues with Newton-type methods.
The regularization shifts the Hessian by αI and renders it positive definite, where I is the identity matrix. Nonetheless, Tikhonov regularization introduces a bias and consequently changes the optimal solution. The regularization parameter α has to be chosen judiciously – a large α renders the problem easier to solve and produces a more regular solution but introduces more bias. In addition, one cannot use effective parameter selection algorithms <cit.> for linear problems due to the nonlinearity of the log-sum-exp function. On the other hand, first-order methods like gradient descent <cit.>, or AdaGrad <cit.>, which do not use the Hessian matrix, can avoid the problem. However, their convergence is inferior to methods that utilize curvature information <cit.>.
Modified Newton-type methods effectively tackle problems with rank-deficient or indefinite Hessians and do not introduce bias. The idea is to add a shift to the Hessian so that at the ith iteration, the scheme solves
min_x 1/2 (x - x_i)^⊤ (∇^2 f(x_i) + β_i S_i) (x - x_i) + ∇ f(x_i)^⊤ (x - x_i),
where β_i is a parameter, and the shift S_i renders the Hessian sufficiently positive definite. The quadratic approximation is bounded from below since the modified Hessian is positive definite. Hence the convergence issues are avoided. The effect of the Hessian shift is reminiscent of the Tikhonov regularization approach. Indeed, the scheme is sometimes called a Tikhonov-regularized Newton update <cit.>. However, the key conceptual difference between (<ref>) and Tikhonov regularization is that the former does not introduce any bias to the problem <cit.>, i.e., the optimal solution to the problem is independent of the β_i's. There are different ways of defining S_i. For instance, S_i can be constructed from some of the eigenvectors of the Hessian <cit.>, or from a modification to the factorization of the Hessian <cit.>. However, the computations needed for these approaches are intractable for large-scale problems commonly arising in machine learning. A simple and computationally feasible approach is to set S_i to the identity matrix <cit.>, which will be used as a comparing method in our numerical experiments.
In this paper, we propose LSEMINK, a novel modified Newton-Krylov method that circumvents the drawbacks outlined above. The main novelty in our method is the Hessian shift S_i = ∑_k=1^N w^(k) A^(k)^⊤ A^(k). This generates an update in the row space of the linear model, as compared to the aforementioned modified Newton-type methods, which return an update in the parameter space of the linear model (i.e., the x space). This property is preferable in machine learning applications since model parameters often do not have an intuitive meaning, while the row space of the linear model contains interpretable data features. Note that standard convergence guarantees (e.g., <cit.>), which often require positive definiteness of the modified Hessian, do not apply to our method since our modified Hessian can be rank-deficient. We show that the quadratic approximation is bounded from below, and the overall scheme provably converges to a global minimum. Since a Krylov subspace method is applied to approximately solve (<ref>) to obtain the next iterate, LSEMINK is suitable for large-scale problems where the linear models are expensive to build and are only available through matrix-vector multiplications. Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and increases the scalability compared to DCP and natural gradient descent and has significantly faster initial convergence than standard Newton-Krylov methods.
This paper is organized as follows. In <Ref>, we describe the proposed LSEMINK. In <Ref>, we provide a global convergence guarantee. In <Ref>, we demonstrate the effectiveness of LSEMINK using two numerical experiments motivated by geometric programming and image classification, respectively. We finally conclude the paper in <Ref>.
§ LSEMINK
We propose LSEMINK, a modified Newton-Krylov method geared toward log-sum-exp minimization problems of the form (<ref>).
At the ith iteration, we first consider the quadratic approximation (<ref>) with S_i = ∑_k=1^N w^(k) A^(k)^⊤ A^(k). That is,
min_x q_i(x) = 1/2 (x - x_i)^⊤ ( ∇^2 f(x_i) + β_i ∑_k=1^N w^(k) A^(k)^⊤ A^(k) ) (x - x_i) + ∇ f(x_i)^⊤ (x - x_i)
= 1/2 (x - x_i)^⊤ [ ∑_k=1^N w^(k)( A^(k)^⊤ (H^(k)_i + β_i I) A^(k) ) ] (x - x_i) + ∇ f(x_i)^⊤ (x - x_i),
whose minimizer is given by x_i + Δx_i, where Δx_i solves the Newton equation
∇^2 q_i(x_i) Δx_i = - ∇ q_i(x_i),
and H^(k)_i is H^(k) evaluated at x_i. It is important to note that the Hessian shift in (<ref>) is different from the typical modified Newton approaches (e.g., eigenvalue modification <cit.>, identity matrix <cit.>, or modification to the factorization of the Hessian <cit.>), which seek to obtain a positive definite Hessian and lead to an update in the parameter space of the linear model (i.e., the x space).
Instead, it generates an update direction in the row space of the linear models. This is preferable, especially in machine learning applications, because model parameters often do not have an intuitive meaning while the row space of the linear models contains data features and is explicable. Although the Hessian of (<ref>) is rank-deficient, especially when the linear models are over-parametrized (i.e., the A^(k)'s have more columns than rows), it is positive definite in the row space of the linear model. Consequently, the quadratic approximation is bounded from below, and the overall scheme provably converges to a global minimum; see <Ref> for a detailed derivation.
An alternative formulation for (<ref>) is
min_x 1/2 (x - x_i)^⊤ ∇^2 f(x_i) (x - x_i) + ∇ f(x_i)^⊤ (x - x_i) + β_i/2 ∑_k=1^N w^(k) ‖A^(k) (x - x_i)‖_2^2,
which can be interpreted as a Newton scheme with a proximal term acting on the row space of the A^(k)'s. This formulation shows that β_i controls the step size in a nonlinear line search arc. To be precise, β_i=0 and ∞ correspond to a Newton update with step size 1 and 0, respectively, and the update is given nonlinearly for 0<β_i<∞. The formulation also shows that our proposed scheme bears similarity to L^2 natural gradient descent (NGD) methods <cit.>, which use the same proximal term. Nonetheless, unlike our approach, L^2 NGD methods generally do not directly incorporate Hessian information into their search direction and approximate curvature information using only the linear model.
The crucial difference between the proximal term and Tikhonov regularization is that the former does not introduce any bias <cit.>; i.e., the optimal solution is independent of β_i. Another advantage is that Tikhonov regularization requires parameter tuning, which is commonly done using a grid search for nonlinear problems like (<ref>), while in our proposed method β_i's are automatically selected by a backtracking Armijo line search scheme. The proposed scheme can also be perceived as a proximal point algorithm acting on the second-order approximation <cit.>.
We compute the update direction Δx_i by approximately solving the Newton equation (<ref>) using a Krylov subspace method (e.g., the conjugate gradient method <cit.>) and obtain the next iterate x_i+1 = x_i + Δx_i. In particular, the Krylov subspace is given by
𝒦_r(∇^2 q_i(x_i), ∇ q_i(x_i))
= 𝒦_r( ∑_k=1^N w^(k)( A^(k)^⊤ (H^(k)_i + β_i I) A^(k) ), ∑_k=1^N w^(k) A^(k)^⊤ (p_i^(k) - c^(k)) ),
where r is the dimension of the Krylov subspace and p_i^(k) is p^(k) evaluated at x_i.
Since the Krylov subspace method only requires routines to perform Hessian-vector multiplications, LSEMINK is applicable to large-scale problems commonly arising in machine learning applications where the linear models are only available through matrix-vector products.
An outline of the implementation of LSEMINK is presented in <Ref>.
LSEMINK has significantly faster initial convergence compared with standard Newton-Krylov solvers. This is particularly attractive in applications that do not require high accuracy, e.g., image classification. LSEMINK also considerably reduces the time-to-solution and has better scalability compared to geometric programming and natural gradient descent approaches. It avoids the respective drawbacks of the solvers outlined in <Ref>. Moreover, it is more robust to ill-conditioning arising from the nonsmoothness of the problem; see <Ref> for numerical experiments. We provide a MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>. The implementation is easy to experiment with, as it only requires minimal knowledge and input from the user.
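As an illustration of how little machinery one LSEMINK iteration needs, here is a minimal sketch for a single model with unit weight, using SciPy's conjugate gradient as the Krylov solver. The fixed trial value of β and the hard iteration cap r are our simplifications; in the full algorithm, β_i is chosen by a backtracking Armijo line search, and the reference implementation is in MATLAB.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def lsemink_step(A, b, c, x, beta, r=20):
    """One LSEMINK update: approximately solve the Newton equation
    (A'(H + beta*I)A) dx = -A'(p - c) with at most r CG iterations.
    Only matrix-vector products with A and A' are required."""
    z = A @ x + b
    e = np.exp(z - z.max())
    p = e / e.sum()
    grad = A.T @ (p - c)
    n = A.shape[1]
    def mv(v):
        Av = A @ v
        Hv = p * Av - p * (p @ Av)      # H = diag(p) - p p'
        return A.T @ (Hv + beta * Av)   # shifted Hessian A'(H + beta*I)A
    op = LinearOperator((n, n), matvec=mv)
    dx, _ = cg(op, -grad, maxiter=r)
    return x + dx
```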
§ PROOF OF GLOBAL CONVERGENCE
In this section, we prove the global convergence of the proposed LSEMINK. It is noteworthy that existing convergence results cannot be directly applied due to the rank-deficiency of our modified Hessian. For instance, it is assumed in <cit.> that the modified Hessian is positive definite and has a bounded condition number. Our proof is modified from the approach in <cit.>, which studies proximal Newton-type methods for composite functions.
We first state the main theorem.
Assume that f is defined in (<ref>)
and inf_x f(x) is attained in ℝ. Then the sequence {x_i}_i generated by LSEMINK converges to a global minimum regardless of the choice of initial guess x_0.
We note that <Ref> also applies to the case where the Newton equation (<ref>) is solved exactly. In the following, we will first discuss some properties of LSEMINK. We will then state and prove four lemmas which will aid the proof of <Ref>.
For simplicity of exposition and without loss of generality, in this section we drop the superscript and focus on the case with only one linear model defined by A, b, and c, and the weight w=1. In this case, the Krylov subspace in (<ref>) becomes
𝒦_r(∇^2 q_i(x_i), ∇ q_i(x_i)) = 𝒦_r(A^⊤ (H_i + β_i I) A, A^⊤ (p_i - c)).
We note that our proof can be straightforwardly extended to the general case by setting
A = [A^(1); ...; A^(N)], c = [w^(1) c^(1); ...; w^(N) c^(N)],
p_i = [w^(1) p_i^(1); ...; w^(N) p_i^(N)], and H_i = blkdiag(w^(1) H_i^(1), ..., w^(N) H_i^(N)),
where blkdiag denotes a block diagonal matrix.
Recall that the Krylov subspace in (<ref>) is constructed to approximately solve the Newton equation and obtain the update direction Δx_i. This is equivalent to building a rank-r approximation ∇^2 q_i(x_i) ≈ Z_i T_i Z_i^⊤ and computing the next iterate by
x_i+1 = argmin_x 1/2 (x - x_i)^⊤ Z_i T_i Z_i^⊤ (x - x_i) + ∇ f(x_i)^⊤ (x - x_i).
Here, the columns of Z_i ∈ℝ^n × r form an orthonormal basis for the Krylov subspace and T_i ∈ℝ^r × r. Since ∇ f(x_i) ∈ row(A) = col(A^⊤ (H_i + β_i I) A) for β_i>0 and the Krylov subspace always contains ∇ f(x_i), the column space of Z_i T_i Z_i^⊤ always contains ∇ f(x_i). This means that the quadratic function (<ref>) is bounded from below <cit.> and admits a minimum. The iterate x_i+1 is the minimum norm solution to (<ref>) given by
x_i+1 = x_i + Δx_i, where Δx_i = - Z_i T_i^-1 Z_i^⊤ ∇ f(x_i).
Next, we state and prove some lemmas which will be used to prove the main theorem.
The update Δx_i generated by the iterative scheme (<ref>) satisfies
Δx_i ∈ row(A),
Δx_i^⊤ ∇^2 q_i(x_i) Δx_i = Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i.
Here, (<ref>) means that the update direction is in the row space of the linear model.
By construction, the Krylov subspace (<ref>) is a subspace of row(A), and by (<ref>) we have Δx_i ∈ col(Z_i). Thus we have Δx_i ∈ col(Z_i) ⊆ row(A), which proves (<ref>).
Consider the full representation of the Hessian of (<ref>) generated by the Krylov subspace method
∇^2 q_i(x_i) = A^⊤ (H_i + β_i I) A = [ Z_i Ẑ_i ][ T_i E_1; E_2 E_3 ][ Z_i^⊤; Ẑ_i^⊤ ],
where col(Z_i) ⊥ col(Ẑ_i). We have
Δx_i^⊤ ∇^2 q_i(x_i) Δx_i = Δx_i^⊤ [ Z_i Ẑ_i ][ T_i E_1; E_2 E_3 ][ Z_i^⊤; Ẑ_i^⊤ ] Δx_i
= [ Δx_i^⊤ Z_i 0 ][ T_i E_1; E_2 E_3 ][ Z_i^⊤ Δx_i; 0 ], as Δx_i ∈ col(Z_i),
=
Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i,
which proves (<ref>).
The update Δx_i generated by (<ref>) satisfies the descent condition
∇ f(x_i)^⊤ Δx_i ≤ - Δx_i^⊤ A^⊤ (H_i + β_i I) A Δx_i.
Since x_i+1 is a solution to (<ref>), for any t ∈ (0,1), we have
1/2 Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i + ∇ f(x_i)^⊤ Δx_i ≤ 1/2 (t Δx_i)^⊤ Z_i T_i Z_i^⊤ (t Δx_i) + ∇ f(x_i)^⊤ (t Δx_i).
By rearranging the terms, we have
(1-t^2)/2 Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i + (1-t) ∇ f(x_i)^⊤ Δx_i ≤ 0
(1+t)/2 Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i + ∇ f(x_i)^⊤ Δx_i ≤ 0
∇ f(x_i)^⊤ Δx_i ≤ - (1+t)/2 Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i.
Letting t → 1^-, we obtain
∇ f(x_i)^⊤ Δx_i ≤ - Δx_i^⊤ Z_i T_i Z_i^⊤ Δx_i.
Combining (<ref>) and (<ref>), we obtain (<ref>).
In the following lemma, we will make use of the fact that ∇ f is Lipschitz continuous. This is because the gradient of the log-sum-exp function is the softmax function, which is Lipschitz continuous <cit.>.
Let λ_min be the smallest nonzero eigenvalue of A^⊤A, and L be the Lipschitz constant for ∇ f. For line search parameter γ ∈ (0,1) and
β_i ≥ L/(2 λ_min (1-γ)),
the following Armijo line search condition holds
f(x_i+1) ≤ f(x_i) + γ ∇ f(x_i)^⊤ (x_i+1 - x_i).
First, note that
‖A(x_i+1 - x_i)‖^2_(H_i + β_i I) ≥ β_i ‖A(x_i+1 - x_i)‖_2^2 ≥ β_i λ_min ‖x_i+1 - x_i‖_2^2.
Here, in the second step we used that (x_i+1 - x_i) ∈ row(A) = row(A^⊤A) (Lemma <ref>), row(A^⊤A)^⊥ = null(A^⊤A), and λ_min is the smallest nonzero eigenvalue of A^⊤A.
Next, we have
f(x_i+1) ≤ f(x_i) + ∇ f(x_i)^⊤ (x_i+1 - x_i) + L/2 ‖x_i+1 - x_i‖_2^2
≤ f(x_i) + ∇ f(x_i)^⊤ (x_i+1 - x_i) + β_i λ_min (1-γ) ‖x_i+1 - x_i‖_2^2
≤ f(x_i) + ∇ f(x_i)^⊤ (x_i+1 - x_i) + (1 - γ) ‖A(x_i+1 - x_i)‖^2_(H_i + β_i I)
≤ f(x_i) + ∇ f(x_i)^⊤ (x_i+1 - x_i) - (1 - γ) ∇ f(x_i)^⊤ (x_i+1 - x_i)
= f(x_i) + γ ∇ f(x_i)^⊤ (x_i+1 - x_i).
Here, the first, second, third, and fourth steps use the Lipschitz continuity of ∇ f, (<ref>), (<ref>), and Lemma <ref>, respectively.
The iterative scheme (<ref>) generates a fixed point x_* if and only if x_* is a stationary point.
"⇐": Substituting ∇ f(x_*) = 0 into (<ref>), we obtain Δx_* = 0. Hence x_* is a fixed point.
"⇒": Let v = x - x_* for any x. Since x_* is a fixed point of (<ref>), we have, for any t ∈ ℝ,
1/2 (t v)^⊤ Z_* T_* Z_*^⊤ (t v) + ∇ f(x_*)^⊤ (t v)
≥ 1/2 (x_* - x_*)^⊤ Z_* T_* Z_*^⊤ (x_* - x_*) + ∇ f(x_*)^⊤ (x_* - x_*) = 0.
Simplifying this, we obtain
t^2/2 v^⊤ Z_* T_* Z_*^⊤ v + t ∇ f(x_*)^⊤ v ≥ 0
∇ f(x_*)^⊤ v ≥ - t/2 v^⊤ Z_* T_* Z_*^⊤ v for t > 0.
Taking t → 0^+, we obtain ∇ f(x_*)^⊤ v ≥ 0 for any v.
This implies ∇ f(x_*) is a zero vector, that is, x_* is a stationary point.
Now, we are ready to prove the main theorem.
The sequence {f(x_i)}_i is decreasing because the update directions are descent directions (<Ref>) and the Armijo line search scheme guarantees sufficient descent at each step (<Ref>). By the continuity of f, it is closed <cit.>. Since f is closed and attains its infimum in ℝ,
the decreasing sequence {f(x_i)}_i converges to a limit.
By the sufficient descent condition (<ref>), the convergence of {f(x_i)}_i and γ > 0,
∇ f(x_i)^⊤ (x_i+1 - x_i)
converges to zero. Hence, by (<ref>),
Δx_i^⊤ A^⊤ (H_i + β_i I) A Δx_i
converges to zero. Since (H_i + β_i I) is positive definite and Δx_i ∈ row(A) (<Ref>), Δx_i converges to the zero vector.
This implies that x_i converges to a fixed point of (<ref>). By <Ref>, x_i converges to a stationary point. By the convexity of f, x_i converges to a global minimum.
§ NUMERICAL EXPERIMENTS
We perform two numerical experiments for minimizing the log-sum-exp function for a linear model. We compare the performance of the proposed LSEMINK with three commonly applied line search iterative methods and three disciplined convex programming (DCP) solvers; see <Ref>. In <Ref>, we consider multinomial logistic regression (MLR) arising in image classification. In <Ref>, we experiment with a log-sum-exp minimization problem arising in geometric programming.
The experimental results show that LSEMINK has much better initial convergence, is more robust and scalable compared with the comparing methods.
§.§ Benchmark Methods
We compare the proposed LSEMINK with three common line search iterative schemes and three DCP solvers for machine learning and geometric programming applications. Firstly, we implement a standard Newton-CG (NCG) algorithm with a backtracking Armijo line search. Secondly, we compare with an L^2 natural gradient descent (NGD) method <cit.> that approximately solves
min_x ∇ f(x_i)^⊤ (x - x_i) + λ_i/2 ∑_k=1^N w^(k) ‖A^(k)(x - x_i)‖_2^2,
using CG to obtain the next iterate, where λ_i controls the step size and is determined by a backtracking Armijo line search scheme, and the last term is a proximal term acting on the row space of the linear model. This scheme bears similarity to LSEMINK as the proximal term has the same effect as the shift in the Hessian of LSEMINK. However, it does not make use of the Hessian and only approximates curvature information using the linear model. Thirdly, to demonstrate the effectiveness of the Hessian modification in LSEMINK, we compare with a standard modified Newton-Krylov (SMNK) scheme, which approximately solves (<ref>) with the standard modified Hessian ∇^2 f(x_i) + β_i I using Lanczos tridiagonalization, which has the same iterates as CG up to rounding errors but allows computations for the update direction to be re-used during line search. For LSEMINK, the Newton equation (<ref>) is approximately solved by CG. We note that an update direction has to be re-computed for each attempted value of β_i during line search. In other words, unlike SMNK, the update direction computation cannot be re-used. However, our experimental results show that LSEMINK is still efficient in terms of computational cost thanks to the effectiveness of the modified Hessian. In each experiment, we use the same maximum number of iterations and tolerance for the CG and Lanczos schemes across different line search iterative methods.
In addition, we apply CVX <cit.>, a DCP package, paired with three different backend solvers (SDPT3 <cit.>, SeDuMi <cit.>, and MOSEK <cit.>). The best precision for CVX is used in the experiments; see <cit.> for detailed information.
Cost Measurement We measure the computational costs for different line search iterative methods in terms of work units. In particular, a work unit represents a matrix-vector product with the linear models or their transpose. This is because these computations are usually the most expensive steps during optimization. For instance, in the MLR experiments of <Ref>, the linear models A^(k)'s contain the propagated high dimensional features of all the training data. Note that the number of work units in one iteration can differ across different line search iterative methods since a different number of CG/Lanczos iterations or line search updates can be performed. In addition to work units, we also compare computational costs for all methods in terms of total runtime.
§.§ Experiment 1: Image Classification
Perhaps the most prominent example of log-sum-exp minimization is multinomial logistic regression (MLR) arising in supervised classification. Here, we experiment on an MLR problem for the classification of MNIST <cit.> and CIFAR-10 <cit.> image datasets. The MNIST dataset consists of 60,000 28 × 28 hand-written images for digits from 0 to 9. The CIFAR-10 consists of 60,000 32 × 32 color images equally distributed for the following ten classes: airplane, automobile,
bird, cat, deer, dog, frog, horse, ship, and truck. Example images for the two datasets are shown in <Ref> and <Ref>, respectively.
Problem Description
Let n_f be the number of features, n_c be the number of classes, and Δ_n_c be the n_c-dimensional unit simplex. Denote a set of data by {y^(k), c^(k)}_k=1^N ⊂ ℝ^n_f × Δ_n_c, where y^(k) and c^(k) are the input feature and target output label, respectively. In our experiments, we consider two feature extractors that enhance the features y^(k) by propagating them into a higher dimensional space ℝ^n_p. The first feature extractor is the random feature model (RFM) <cit.>. It applies a nonlinear transformation given by
ϕ_RFM(y^(k)) = σ(K y^(k) + b),
where σ is the element-wise ReLU activation function, and K ∈ ℝ^n_p × n_f and b ∈ ℝ^n_p are randomly generated.
The goal of the supervised classification problem is to train a softmax classifier
s(X, ϕ(y^(k))) = exp(X ϕ(y^(k))) / (1_n_c 1_n_c^⊤ exp(X ϕ(y^(k))))
such that s(X, ϕ(y^(k))) ≈ c^(k). Here X are model parameters, the exp and division are applied element-wise, 1_n_c is an n_c-dimensional vector of all ones, and ϕ: ℝ^n_f → ℝ^n_p is a feature extractor.
To this end, we first consider the sample average approximation (SAA) <cit.> of an MLR problem formulated as
min_X ∈ ℝ^n_c × n_p F(X) = - 1/N ∑_k=1^N c^(k)^⊤ log( s(X, ϕ(y^(k))) )
= 1/N ∑_k=1^N [ (c^(k)^⊤ 1_n_c) log( 1_n_c^⊤ exp(X ϕ(y^(k))) ) - c^(k)^⊤ X ϕ(y^(k)) ]
= 1/N ∑_k=1^N [ log( 1_n_c^⊤ exp(X ϕ(y^(k))) ) - c^(k)^⊤ X ϕ(y^(k)) ],
where the log operation is applied element-wise, and we use the fact that c^(k)^⊤ 1_n_c = 1 since c^(k) ∈ Δ_n_c.
The feature extractor is assumed to be fixed since the focus is on the log-sum-exp minimization problem.
We vectorize the variable x = vec(X) so that the MLR problem becomes
min_x ∈ ℝ^(n_c n_p) f(x) = 1/N ∑_k=1^N [ log( 1_n_c^⊤ exp(A^(k) x) ) - c^(k)^⊤ A^(k) x ],
which is of the form of (<ref>) and where A^(k) = ϕ(y^(k))^⊤ ⊗ I_n_c.
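As a sanity check on the Kronecker structure above, the following small Python sketch builds A^(k) = ϕ(y^(k))^⊤ ⊗ I_n_c for one randomly generated (illustrative) example and verifies that A^(k) vec(X) reproduces X ϕ(y^(k)) under column-major vectorization.

import numpy as np

n_c, n_p = 10, 1000
rng = np.random.default_rng(0)
phi_k = rng.standard_normal(n_p)              # extracted features phi(y^(k))
c_k = np.eye(n_c)[3]                          # one-hot target c^(k)
x = 0.01 * rng.standard_normal(n_c * n_p)     # x = vec(X), column-major

A_k = np.kron(phi_k[None, :], np.eye(n_c))    # A^(k) = phi_k^T (kron) I_{n_c}
z = A_k @ x
assert np.allclose(z, x.reshape((n_c, n_p), order="F") @ phi_k)

# per-example log-sum-exp loss and its gradient A^(k)T (softmax(z) - c^(k))
loss_k = np.log(np.exp(z - z.max()).sum()) + z.max() - c_k @ z
p = np.exp(z - z.max()); p /= p.sum()
grad_k = A_k.T @ (p - c_k)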
Experimental Results In the MLR experiments, the line search iterative solvers stop when the norm of gradient is below 10^-14 or after 3,000 work units. We stop the CG and Lanczos scheme when the norm of the relative residual drops below 10^-3 or after 20 iterations.
We first perform a small-scale experiment in which only N=100 training data points are used, and a random feature model with dimension m=1,000 is applied. Since under this setup the data can be fit perfectly to achieve a zero training error, the model predictions (<ref>) are close to standard basis vectors near an optimum. In this situation, the Hessian is close to a zero matrix, and the robustness of the solvers can be tested. The results are reported in <Ref> and <Ref>. In <Ref>, one of the results for the standard Newton-CG scheme is not shown, as it fails to converge near the end. This is because the Hessian vanishes and consequently, the second-order approximation is unbounded from below. The natural gradient descent method has the slowest convergence and has yet to converge at the end. Both the standard modified Newton-Krylov method and LSEMINK achieve the stopping criteria under the specified work units. In particular, LSEMINK has superior convergence, with objective function values up to five orders of magnitude smaller than the second-best method during optimization. LSEMINK also has the fastest time-to-solution. This demonstrates the effectiveness of LSEMINK and the efficacy of its modified Hessian over the standard one. SeDuMi and, in particular, SDPT3 can achieve very accurate results, but their runtime is about 15 times more than that of LSEMINK. MOSEK fails to obtain a solution.
We then experiment with N=50,000 training data and 10,000 validation data. For the MNIST dataset, we use an RFM to propagate the features to an m=1,000-dimensional space. For the CIFAR-10 dataset, features with dimension m=9,216 are extracted from the pool5 layer of a pre-trained AlexNet. Here, different feature extractors are used for the two datasets because a better validation accuracy can be achieved. In <Ref>, the results for an MLR problem are illustrated. In <Ref>, we report the performance for an MLR problem with a Tikhonov regularization term α/2 ‖x‖_2^2, where α = 10^-3.
On our state-of-the-art laptop, the CVX solvers cannot complete the experiments within thirty minutes, while the line search methods finish in thirty seconds.
Hence, we focus on the latter methods in this test. The figures show that the L^2 natural gradient descent method is the slowest. The standard Newton-CG and standard modified Newton-Krylov have good convergence results on one dataset but not the other. In contrast, LSEMINK is very competitive on both datasets. Specifically, it has good initial convergence where the objective function value is up to an order of magnitude smaller than the second-best scheme in the first few iterations. Moreover, its
results are comparable with the other methods in terms of final training error, training accuracy, validation accuracy, and norm of gradient.
§.§ Experiment 2: Geometric Programming
We consider a log-sum-exp minimization problem which commonly arises in geometric programming <cit.> and is used to test optimization algorithms <cit.>. In particular, it is formulated as
min_x η log( 1_m^⊤ exp((A x + b)/η) ),
where x ∈ ℝ^n, A ∈ ℝ^m × n, b ∈ ℝ^m, and η controls the smoothness of the problem.
In particular, when η → 0 the objective function converges to the point-wise maximum function max(A x + b) and its Hessian vanishes.
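The following short Python sketch (with randomly generated A, b, x) evaluates the η-smoothed objective stably via the usual max-shift; it illustrates how the objective approaches max(Ax+b) from above as η decreases.

import numpy as np

def smoothed_max(x, A, b, eta):
    z = (A @ x + b) / eta
    m = z.max()                                  # max-shift for stability
    return eta * (m + np.log(np.exp(z - m).sum()))

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)
x = rng.standard_normal(20)
for eta in (1e-1, 1e-3, 1e-5):
    gap = smoothed_max(x, A, b, eta) - (A @ x + b).max()
    print(f"eta={eta:.0e}: objective - max = {gap:.3e}")   # -> 0 as eta -> 0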
We follow the experimental setups in <cit.>, which use m=100, n=20, and generate the entries of A and b randomly. We perform the experiments with small values of η to test the robustness of the methods. In particular, we test with η=10^-5, 10^-3, and 10^-1, respectively. We stop the line search iterative schemes after 10,000 work units. The CG and Lanczos schemes stop when the relative residual drops below 10^-3 or after 20 iterations.
The experimental results are shown in <Ref> and <Ref>. We see that the experiments are very challenging as the standard Newton-CG and all the CVX solvers cannot return a solution in some or all the experiments. In particular, the standard Newton-CG breaks in the first iteration in two of the experiments. This is because the quadratic approximation is unbounded from below. Both SeDuMi and SDPT3 fail in some of the experiments. MOSEK fails in all the experiments. When the CVX solvers succeed in returning a solution, they have significantly longer runtime (up to 60 times slower) compared to the line search methods. Similar to the previous experiments, L^2 natural gradient descent method has the slowest convergence and has yet to converge after the specified work units. The standard modified Newton-Krylov and LSEMINK are robust in the experiments and can return accurate solutions for η=10^-3 and 10^-1. This indicates the effectiveness of Hessian modification in handling challenging optimization problems. Moreover, LSEMINK converges faster than the comparing standard modified Newton-Krylov method in the early stage. This indicates the effectiveness of the proposed Hessian modification over the standard one. However, we see that when η=10^-3 and 10^-5, LSEMINK and all comparing methods cannot return a solution with the desired norm of gradient. This is because for a small η, the objective function is close to being nonsmooth. In contrast, the convergence of gradient based methods like LSEMINK requires the differentiability of the objective function.
§ CONCLUSION
We present LSEMINK, a modified Newton-Krylov algorithm tailored for optimizing the log-sum-exp function for a linear model. The novelty of our approach is incorporating a Hessian shift in the row space of the linear model. This does not change the minimizers and renders the quadratic approximation to be bounded from below and the overall scheme to provably converge to a global minimum under standard assumptions. Since the update direction is computed using Krylov subspace methods which only require matrix-vector products with the linear model, LSEMINK is applicable to large-scale problems. Numerical experiments on image classification and geometric programming illustrate that LSEMINK has significantly faster initial convergence than standard Newton-Krylov methods, which is particularly attractive in applications like machine learning, and considerably reduces the time-to-solution and is more scalable compared to DCP solvers and natural gradient descent. Also, LSEMINK is more robust to ill-conditioning arising from the nonsmoothness of the problem. We provide a MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>.
§ ACKNOWLEDGEMENTS
This work was supported in part by NSF awards DMS 1751636, DMS 2038118, AFOSR grant FA9550-20-1-0372, and US DOE Office of Advanced Scientific Computing Research Field Work Proposal 20-023231. The authors would like to thank Samy Wu Fung for sharing the code for propagating the features of the CIFAR-10 dataset with AlexNet.
|
http://arxiv.org/abs/2307.04994v1 | 20230711031608 | Evaluating Summary Statistics with Mutual Information for Cosmological Inference | [
"Ce Sui",
"Xiaosheng Zhao",
"Tao Jing",
"Yi Mao"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
Evaluating Summary Statistics with Mutual Information for Cosmological Inference
Ce Sui, Xiaosheng Zhao, Tao Jing, Yi Mao
Department of Astronomy, Tsinghua University, Beijing 100084, China
Correspondence: Ce Sui <[email protected]>
Keywords: statistical inference, mutual information
The ability to compress observational data and accurately estimate physical parameters relies heavily on informative summary statistics. In this paper, we introduce the use of mutual information (MI) as a means of evaluating the quality of summary statistics in inference tasks. MI can assess the sufficiency of summaries, and provide a quantitative basis for comparison. We propose to estimate MI using the Barber-Agakov lower bound and normalizing flow based variational distributions. To demonstrate the effectiveness of our method, we compare three different summary statistics (namely the power spectrum, bispectrum, and scattering transform) in the context of inferring reionization parameters from mock images of 21 cm observations with Square Kilometre Array. We find that this approach is able to correctly assess the informativeness of different summary statistics and allows us to select the optimal set of statistics for inference tasks.
§ INTRODUCTION
Statistical inferences in cosmology consist of two parts. Firstly, summary statistics are selected to extract relevant information from raw observed data. These statistics are then used to infer parameters for a given physical model. The choice of summary statistics is crucial for obtaining better constraints on these physical parameters. Many summary statistics have been proposed to extract information from various types of astronomical datasets. For Cosmic Microwave Background (CMB) studies, analyses have largely focused on the power spectrum <cit.>. However, other cosmological fields, such as the 21 cm signal, are expected to be highly non-Gaussian, necessitating more informative summary statistics. As a result, new summary statistics have been proposed in 21 cm cosmology, such as the bispectrum <cit.> and Minkowski Functionals <cit.>. Furthermore, neural networks are being explored for learning summaries from input images in the context of cosmological inference <cit.>.
An important task is to predict and compare the effectiveness of different summary statistics in constraining target physical parameters. There is no universal way to do that in a given inference task, as summary statistics are often derived from different frameworks. Traditional approaches for comparing new statistics often involve posterior comparisons and Fisher analysis <cit.>. These methods typically consider one set of fiducial parameters. The former approach relies on inference results from mock observations and assesses the effectiveness of different statistics by examining their posteriors. The latter approach often uses Gaussian assumptions and calculates the Fisher information at the fiducial parameter. However, predictions obtained from these methods only consider a single point in the parameter space, which might not fully capture the overall performance of the statistics. One way to evaluate statistics across a large parameter space is through regression performance. This involves assessing the optimal performance that the statistics can achieve in predicting parameters across the entire parameter range <cit.>. However, this approach only considers point estimates and does not provide a comprehensive analysis of statistical performance.
A similar problem is extensively studied in the machine learning community in the context of representation learning. The goal of representation learning is to find a low-dimensional representation that preserves most of the information from the original data. It has been shown that such task can be framed as a problem of maximizing the mutual information (MI) between the learned summaries and the original data <cit.>. This suggests that we may use a similar metric to quantify the effectiveness of different summary statistics.
In this study, we introduce a novel approach to compare different summary statistics by estimating the mutual information between the statistics and target parameters in a given inference task. This method enables a quantitative comparison of the effectiveness of various statistics by considering their statistical dependence on parameters. Unlike many previous methods that focus on a single point in the parameter space, our approach takes into account the entire parameter range, providing a comprehensive evaluation of how well the summary statistics capture the necessary information for inference.
§ METHOD
In statistical inference we seek to estimate the parameters θ of a physical model given some input observation x. In Bayesian inference, this requires estimating the posterior distribution p(θ|x). However, the original observation is typically high-dimensional and contains a substantial amount of irrelevant information. To address this, we often utilize summary statistics to compress the data into more compact representations s that contain the most critical information about the parameter.
To achieve optimal performance, summary statistics should capture all the relevant information contained in the original observation x regarding the parameter of interest. This leads to the definition of sufficient statistics as those that satisfy the condition
p(θ|x,s)=p(θ|s),
which implies that once the summary statistics are given, the original data does not provide additional information. If a summary statistic closely resembles the sufficient statistic, it is likely to provide more accurate and reliable information about the underlying parameter.
§.§ Mutual information as a probe of Sufficient Statistics
Mutual information is a fundamental concept in statistical inference and information theory that measures the amount of information one random variable contains about another. Mutual information is defined as
I(x; y) =KL(p(x, y)||p(x) p(y))
=𝔼_p(x, y)[logp(x, y)/p(x) p(y)],
where x and y are random variables, and p(x, y), p(x), and p(y) are their joint and marginal probability distributions, respectively. Mutual information quantifies the difference between the joint distribution and the product of the marginal distributions using the Kullback-Leibler divergence (KLD). If x and y are independent, their mutual information is zero, indicating that knowing x does not provide any information about y. Conversely, if x contains information about y, mutual information is non-zero, and a higher value indicates a stronger dependence between the two variables.
Mutual information can also be used to define sufficient statistics. For Bayesian inference, we can show that Equation <ref> is equivalent to
I(θ;x)=I(θ;s(x)),
which implies that sufficient statistics s contain all the information about θ in the original observation x. Thus, by measuring the mutual information between a summary statistic and the target parameters, we can evaluate the sufficiency of the statistic. This interpretation provides a powerful tool for selecting summary statistics and assessing their suitability for use in statistical inference.
§.§ Mutual information estimation
Estimating mutual information is a challenging task in practice as it requires the computation of the KLD between complex distributions, which are often unknown. For many cosmological inference tasks we do not have access to a tractable joint distribution of the physical parameters and the data summaries, p(θ,x). Hence, it is infeasible to directly evaluate the mutual information between them. However, we can use variational distributions to approximate the real distributions, and obtain variational bounds of mutual information.
Assuming that we have a variational distribution q(θ|s), we can utilize it to replace the actual conditional distribution p(θ|s) and prove that it generates a lower bound on MI. Specifically, we have
I(θ; s) = 𝔼_p(θ, s)[log p(θ|s)/p(θ)]
= 𝔼_p(θ, s)[log q(θ|s) p(θ|s)/(p(θ) q(θ|s))]
= 𝔼_p(θ, s)[log q(θ|s)/p(θ)]
+ 𝔼_p(s)[KL(p(θ|s) ‖ q(θ|s))]
≥ 𝔼_p(θ, s)[log q(θ|s)] + h(θ),
where h(θ)=-𝔼_p(θ)[log p(θ)] is the differential entropy of θ and the last inequality is due to the non-negativity of the KLD. This is often referred as the Barber-Agakov lower bound <cit.>. It is evident from this equation that the replacement of the real conditional distribution with a variational model results in a lower bound on MI, which is only tight when p(θ | s)=q(θ | s). In cosmological inference problems, calculating the differential entropy of the prior distribution of θ is relatively straightforward as it is typically tractable. For the first component, we can fit a highly flexible generative model to a large number of parameter-statistic pairs, yielding a variational distribution that approximates the actual distribution closely. In this work, we use a masked autoregressive flow (MAF; ) as the variational distribution. We optimize MAFs by minimizing the negative log-probability, which is equivalent to maximizing the lower bound. In principle, alternative loss functions <cit.> can also be employed.
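As an illustration, the following Python sketch estimates the Barber-Agakov lower bound on toy data, using a conditional diagonal-Gaussian q(θ|s) parameterized by a small network as a stand-in for the MAF; the architecture, data-generating process, and hyperparameters are illustrative assumptions only.

import torch
import torch.nn as nn

d_s, d_theta, n = 8, 2, 5000
theta = torch.rand(n, d_theta)                       # theta ~ U(0,1)^2, so h(theta) = 0
s = theta @ torch.randn(d_theta, d_s) + 0.1 * torch.randn(n, d_s)

net = nn.Sequential(nn.Linear(d_s, 64), nn.ReLU(), nn.Linear(64, 2 * d_theta))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):                             # maximize E[log q(theta|s)]
    mu, log_std = net(s).chunk(2, dim=1)
    logq = torch.distributions.Normal(mu, log_std.exp()).log_prob(theta).sum(1)
    loss = -logq.mean()
    opt.zero_grad(); loss.backward(); opt.step()

mi_lower = -loss.item() + 0.0                        # BA bound: E[log q] + h(theta)
print(f"BA lower bound on I(theta; s): {mi_lower:.3f} nats")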
§.§ Relation with other methods
We can also consider other methods commonly used for comparing data summaries as processes of MI estimation. For instance, we can write MI as
I(θ; s) = 𝔼_p(s)[KL(p(θ|s) ‖ p(θ))].
Using this expression, we can estimate MI by assuming p(θ | s) = p̂(θ | s_0), where we replace the true conditional distribution with a posterior distribution that we estimate at a particular observation s_0. In this case, the estimated MI becomes Î(θ ;s)=K L(p̂(θ | s_0) p(θ)), which measures the difference between the estimated posterior and the prior. This is similar to comparing different statistics by examining their posteriors on one mock observation. However, this estimation has high variance since it uses only one estimated posterior distribution to represent the conditional distribution.
Regression performance can also be utilized to formulate a Barber-Agakov lower bound. In the one-dimensional case, if we let q(θ|s) = 𝒩(f(s), 𝔼_p(θ, s)[(θ-f(s))^2]) in equation <ref>, where f(s) is an estimator trained to predict θ from given summary statistics s, then:
I(θ; s) ≥ 𝔼_p(θ, s)[log q(θ|s)] + h(θ)
= h(θ) - 1/2 log 𝔼_p(θ, s)[(θ-f(s))^2]
- 1/2 log(2πe),
where a smaller mean square error produces a larger MI lower bound. This implies that summary statistics that can more accurately predict the parameters in a regression task may contain more information about those parameters. Note that if we further assume the estimator is unbiased and efficient, we can also derive a similar relation between MI and Fisher information <cit.>. In our experiments, we train Light Gradient Boosting Machines (LGBM; ) to predict parameters directly from data summaries. We use the regression performance to validate the MI estimation from MAFs.
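For intuition, here is a two-line Python sketch of this regression-based bound: for a uniform prior θ ~ U(0,1) (so h(θ)=0) and a hypothetical test MSE, the implied lower bound on I(θ; s) in nats is

import numpy as np
h_theta, mse = 0.0, 0.004   # illustrative: theta ~ U(0,1) so h = 0; hypothetical MSE
mi_lb = h_theta - 0.5 * np.log(mse) - 0.5 * np.log(2.0 * np.pi * np.e)
print(f"regression-based MI lower bound: {mi_lb:.3f} nats")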
§ DATA
As a demonstration of the this method, we consider an inference problem in 21 cm cosmology, where we need to constrain two Reionization parameters based on mock SKA (Square Kilometre Array) images. The 21 cm lightcones are simulated using the publicly available code 21cmFAST[https://github.com/andreimesinger/21cmFAST] <cit.>. The simulations were performed on a cubic box of 100 comoving Mpc on each side, with 66^3 grid cells. In this work, we use coeval boxes at redshift 11.76. We consider the following reionization parameters in simulations and inferences:
(1) ζ, the ionizing efficiency. We vary ζ as 10 ≤ζ≤ 250.
(2) T_ vir, the minimum virial temperature of halos that host ionizing sources. We vary this parameter as 4 ≤log _ 10 ( T_ vir / K ) ≤ 6.
To include observational effects, signals with three different levels of signal contamination are considered here:
(i) signal with k_⊥ = 0 mode removal (removal of the mean for each frequency slice);
(ii) signal with the SKA thermal noise;
(iii) signal with the SKA thermal noise + residual foreground after foreground removal with the singular value decomposition.
The thermal noise is produced using the Tools21cm[https://github.com/sambit-giri/tools21cm] <cit.> package by considering the SKA1-Low configuration. The foreground simulation is based on the GSM-building model in <cit.>. We generate 10000 data cubes for each case with different reionization parameters and randomized observational effects.
In this work, we try to evaluate the effectiveness of three different summary statistics in the inference reionization parameters: power spectrum (PS), bispectrum (BS) and scattering transform (ST). For BS, we find that it may contain many uninformative features, which can cause overfitting issues. To address this problem, we conduct feature selection using LGBM by comparing feature importance for parameter estimation in regression tasks. Note that the MI estimates after feature selection are still lower bounds due to data processing inequality. The details of these summary statistics and the process of feature selection are given in Appendix <ref>.
§ RESULTS
Comparisons between different summary statistics: We present our MI estimation results for different data summaries and contamination levels in Figure <ref>. Our results indicate that the presence of observational effects can significantly reduce the amount of information available about reionization parameters in our mock images. As we increase the level of contamination, the estimated MI decreases accordingly. Furthermore, we find that in all three datasets used in our experiments, the ST is better at extracting physical information than correlation functions. This is an expected result since ST is designed to capture more spatial information <cit.>. The performance of the correlation functions is consistent with previous studies <cit.>, where PS and BS are evaluated in a similar inference task. We also notice that the MI estimation of BS in the highest contamination case is negative, while MI should be non-negative by definition. This is because we are estimating a lower bound and the variational distribution is not exact. This result indicates that BS in this case provides nearly no information about the reionization parameters.
Validation with regression tasks: Our results are largely consistent with theoretical interpretations and previous works. To further validate their correctness, we trained LGBM to predict reionization parameters and evaluated the regression performance for each dataset. As mentioned in Section <ref>, regression performance can also be used as an estimator for mutual information by selecting a specific variational distribution. We used the R^2 score to evaluate the regression performance of LGBM and present the results in Figure <ref>, where we also plotted the previous MI estimates. The two MI estimators are observed to be consistent with each other. However, we note that the R^2 score was unable to show the difference between summary statistics when mutual information was relatively high.
§ SUMMARY
Our study presents a practical framework for evaluating the effectiveness of summary statistics in a specific inference problem. We accomplish this by estimating the mutual information between these statistics and the target parameters. Unlike existing approaches that rely on inference performance assuming fiducial parameters, our method provides a more robust assessment. We validate this methodology by applying it to the task of inferring reionization parameters from simulated SKA images. Our results demonstrate that the MI estimates agree with previous works and regression-based verification. This novel framework introduces a valuable tool for evaluating the informativeness of summary statistics in the field of cosmology.
§ ACKNOWLEDGEMENTS
This work is supported by the National SKA Program of China (grant No. 2020SKA0110401), NSFC (grant No. 11821303), and the National Key R&D Program of China (grant No. 2018YFA0404502). CS thanks Richard Grumitt for useful comments. We acknowledge the Tsinghua Astrophysics High-Performance Computing platform at Tsinghua University for providing computational and data storage resources that have contributed to the research results reported within this paper.
§ SUMMARY STATISTICS
In this work, we try to evaluate the effectiveness of three different summary statistics in the inference of reionization parameters: power spectrum, bispectrum and scattering transform.
Power spectrum (PS) P_21(k) is the most commonly used statistic, defined as:
⟨δ_21(k) δ_21(k^')⟩=(2 π)^3 δ^D(k+k^') P_21(k),
where δ_21(k) is the Fourier transform of the 21 cm brightness temperature field, δ^D is the Dirac delta function and ⟨...⟩ represents an ensemble average. We use the Tools21cm <cit.> package to calculate the spherically averaged power spectrum of the mock observation.
Bispectrum (BS) is the Fourier dual of three-point correlation function, defined as:
⟨δ_21(k_1) δ_21(k_2) δ_21(k_3)⟩
=(2 π)^3 δ(k_1+k_2+k_3) B(k_1, k_2, k_3).
The BS is a function of three k vectors that form a closed triangle (k_1+k_2+k_3=0). In this work, we consider only isosceles triangles and use the python package Pylians[https://github.com/franciscovillaescusa/Pylians3]<cit.> to calculate the bispectrum of mock observation. We normalize BS following <cit.> to separate non-Gaussian information. In our experiments we also consider a combination PS and BS by directly concatenating them and refer it as BS&PS. For BS, we observe that it may contain some uninformative features (here each feature represents the bispectrum value calculated for a specific triangle configuration), which can lead to overfitting issues. To tackle this problem, we employ feature selection using the LGBM. We first add random noise features to the bispectrum and conduct regression with LGBM. The regression process is repeated multiple times to calculate the mean and variance of the importance of the random noise features. Subsequently, we identify all features that fall within the 3-sigma range of the importance of the random features and exclude them from the fitting of MAFs.
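A hedged Python sketch of this screening procedure is given below; the number of noise features, repeats, and LGBM hyperparameters are illustrative choices rather than the exact settings used in our experiments.

import numpy as np
import lightgbm as lgb

def select_bispectrum_features(B, y, n_noise=20, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    imp_real = np.zeros(B.shape[1])
    imp_noise = []
    for r in range(n_repeats):
        noise = rng.standard_normal((B.shape[0], n_noise))
        model = lgb.LGBMRegressor(n_estimators=200, random_state=r)
        model.fit(np.hstack([B, noise]), y)
        imp = model.feature_importances_
        imp_real += imp[:B.shape[1]] / n_repeats
        imp_noise.append(imp[B.shape[1]:])
    imp_noise = np.concatenate(imp_noise)
    cutoff = imp_noise.mean() + 3.0 * imp_noise.std()  # 3-sigma screening
    return imp_real > cutoff                           # mask of retained features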
The solid harmonic wavelet scattering transform (ST), first introduced by <cit.>, is a method for compressing data for inference by convolving the original fields with a cascade of solid harmonic wavelets, performing non-linear modulus on the convolved fields, and integrating over all coordinates. It is an effective way to capture information at different scales and orientations, producing coefficients that are invariant to both translation and rotation. The ST is implemented with the Kymatio [https://www.kymat.io/] <cit.> package.
|
http://arxiv.org/abs/2307.07320v1 | 20230714125547 | Adaptive Linear Estimating Equations | [
"Mufang Ying",
"Koulik Khamaru",
"Cun-Hui Zhang"
] | math.ST | [
"math.ST",
"cs.LG",
"stat.ML",
"stat.TH"
] |
PART:
Adaptive Linear Estimating Equations
Mufang Ying, Koulik Khamaru, and Cun-Hui Zhang
=====================================================================================================================================================================================================================================================================================================================================================================================
Sequential data collection has emerged as a widely adopted technique for enhancing the efficiency of data gathering processes. Despite its advantages, such data collection mechanism often introduces complexities to the statistical inference procedure. For instance, the ordinary least squares (OLS) estimator in an adaptive linear regression model can exhibit non-normal asymptotic behavior, posing challenges for accurate inference and interpretation. In this paper, we propose a general method for constructing debiased estimator which remedies this issue. It makes use of the idea of adaptive linear estimating equations, and we establish theoretical guarantees of asymptotic normality, supplemented by discussions on achieving near-optimal asymptotic variance. A salient feature of our estimator
is that in the context of multi-armed bandits, our estimator retains the non-asymptotic performance of the least square estimator while obtaining asymptotic normality property. Consequently, this work helps connect two fruitful paradigms of adaptive inference: a) non-asymptotic inference using concentration inequalities and b) asymptotic inference via asymptotic normality.
§ INTRODUCTION
Adaptive data collection arises as a common practice in various scenarios, with a notable example being the use of (contextual) bandit algorithms. Algorithms like these aid in striking a balance between exploration and exploitation trade-offs within decision-making processes, encompassing domains such as personalized healthcare and web-based services <cit.>. For instance, in personalized healthcare, the primary objective is to choose the most effective treatment for each patient based on their individual characteristics, such as medical history, genetic profile, and living environment. Bandit algorithms can be used to allocate treatments based on observed response, and the algorithm updates its probability distribution to incorporate new information as patients receive treatment and their response is observed. Over time, the algorithm can learn which treatments are the most effective for different types of patients.
Although the adaptivity in data collection improves the quality of data, the sequential nature (non-iid) of the data makes the inference procedure quite challenging <cit.>. There is a lengthy literature on the problem of parameter estimation in the adaptive design setting. In a series of work <cit.>, the authors studied the consistency of the least squares estimator for an adaptive linear model. In a later work, Lai <cit.> studied the consistency of the least squares estimator in a nonlinear regression model. The collective wisdom of these papers is that, for adaptive data collection methods, standard estimators are consistent under a mild condition on the maximum and minimum eigenvalues of the covariance matrix <cit.>. In a more recent line of work <cit.>, the authors provide a high probability upper bound on the ℓ_2-error of the least squares estimator for a linear model. We point out that, while the high probability bounds provide a quantitative behavior of OLS, these results assume a stronger sub-Gaussian assumption on the noise variables.
The problem of inference, i.e. constructing valid confidence intervals, with adaptively collected data is much more delicate. Lai and Wei <cit.> demonstrated that for a unit root autoregressive model, which is an example of adaptive linear regression models, the least squares estimator doesn't achieve asymptotic normality. Furthermore, the authors showed that for a linear regression model, the least squares estimator is asymptotically normal when the data collection procedure satisfies a stability condition. Concretely, letting _i denote the covariate associated with i^th sample, the authors require
B_n^-1 S_n p⟶ I_d,
where S_n = ∑_i = 1^n x_i x_i^⊤ and {B_n}_n ≥ 1 is a
sequence of non-random positive definite matrices.
Unfortunately, in many scenarios, the stability condition (<ref>)
is violated <cit.>. Moreover, in practice, it might be difficult to verify whether the stability condition (<ref>) holds or not. In another line of research <cit.>, the authors assume knowledge of the underlying data collection algorithm and provide asymptotically valid confidence intervals. While this approach offers intervals under a much weaker assumption on the underlying model, full knowledge of the data collection algorithm is often unavailable in practice.
Connections to online debiasing based methods:
In order to produce valid statistical inference when the stability condition <ref> does not hold, some authors <cit.> utilize the idea of online debiasing. At a high level, the online debiased estimator
reduces bias from an initial estimate (usually the least squares estimate) by adding some correction terms, and the online debiasing procedure does not require the knowledge of the data generating process. Although this procedure guarantees asymptotic reduction of bias to zero, the bias term's convergence rate can be quite slow.
In this work, we consider estimating the unknown parameter in an adaptive linear model by using a set of Adaptive Linear Estimating Equations (ALEE). We show that our proposed ALEE estimator achieves asymptotic normality without knowing the exact data collection algorithm while addressing the slowly decaying bias problem in online debiasing procedure.
§ BACKGROUND AND PROBLEM SET-UP
In this section, we provide the background for our problem and set up a few notations. We begin by defining the adaptive data collection mechanism for linear models.
§.§ Adaptive linear model
Suppose a scalar response variable y_t is linked to a covariate vector x_t ∈ ℝ^d at time t via the linear model:
y_t = x_t^⊤ θ^* + ϵ_t for t ∈ [n],
where θ^* ∈ ℝ^d is the unknown parameter of interest.
In an adaptive linear model, the regressor x_t at time t is assumed to be a (unknown) function of the prior data points {x_1, y_1, …, x_t-1, y_t-1} as well as additional sources of randomness that may be present in the data collection process. Formally, we assume there is an increasing sequence of σ-fields {ℱ_t}_t ≥ 0 such that
σ(x_1, y_1, …, x_t-1, y_t-1, x_t) ⊆ ℱ_t-1 for t ∈ [n].
For the noise variables {ϵ_t}_t ≥ 1 that appear in equation (<ref>), we impose the following conditions
𝔼[ϵ_t|ℱ_t-1]=0, 𝔼[ϵ_t^2|ℱ_t-1]=σ^2 and sup_t≥ 1 𝔼[|ϵ_t/σ|^2+δ|ℱ_t-1] < ∞.
The above condition is relatively mild compared to a sub-Gaussian condition.
Examples of adaptive linear models arise in various problems, including multi-arm and contextual bandit problems, dynamical input-output systems, adaptive approximation schemes and time series models. For instance, in the context of the multi-armed bandit problem, the design vector x_t is one of the basis vectors {e_k}_k ∈ [d], representing an arm being pulled, while θ^*, y_t represent the true mean reward vector and reward at time t, respectively.
§.§ Adaptive linear estimating equations
As we mentioned earlier, the OLS estimator can fail to achieve asymptotic normality due to the instability of the covariance matrix with adaptively collected data. To get around this issue, we consider a different approach, ALEE (adaptive linear estimating equations). Namely, we obtain an estimate θ̂ by solving a system of linear estimating equations with adaptive weights,
∑_t=1^n w_t(y_t - x_t^⊤ θ) = 0.
Here the weight w_t ∈ ℝ^d is chosen in a way that w_t ∈ ℱ_t-1 for t ∈ [n]. Let us now try to gain some intuition behind the construction of ALEE. Rewriting equation (<ref>), we have
{∑_t=1^n w_t x_t^⊤} · (θ̂ - θ^⋆) =
∑_t = 1^n w_t ϵ_t.
Notably, the choice of w_t ∈ ℱ_t-1 makes ∑_t = 1^n w_t ϵ_t the sum of a martingale difference sequence. Our first theorem postulates conditions on the weight vectors {w_t}_t ≥ 1 such that the right-hand side of (<ref>) converges to a normal distribution asymptotically. Throughout the paper, we use the shorthand X_t = (x_1,…,x_t)^⊤ ∈ ℝ^t× d,
W_t = (w_1,…,w_t)^⊤ ∈ ℝ^t× d.
Suppose condition (<ref>) holds and the predictable sequence {w_t}_1 ≤ t ≤ n satisfies
max_t≤ n ‖w_t‖_2 = o_p(1) and ‖I_d - W_n^⊤ W_n‖_op = o_p(1).
Let A_w = V_w U_w^⊤ X_n with W_n = U_w Σ_w V_w^⊤ being the SVD of W_n. Then,
A_w(θ̂ - θ^*)/σ̂ d⟶ 𝒩(0, I_d),
where σ̂ is any consistent estimator for σ.
Proof.
Invoking the second part of the condition (<ref>), we have that Σ_w is invertible for large n, and ‖V_w Σ_w^-1 V_w^⊤ - I_d‖_op = o_p(1). Utilizing the expression (<ref>), we have
A_w(θ̂ - θ^*)/σ̂ = V_w Σ_w^-1 V_w^⊤ W_n^⊤ X_n(θ̂ - θ^*)/σ̂ = V_w Σ_w^-1 V_w^⊤ ∑_t=1^n w_t ϵ_t/σ̂.
Invoking the stability condition on the weights {w_t} and using the fact that ∑_t=1^n w_t ϵ_t is the sum of a martingale difference sequence, we conclude from the martingale central limit theorem that
∑_t=1^n w_t ϵ_t/σ d⟶ 𝒩(0, I_d).
Combining the last equation with
‖V_w Σ_w^-1 V_w^⊤ - I_d‖_op = o_p(1) and using Slutsky's theorem yields
A_w(θ̂ - θ^*)/σ d⟶ 𝒩(0, I_d).
The claim of Proposition <ref> now follows from Slutsky's theorem.
A few comments regarding Proposition <ref> are in order. A straightforward calculation shows
A_w^⊤ A_w = X_n^⊤ P_w X_n ≼ S_n, where P_w = W_n (W_n^⊤ W_n)^-1 W_n^⊤.
In words, the volume of the confidence region based on (<ref>) is always larger than the confidence region generated by the least squares estimate. Therefore, the ALEE-based inference, which is consistently valid, exhibits a reduced efficiency in cases where both types of confidence regions are valid. Compared with the confidence region based on OLS, the advantage of the ALEE approach is to provide flexibility in the choice of weights to guarantee the validity of the CLT conditions (<ref>).
Next, note that the matrix A_w is asymptotically equivalent to the matrix W_n^⊤ X_n (see equation (<ref>)) under the stability condition (<ref>). The benefit of this reformulation is that it helps us better understand the efficiency of ALEE compared with the OLS. This has led us to define a notion of affinity between the weights {w_t}_t ≥ 1 and covariates {x_t}_t ≥ 1 for a better understanding of the efficiency of ALEE and ways to design nearly optimal weights, as will become clear in the next section.
Finally, it is straightforward to obtain a consistent estimate for σ. For instance, assuming log(λ_max(X_n^⊤ X_n))/n a.s.⟶ 0 and the noise condition (<ref>), we have
σ̂^2 := 1/n ∑_t = 1^n (y_t - x_t^⊤ θ̂_LS)^2 a.s.⟶ σ^2.
See <cit.> for a complete proof of the result;
here, θ̂_LS is the least squares estimate.
§ MAIN RESULTS
In this section, we propose methods to construct weights {w_t}_t ≥ 1 which satisfy the stability property (<ref>), and study the resulting ALEE. Section <ref> is devoted to the multi-arm bandit case, Section <ref>
to an autoregressive model, and Section <ref>
to the contextual bandit case. Before delving into details, let us try to understand intuitively how to construct weights that have desirable properties.
The expression (<ref>) reveals that the efficiency of ALEE depends on the projection of the data matrix X_n on W_n. Thus, the efficiency of the approach can be measured by the principal angles between the random projections P_w in (<ref>) and P_x = X_n S_n^-1 X_n^⊤.
Accordingly, we define the affinity 𝒜(W_n, X_n) of the weights {w_t}_t ≥ 1 as the cosine of the largest principal angle, or equivalently
𝒜(W_n, X_n)
= σ_d(P_x P_w)
= σ_d(U_w^⊤ X_n S_n^-1/2),
as the d-th largest singular value of P_x P_w.
Formally, the above definition captures the cosine of the angle between the two subspaces
spanned by the columns of X_n and W_n, respectively <cit.>.
Good weights {w_t}_t ≥ 1 are those with relatively large affinity, or
U_w ∝ X_n S_n^-1/2 (approximately).
§.§ Multi-arm bandits
In the context of the K-arm bandit problem, the sample covariance matrix has a diagonal structure, which means that we can focus on constructing weights {w_t}_t ≥ 1 for each coordinate individually. For an arm k ∈ [K] and round t ≥ 1, define
s_t,k = s_0 + ∑_i = 1^t x_i,k^2 for some positive s_0 ∈ ℱ_0.
Define the k^th coordinate of the weight w_t as
w_t,k = f(s_t,k/s_0) x_t,k/√(s_0) with f(x) = √(log 2/(x · log(e^2 x) · (loglog(e^2 x))^2)).
The intuition behind the above construction is as follows. The discussion near equation (<ref>) indicates that the k^th coordinate of w_t should be proportional to x_t,k/(∑_i ≤ n x^2_i,k)^1/2. However, the weight w_t is required to be predictable, which can only depend on the data points [Note that s_t,k ∈ ℱ_t-1 can be used to construct w_t] up to time t. Consequently, we approximate the sum ∑_i ≤ n x^2_i,k by the partial sum
s_t,k in (<ref>). Finally, note that
w_t,k = f(s_t,k/s_0) · x_t,k/√(s_0) ≈ x_t,k/√(s_t,k).
The logarithmic factors in (<ref>) ensure that the stability conditions (<ref>) hold. In the following theorem, we generalize the above method as a general strategy for constructing weights {w_t}_t ≥ 1 satisfying the stability condition (<ref>).
§.§.§ Stable weight construction strategy
Let f(x)> 0 be a deterministic decreasing function satisfying
∫_1^∞ f^2(x)dx = 1 and ∫_1^∞ f(x)dx = ∞.
With s_0 ∈_0, we define weights w_t, k as
w_t, k = f(s_t, k/s_0) x_t,k/√(s_0) with s_t,k = s_0 + ∑_i = 1^t x_i,k^2.
A key condition that ensures the weights {w_t,k}_t≥ 1 satisfy the desirable stability property (<ref>) is
max_t≤ nf^2(s_t,k/s_0)x_t,k^2/s_0 + max_1≤ t≤ n(1 - f(s_t,k/s_0)/f(s_t-1,k/s_0)) + ∫_s_n,k/s_0^∞ f^2(x) dx = o_p(1).
For multi-armed bandits, this condition is automatically satisfied when both quantities 1/s_0 and s_0/s_n,k converge to zero in probability. Putting together the pieces we have the following result for multi-armed bandits.
Suppose (<ref>) holds, and for some k ∈ [K], 1/s_0 and s_0/s_n,k converge to zero in probability. Then the k^th coordinate θ̂_k of the ALEE estimate,
obtained using weights from equation (<ref>), satisfies
(θ̂_k - θ^*_k) · (√(s_0)/σ̂) ∫_1^s_n,k/s_0 f(x)dx d⟶ 𝒩(0, 1),
where σ̂ is a consistent estimate of σ. Equivalently,
1/(σ̂ (∑_1 ≤ t ≤ n w_t,k^2)^1/2) (∑_t=1^n w_t,k x_t,k) · (θ̂_k - θ_k^*) d⟶ 𝒩(0, 1).
See the Appendix for a proof of this theorem.
A few comments regarding Theorem <ref> are in order. First, consider when an asymptotically optimal allocation rule is used to achieve the optimal regret in <cit.>, with sample size s_n,k ≍ log n for each suboptimal arm, or when a sub-optimal rule is used to achieve s_n,k = polylog(n). The classical martingale CLT applies to the optimal arm (if unique) for such asymptotically optimal or sub-optimal allocation rules, and one may obtain a valid confidence interval for the optimal arm from the standard OLS estimator <cit.>. However, such CIs are not guaranteed for suboptimal arms. The above theorem allows us to construct CIs for all the arms.
Next, while Theorem <ref> holds for any s_0 diverging to infinity but of smaller order than s_n,k, possibly depending on k, the rate of convergence of ∑_t = 1^n w_t,k ϵ_t to normality is improved by choosing a large s_0. In practice, one should choose s_0 slightly smaller than the best known lower bound of s_n,k.
Finally, the choice of the function f determines the efficiency of the ALEE estimator. For instance, taking f(x) = 1/x, we obtain an estimator with asymptotic variance of order 1/{s_0 log^2(s_n,k/s_0)}, which is only better than what one would get using stopping time results by a logarithmic factor. In the next corollary, an improved choice of f yields near optimal variance up to logarithmic terms.
Under the same set of assumptions as in Theorem <ref>, the ALEE estimate, obtained by using f(x) = (β log^β 2)^1/2 {x(log e^2 x)(loglog e^2 x)^1+β}^-1/2 for any β > 0, satisfies
√(4β(log 2)^β/(log(s_n,k/s_0){loglog(s_n,k/s_0)}^1+β)) · √(s_n,k)(θ̂_k - θ_k^*)/σ̂ d⟶ 𝒩(0, 1).
The proof of this corollary follows directly from Theorem <ref>. For s_0=(log n)/(loglog n) in multi-armed bandits with asymptotically optimal allocations, log(s_n,k/s_0)=(1+o(1))loglog s_n,k.
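For illustration, the following Python sketch simulates a two-armed ϵ-greedy bandit and forms the arm-0 ALEE estimate and its normal confidence interval using the β=1 weight function from the corollary; the arm means, s_0, and the exploration rate are illustrative choices, and the noise level σ is assumed known here for simplicity.

import numpy as np

def f(x, beta=1.0):   # weight function from the corollary (f^2 integrates to 1)
    return np.sqrt(beta * np.log(2.0) ** beta
                   / (x * np.log(np.e ** 2 * x)
                        * np.log(np.log(np.e ** 2 * x)) ** (1.0 + beta)))

rng = np.random.default_rng(3)
mu, n, s0, sigma = np.array([0.0, 0.5]), 20000, 25.0, 1.0
means, counts = np.zeros(2), np.zeros(2)
s, num, den, wsq = s0, 0.0, 0.0, 0.0           # running ALEE sums for arm 0
for t in range(n):
    a = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(means))
    y = mu[a] + sigma * rng.standard_normal()
    counts[a] += 1
    means[a] += (y - means[a]) / counts[a]
    if a == 0:                                 # x_{t,0} = 1 when arm 0 is pulled
        s += 1.0                               # s_{t,k} = s0 + sum_i x_{i,k}^2
        w = f(s / s0) / np.sqrt(s0)            # w_{t,k} = f(s_{t,k}/s0) x_{t,k}/sqrt(s0)
        num += w * y; den += w; wsq += w * w

theta_hat = num / den                          # solves sum_t w_t (y_t - theta) = 0
half = 1.96 * sigma * np.sqrt(wsq) / den       # from the equivalent CLT statement
print(f"arm-0 ALEE CI: {theta_hat:.3f} +/- {half:.3f} (truth {mu[0]})")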
§.§.§ Finite sample bounds for ALEE estimators
One may also construct finite sample confidence intervals for each arm by applying concentration bounds. Indeed, for any arm k ∈ [K], we have
{∑_t = 1^n w_t,k x_t,k} · (θ̂_k - θ_k^*) = ∑_t = 1^n w_t,k ϵ_t.
Following the construction of w_t,k ∈ ℱ_t-1, the term ∑_t = 1^n w_t,k ϵ_t is amenable to concentration inequalities if we assume that the noise ϵ_t is sub-Gaussian, i.e.
∀λ ∈ ℝ, 𝔼[e^λϵ_t|ℱ_t-1] ≤ e^σ^2 λ^2/2.
Suppose the sub-Gaussian noise condition (<ref>) is in force. Then for any δ > 0 and λ_0 > 0, the following bound holds with probability at least 1 - δ:
|∑_t = 1^n w_t,k x_t,k| · |θ̂_k - θ_k^*| ≤ σ √((λ_0 + ∑_t = 1^n w_t,k^2) · log((λ_0 + ∑_t = 1^n w_t,k^2)/(δ^2 λ_0))).
Combining Corollary <ref> with β = 1 and Corollary <ref> with λ_0 = 1, we conclude that with probability at least 1 - δ,
|θ̂_k - θ_k^*| ≤ σ √(log(2/δ^2)) · √(2 + log(s_n,k/s_0)) log{2 + log(s_n,k/s_0)}/(√(s_n,k) - √(s_0)),
provided s_0 > 1. Recalling that √(s_n,k) = (s_0 + ∑_t ≤ n x_t,k^2)^1/2, the bound is in the same spirit as existing finite sample bounds for the OLS estimator of arm means <cit.>.
Put simply, the ALEE estimator behaves similarly to an OLS estimator non-asymptotically while retaining asymptotic normality.
§.§ Autoregressive time series
Next, we focus on an autoregressive time series model
y_t = θ^* y_t-1+ ϵ_t for t=1, …, n .
Note that the above model is a special case of the adaptive linear model (<ref>). It is well-known that when θ^* ∈ (-1, 1), the time series model (<ref>) satisfies the stability assumption (<ref>). Consequently, one might use the OLS estimate based confidence intervals <cit.> for θ^*. However, when θ^* = 1 — also known as the unit root case — the stability condition (<ref>) does not hold, and the least squares estimator is not asymptotically normal <cit.>. In other words, when θ^* = 1, the least squares based intervals do not provide correct coverage.
In this section, we apply the ALEE based approach to construct confidence intervals that are valid for θ^* = 1. Similar to previous sections, let s_0 ∈ ℱ_0 and denote s_t = s_0 + ∑_1≤ i≤ t y_i-1^2.
Following a construction similar to the last section, we have the following corollary.
Consider a positive decreasing function f satisfying (<ref>) and assume that a condition similar to (<ref>) holds:
max_t≤ n f^2(s_t/s_0) y_t-1^2/s_0 + max_1≤ t≤ n (1 - f(s_t/s_0)/f(s_t-1/s_0)) + ∫_s_n/s_0^∞ f^2(x) dx = o_p(1).
Then, using the weight w_t = f(s_t/s_0) y_t-1/√(s_0), the ALEE estimate θ̂ satisfies
(θ̂ - θ^*) · (√(s_0)/σ̂) ∫_1^s_n/s_0 f(x)dx d⟶ 𝒩(0, 1),
where σ̂ is a consistent estimate of σ.
Corollary (<ref>) follows from the proof of Theorem <ref>.
§.§ Contextual bandits
In contextual bandit problems, the task of defining adaptive weights satisfying
the stability condition (<ref>) while maintaining affinity large is challenging. Without loss of generality, we assume that _t_2≤ 1. Following the discussion around (<ref>) and using _t as an approximation of _n, we see that a good choice for the weight is _t ≈_t _t^-1/2. However, it is not all clear at the moment why the above choice produces d-dimensional weights _t satisfying the stability condition (<ref>). It turns out that the success of our construction is based on the variability of certain matrix _t. For
F_0-measurable d× d
positive-definite matrix _0
and t ∈ [n], we define
_t = _0 + ∑_i = 1^t _i _i^⊤ and _t =
_t - 1^-1/2_t.
For t∈[n], we define the variability matrix _t as
_t = ( + ∑_i = 1^t
_i_i^⊤)^-1 .
The variability matrix _t comes up frequently in finite sample analysis of the least squares estimator <cit.>, the generalized linear models with adaptive data <cit.>, and in online optimization <cit.>; see comments after Theorem <ref> for a more detailed discussion on the matrix _t. Now, we define weights {_t}_t ≥ 1 as
_t = √(1 + _t^⊤_t-1_t)·_t _t .
Suppose (<ref>) holds, λ_min(_0) ∞ and
λ_min(_n^-1) ∞. Then, the estimate , obtained using the weights {_t}_1≤ t≤ n from (<ref>), satisfies
1/σ(∑_t = 1^n _t _t ) · ( - ^*) d⟶( 0, _d).
See the Appendix for the proof of Theorem <ref>. To better convey the idea of our construction, we provide a lemma here which applies to weights _t generated by (<ref>) and (<ref>) with general _t.
Let _t be as in (<ref>) with the variability matrix _t in (<ref>). Then,
∑_t = 1^n _t _t^⊤
= _d - _n, max_1≤ t≤ n_t_2
=max_1≤ t≤ n_t-1_t_2/(1+_t^⊤_t-1_t)^1/2.
For _t∈ F_t-1, the stability condition
(<ref>) holds
when max_1≤ t≤ n_t^⊤_t_t
+ _n_ = o_p(1).
For any t ≥ 1,
_t = _t-1 - _t-1_t_t^⊤_t-1/(1+_t^⊤_t-1_t). It follows that
_t_t = _t-1_t/(1+_t^⊤_t-1_t) and
∑_t = 1^n _t _t^⊤
= ∑_t=1^n _t-1(_t^-1-^-1_t-1)_t = _d - _n.
Comments on Theorem <ref> conditions: It is instructive to compare the conditions of Theorem <ref> and Theorem <ref>. The condition λ_min(_0) p⟶∞ is an analogue of the condition s_0 p⟶∞. The condition λ_min(_n^-1) p⟶∞ is a bit more subtle. This condition is an analogue of the condition s_n, k / s_0 p⟶∞. Indeed, applying elliptical potential lemma <cit.> yields
log((_0 + _n ))/log((_0))≤trace(_n^-1) - d = ∑_i = 1^n _i^⊤_i - 1^-1_i
≤ 2 ·log((_0 + _n ))/log((_0))
where _n = ∑_t = 1^n _t _t^⊤ is the sample covariance matrix. We see that for λ_min(_n^-1) →∞, it is necessary that the sample covariance matrix _n grows to infinity at a faster rate than _0. Additionally, in dimension d =1, the condition λ_min(_n^-1) p⟶∞ is equivalent to s_n, k / s_0 p⟶∞.
Comments on variance of ALEE: The variance of is determined by the matrix ∑_t = 1^n _t _t. Substituting the value of _t, we have
∑_t = 1^n _t _t = ∑_t = 1^n √(1 + _t^⊤_t-1_t)·_t _t - 1^-1/2_t _t^⊤.
From (<ref>), it is not very restrictive to choose _0 to ascertain max_1≤ t≤ n_t^⊤_t _t = o_p(1).
Besides, utilizing the elliptical potential lemma (<ref>) gives λ_min(_n) ≥ (trace(_n^-1))^-1≥(d + 2log((_0 + _n ))/log((_0)))^-1. Thus far, we have
trace(∑_t = 1^n _t _t )
≥( d + 2log((_0 + _n ))/log((_0)) )^-1·trace(∑_t = 1^n _t - 1^-1/2_t _t^⊤)
+ smaller order terms.
In order to lower bound the right most term above, we first note that _t^-1≽_n^-1 for all t ≤ n. We now have
trace(∑_t = 1^n _t - 1^-1/2_t _t^⊤)
≥∑_t = 1^n _t^⊤_n^-1/2_t
= trace(_n · ( _0 + _n )^-1/2) ≈trace(_n^1/2)
where the last approximation follows from the fact that _n grows at a faster rate compared to _0 (see the discussion in the previous paragraph). Putting together the pieces, we conclude
trace(∑_t = 1^n _t _t ) ≥trace(_n^1/2) ·( d + 2log((_0 + _n ))/log((_0)))^-1_logarithmic in eigenvalues of _n + smaller order terms
Comparing the last term to the lower bound <cit.> we see that this is what one could hope for.
In other words, the factor involving d + log(_n) ≤ d(1 + log(λ_max(_n))) is unavoidable in general.
§ NUMERICAL EXPERIMENTS
In this section, we consider three settings: two-armed bandit setting, first order auto-regressive model setting and contextual bandit setting. In two-armed bandit setting, the rewards are generated with same arm mean (θ_1^*, θ_2^*) = (0.3, 0.3), and noise is generated from a normal distribution with mean 0 and variance 1. To collect two-armed bandit data, we use -Greedy algorithm with decaying exploration rate √(log(t)/t). The rate is designed to make sure the number of times each armed is pulled has order greater than log(n) up to time n. In the second setting, we consider the time series model,
y_t = θ^* y_t-1 + _t,
where θ^* = 1 and noise _t is drawn from normal distribution with mean 0 and variance 1. In the contextual bandit setting, we consider the true parameter ^* to be the same as in the two-armed bandit setting. When t≤ 10, a random context _t is generated from a uniform distribution in 𝒮^1. After time step t = 10, we apply -Greedy algorithm to these 10 contexts with decaying exploration rate log^2(t)/t. For all of the above three settings, we run 1000 independent replications.
To analyze the data we collect for these settings, we apply ALEE approach with weights specified in Corollary <ref> and equation <ref>, respectively. More specifically, in the first two settings, we consider β = 1 in Corollary <ref>. For two-armed bandit example, we set s_0 = e^2log(n), which is known to be a lower bound for s_n,1. In the AR(1) model, we set s_0 = e^2 n. For the contextual bandit example, we consider _0 = log(n) ·_d. In the simulations, we also compare ALEE approach to the normality based confidence interval for OLS estimator <cit.> (which may be incorrect), the concentration bounds for the OLS estimator based on self-normalized Martingale sequence <cit.>, and W-decorrelation <cit.>. Detailed implementations about these methods can be found in the Appendix.
In Figure <ref>, we display results for two-armed bandit example, providing the empirical coverage plots for the first arm mean θ_1^* as well as average width for two-sided CIs. We observe that CIs based on OLS undercover θ_1^* while other methods provide satisfactory coverage. Notably, from the average CI width plot, we can see that W-decorrelation and concentration methods have relatively large CI widths. On the contrary, ALEE-based CIs achieve target coverage while keeping the width of CIs relatively small.
For AR(1) model, we display the setting s_0 = e^3 n / loglog(n) in Figure <ref>, which is based on the fact that s_n grows at the rate at least n for any θ^* ∈ (-1, 1] <cit.>. For the context bandit example, we summarize the empirical coverage probability and the logarithm of the volume of the confidence regions in Table <ref>, along with corresponding standard deviations. Similar conclusions can be drawn.
§ DISCUSSION
In this paper, we study the parameter estimation problem in adaptive linear model. We propose to use ALEE (adaptive linear estimation equations) to obtain point and interval estimates. Without requiring data to present stability on the covariance matrix, our approach considers construction of stable weights so that asymptotic normality is guaranteed based on martingale central limit theorem. We discuss the construction of weights for the bandit problem and also the contextual bandit problem in details. In corresponding sections, we discuss how to construct weights so that ALEE is more efficient.
plain
tocsectionAppendix
PART:
Appendix
§ PROOF
In Theorem <ref>, Corollary <ref> and Remark <ref>, we deal with an arm with index k ∈ [K]. To simply notations, we drop the subscript k in s_t,k, w_t,k, x_t,k, and θ^*_k throughout the proof, and use s_t, w_t, x_t, and θ^*, respectively.
§.§ Proof of Theorem <ref>
Condition (<ref>) serves as an important role in proving (<ref>). Therefore, we start our proof by verifying the condition (<ref>). Since function f is a positive decreasing function and satisfies properties in the integral condition (<ref>), we have
max_1≤ t≤ n f^2(s_t/s_0)x_t^2/s_0≤ f^2(1) 1/s_0 and max_1≤ t≤ n( 1 - f(s_t/s_0)/f(s_t-1/s_0)) ≤ 1 - f(1+ 1/s_0)/f(1).
Thus, by assuming 1/s_0 = o_p(1) and s_0/s_n = o_p(1), condition (<ref>) follows directly from (<ref>).
By the construction of ALEE estimate, we have
{∑_t = 1^n w_t x_t}· ( - θ^*) = ∑_t = 1^n w_t _t.
Note that
∑_t=1^n w_t x_t =√(s_0)∑_t=1^n f(s_t/s_0)x_t^2/s_0 = √(s_0)∫_1^s_n/s_0 f(x)dx ·∑_t≤ n f(s_t/s_0)x_t^2 /s_0/∫_1^s_n/s_0 f(x)dx.
By the mean value theorem, we have that for t∈ [n], ξ_t ∈ [s_t-1, s_t]
∫_s_t-1/s_0^s_t/s_0 f(x) dx = f(ξ_t/s_0) x_t^2/s_0.
Therefore, we have
∑_t≤ n f(s_t/s_0)x_t^2/s_0/∫_1^s_n/s_0 f(x)dx = 1 + ∑_t≤ n(f(s_t/s_0)/f(ξ_t/s_0)-1) f(ξ_t/s_0) x_t^2/s_0 /∑_t≤ n f(ξ_t/s_0) x_t^2/s_0 _Δ= R.
Observe that
|R| ≤∑_t≤ n|f(s_t/s_0)/f(ξ_t/s_0)-1| f(ξ_t/s_0) x_t^2/s_0 /∑_t≤ n f(ξ_t/s_0) x_t^2/s_0
≤∑_t≤ n|f(s_t/s_0)/f(s_t-1/s_0)-1| f(ξ_t/s_0) x_t^2/s_0 /∑_t≤ n f(ξ_t/s_0) x_t^2/s_0
≤max_t≤ n(1 - f(s_t/s_0)/f(s_t-1/s_0)) (i)= o_p(1).
Equality (i) follows from equation (<ref>). Consequently, applying Slutsky's theorem yields
∑_i =1^n w_t x_t/√(s_0)∫_1^s_n/s_0 f(x)dx 1.
Similarly, we can derive
∑_t=1^n w_t^2 = ∑_t=1^n f^2(s_t/s_0)x_t^2/s_0
= (1+o_p(1))∫_1^s_n/s_0 f^2(x)dx = 1+o_p(1).
Knowing max_1≤ t≤ n w_t^2 = max_1≤ t≤ n f^2(s_t/s_0) x_t^2/s_0 = o_p(1), which is a consequence of equation (<ref>), martingale central limit theorem together with an application of Slutsky's theorem yields
( - θ^*) ·∫_1^s_n/s_0√(s_0)/σ f(x)dx (0, 1).
Lastly, we recall that
1/σ (∑_1 ≤ t ≤ n w_t^2)^1/2(∑_ t =1^n w_t x_t) · (-θ^*)= 1/σ (∑_1 ≤ t ≤ n w_t^2)^1/2∑_t = 1^n w_t _t.
Therefore, equation (<ref>) follows from martingale central limit theorem and Slutsky's theorem.
Equation (<ref>) sheds light on the asymptotic variance of the ALEE estimator, thereby aiding in the selection of a suitable function f to improve the efficiency of ALEE estimator. On the other hand, equation (<ref>) offers a practical approach to obtaining an asymptotically precise confidence interval.
Condition (<ref>) is a general requirement that governs equation (<ref>), and is not specific to bandit problems. However, the difficulty in verifying (<ref>) can vary depending on the problem at hand.
§.§ Proof of Corollary <ref> and Remark <ref>
Corollary <ref> is a direct consequence of Theorem 1 in <cit.>. In this section, we provide a proof of Remark <ref>. By considering λ_0 = 1 in Corollary <ref>, we have with probability at least 1 - δ
|∑_t = 1^n w_t x_t|· | - θ^*| ≤√( (1 + ∑_t = 1^n w_t^2) ·log( 1 + ∑_t = 1^n w_t^2 /δ^2 ) ).
By the construction of the weights in Corollary <ref>, we have
∑_t = 1^n w_t^2 = ∑_t = 1^n f^2(s_t/s_0) x_t^2/s_0≤∫_1^∞ f^2(x) dx = 1.
Therefore, to complete the proof, it suffices to characterize a lower bound for ∑_1≤ t≤ n w_tx_t. By definition, we have
∑_t = 1^n w_t x_t = ∑_t = 1^n f(s_t/s_0)x_t^2/√(s_0)
(i)=∑_t=1^n x_t^2/(s_t log(e^2 s_t/s_0))^1/2loglog(e^2s_t/s_0)
≥1/(2 + log(s_n/s_0))^1/2log(2 + log(s_n/s_0))∑_t=1^n x_t^2/√(s_n)
(ii)≥1/(2 + log(s_n/s_0))^1/2log(2 + log(s_n/s_0))· 2(√(s_n) - √(s_0))√(s_0/1 + s_0)
(iii)≥1/(2 + log(s_n/s_0))^1/2log(2 + log(s_n/s_0))·√(2) (√(s_n) - √(s_0)).
In equation (i), we plug in the expression of function f and hence √(s_0) cancels out. Since x_t is either 0 or 1, inequality (ii) follows from the integration of the function h(x) = 1/√(x). Inequality (iii) follows from s_0 > 1.
Putting things together, we have
| - θ^*| ≤√(2 log(2/δ^2))/∑_1≤ t≤ n w_t x_t
≤√(log( 2/δ^2 ) ) √(2 + log(s_n/s_0))log{2 + log(s_n/s_0)}/√(s_n) - √(s_0).
This completes our proof of Remark <ref>.
§.§ Proof of Theorem <ref>
Note that for any t ≥ 1, we have
_t_≤ 1 and _t = _t-1 - _t-1_t_t^⊤_t-1/(1+_t^⊤_t-1_t).
The second part of equation (<ref>) follows from the Sherman–Morrison formula. Let _t = _t _t and we adopt the notation _0 = _d. By multiplying _t on the right hand side of _t, we have
_t _t = _t-1_t - _t-1_t_t^⊤_t-1_t /(1+_t^⊤_t-1_t)
= _t-1_t (1 - _t^⊤_t-1_t/1+_t^⊤_t-1_t)= _t-1_t /1+_t^⊤_t-1_t.
Therefore, following the definition of _t, we have
(1+_t^⊤_t-1_t)_t =_t-1_t. Consequently,
∑_t=1^n(1+_t^⊤_t-1_t) _t _t^⊤
= ∑_t=1^n _t-1(_t^-1-^-1_t-1)_t
= _d - _n.
By recognizing _t = √(1 + _t^⊤_t-1_t)·_t, we come to
∑_t = 1^n _t _t^⊤
= ∑_t=1^n _t-1(_t^-1-^-1_t-1)_t
= _d - _n.
What remains now is to verify conditions in (<ref>). Notably, assumption λ_min(_n^-1) p⟶∞ implies
∑_t = 1^n _t _t^⊤_d.
Since λ_min(_0) p⟶∞, _t_≤ 1 and
_t_2 ≤ 1, we can show
max_1≤ t≤ n_t^⊤_t_t = max_1≤ t≤ n_t^⊤_t - 1^-1/2_t _t - 1^-1/2_t = o_p(1).
Besides, equation (<ref>) together with equation (<ref>) implies
max_1≤ t≤ n_t^⊤_t-1_t = max_1≤ t≤ n_t^⊤_t _t /1 + _t^⊤_t _t = o_p(1).
Thus, it follows that
max_1≤ t≤ n_t _2 = max_1≤ t≤ n√(1 + _t^⊤_t-1_t)·_t _t _2
≤max_1≤ t≤ n√(1 + _t^⊤_t-1_t)·_t_·_t_2 = o_p(1).
Combining equations (<ref>) and (<ref>) yields (<ref>). Hence we complete the proof by applying Proposition <ref>.
The proof of Lemma <ref> can be found in the proof of Theorem <ref>.
§ SIMULATION
In this section, we provide details of our simulation experiments for OLS, W-decorrelation <cit.> and Concentration based methods <cit.>.
§.§ Simulation details
In our experiments, we utilize equation (<ref>) to obtain an estimate σ^2 of σ^2. When we create confidence regions or confidence intervals, we plug in σ instead of using critical values of F-distribution. The reason for this is that when data are collected adaptively, some independence structure are no longer valid.
OLS:
When data are i.i.d, the least squares estimator satisfies the following condition
1/σ^2(_LS - ^*)^⊤_n (_LS - ^*) χ^2_d.
Therefore, we consider 1 - α confidence region to be
_LS = {∈^d : 1/σ^2(_LS - )^⊤_n (_LS - ) ≤χ^2_d, 1 - α}.
W-decorrelation:
The W-decorrelation method follows from Algorithm 1 in <cit.>. Specifically, the estimator takes the form
= + ∑_t =1^n _t( y_t - _t^⊤).
Given a parameter λ, weights {_t}_1 ≤ t≤ n are set as follows
_t = (_d - ∑_i = 1^t - 1_t _t^⊤) _t/(λ + _t ^2_2).
In order to set λ appropriately, we run the bandit algorithm or time series with N replications and record the corresponding minimum eigenvalues {(_n^(1)),…, (_n^(N))}. We choose λ to be the 0.1-quantile of {(_n^(1)),…, (_n^(N))}. This follows from the procedure in simulations of <cit.>. Therefore, to obtain a 1 - α confidence region for ^*, we have
_W = {∈^d : 1/σ^2(_W - )^⊤^⊤ (_W - ) ≤χ^2_d, 1 - α},
where = (_1, …, _n)^⊤.
Concentration based on self-normalized martingales:
We consider <cit.> for two-armed bandit problem and AR(1) model. For contextual bandits, we apply <cit.>. Applying concentration bounds requires a sub-Gaussian parameter, which we use σ as an estimate.
* For one dimensional examples, we have with probability at least 1 - δ,
^⊤ (_LS - ^*) ≤√(^⊤_n^-1· f_n, δ),
where f_n, δ = 2(1 + 1/log(n))log(1/δ) + c· d log(dlog(n)) and we set c = 1. Here d = 2 in the two-armed bandit example and d =1 in the AR(1) example.
* In Theorem 2 from <cit.>, we set λ = 1 and S = 1. Therefore, we have
_con = {∈^d: (_r - )^⊤ (_d + _n)(_r - ) ≤( σ√((_d + _n)/α^2)+ 1 )^2 },
where _r = (_n^⊤_n + _d )^-1_n^⊤_n and _n = (y_1, …, y_n)^⊤.
|
http://arxiv.org/abs/2307.03950v1 | 20230708105034 | Mod 2 instanton homology and 4-manifolds with boundary | [
"Kim A. Frøyshov"
] | math.GT | [
"math.GT",
"math.DG"
] |
Mod 2 instanton homology and 4-manifolds
with boundary
Kim A. Frøyshov
======================================================
Using instanton homology with coefficients in /2 we construct a
homomorphism from the homology cobordism group to the integers
which is not a rational linear combination of
the instanton h–invariant and the Heegaard Floer
correction term d. If an oriented
homology 3–sphere Y bounds a smooth, compact,
negative definite 4–manifold without 2–torsion in its homology
then (Y)≥0, with strict inequality if the intersection form
is non-standard.
empty
plain
§ INTRODUCTION
This paper will introduce an integer invariant (Y) of oriented
integral homology 3–spheres Y. This invariant is defined in terms of
instanton cohomology with coefficients in /2 and may be regarded as a
mod 2 analogue of the h–invariant <cit.>, which was defined with
rational coefficients. Both invariants grew out of efforts to extend
Donaldson's diagonalization theorem <cit.> to 4–manifolds with
boundary.
We will use the instanton (co)homology originally introduced by Floer
<cit.>, an exposition of which can be found in <cit.>. With
coefficients in /2, instanton cohomology I(Y;/2) comes equipped with
some extra structure, namely two “cup products” u_2 and u_3 of degrees
2 and 3, respectively, and homomorphisms
I^4(Y;/2)_0⟶/2_0'⟶ I^1(Y;/2)
counting index 1 trajectories running into and from the trivial flat
2 connection, respectively.
This extra structure enters in the definition of the
invariant q_2, which is given in Section <ref>.
Reversing the rôles of the cup products u_2,u_3 in the definition
yields another invariant q_3. However, the present paper will focus on
.
It would be interesting to try to express the invariants h,q_2,q_3 in terms of
the equivariant instanton homology groups recently introduced by Miller Eismeier
<cit.>.
We now describe some properties and applications of .
For any oriented homology 3–spheres Y_0 and Y_1 one has
(Y_0#Y_1)=(Y_0)+(Y_1).
The proof of additivity is not quite straightforward and occupies more
than half the paper.
thm[Monotonicity]
Let W be a smooth compact oriented 4-manifold with boundary
W=(-Y_0)∪ Y_1, where Y_0 and Y_1 are oriented homology
3–spheres. Suppose the intersection form of W is negative
definite and H^2(W;) contains no element of order 4. Then
(Y_0)≤(Y_1).
If the manifold W in the theorem actually satisfies b_2(W)=0 then one can
apply the theorem to -W as well so as to obtain (Y_0)=(Y_1).
This shows that descends to a group homomorphism
→, where is the integral homology cobordism group.
We observe that the properties of described so
far also hold for the instanton
h–invariant, the negative of its monopole analogue <cit.>, and
the Heegaard Floer correction term d. Note that the latter three
invariants are monotone
with respect to any negative definite cobordism, without any assumption on the
torsion in the cohomology.
thm[Lower bounds]
Let X be a smooth compact oriented 4-manifold whose boundary
is a homology sphere Y. Suppose the intersection form of X is negative
definite and H^2(X;) contains no 2-torsion. Let
J_X:=H^2(X;)/torsion,
and let w be an element of J_X which is not divisible by 2.
Let k be the minimal square norm (with
respect to the intersection form) of any element
of w+2J_X. Let n be the number of elements of w+2J_X of square norm k.
If k≥2 and n/2 is odd then
equation*
(Y)≥k-1.
By an integral lattice we mean a free abelian group of finite rank
equipped with
a symmetric bilinear integer-valued form. Such a lattice is called
odd if it contains an element of odd
square; otherwise it is called even.
cor
Let X be as in Theorem <ref>.
Let J_X⊂ J_X be the orthogonal complement of the sublattice of J_X
spanned by all vectors of square -1, so that J_X is an orthogonal sum
J_X=m-1⊕ J_X
for some non-negative integer m.
description
(i)If J_X≠0, i.e. if J_X is not diagonal, then (Y)≥1.
(ii)If J_X is odd then (Y)≥2.
To deduce (i) from the theorem, take C:=v+2J_X where v is any non-trivial
element of
J_X of minimal square norm. To prove (ii), choose a v with minimal odd
square norm.
thm
Let Y be the result of (-1) surgery on a knot K in S^3. If changing
n^- negative crossings in a diagram for K produces a positive knot then
0≤(Y)≤ n^-.
For k≥2 the Brieskorn sphere (2,2k-1,4k-3) is the boundary of a
plumbing manifold with intersection form -_4k (see
Section <ref>), and it is also
the result of (-1)
surgery on the (2,2k-1) torus knot. In these examples
the upper bound on given by
Theorem <ref> turns out to coincide with the lower bound
provided by Theorem <ref>, and one obtains the following.
For k≥2 one has
((2,2k-1,4k-3))=k-1.
On the other hand, by <cit.> one has
h((2,2k-1,4k-3))=⌊ k/2⌋,
and in these examples the correction term d satisfies d=h/2, as follows
from <cit.>. This shows:
The invariant is not a rational linear combination of the
h–invariant and the correction term d.□
In particular,
h,:→
are linearly independent homomorphisms, and the same is true for d,.
It follows from this that has a ^2 summand. However, much more is
true: Dai, Hom, Stoffregen, and Truong <cit.> proved that
has a ^∞ summand. Their proof uses involutive Heegaard Floer homology.
The monotonicity of the invariants h,d, leads to the following result.
Let Y by an oriented homology 3-sphere. If
min(h(Y),d(Y))<0<(Y)
then Y does not bound any definite 4-manifold without elements of order 4
in its second cohomology.
An explicit example to which the theorem applies is 2(2,5,9)#-3(2,3,5).
A related result was obtained by Nozaki, Sato, and Taniguchi <cit.>.
Using a filtered version of instanton homology they proved that certain linear
combinations of Brieskorn homology 3–spheres do not bound any definite
4–manifold.
If an oriented homology 3-sphere Y satisfies
h(Y)≤0<(Y)
then I^5(Y;) contains 2–torsion, hence Y is not homology cobordant
to any Brieskorn sphere (p,q,r).
We conclude this introduction with two sample applications of the invariant
.
Let X be a smooth compact oriented connected 4-manifold whose boundary
is the Poincaré sphere (2,3,5).
Suppose the intersection form of X is negative definite.
Let J_X be as in Corollary <ref>.
(i) If J_X is even then J_X=0 or -E_8.
(ii) If J_X is odd then H^2(X;)
contains an element of order 4.
Earlier versions of this result were obtained using instanton homology
in <cit.> (assuming X
is simply-connected) and in <cit.> (assuming X has no 2–torsion
in its homology).
There are up to isomorphism two even, positive
definite, unimodular forms of rank 16, namely 2E_8 and _16.
If Z denotes the negative definite E_8–manifold then the boundary
connected sum Z#_Z has intersection form -2E_8.
It is then natural to ask whether (2,3,5)#(2,3,5) also bounds
-_16.
There appears to be no obstruction to this coming from
the correction term.
Let X be a smooth compact oriented 4-manifold whose boundary
is (2,3,5)#(2,3,5). Suppose the intersection form of X is negative
definite and H^2(X;) contains no 2–torsion. If J_X is even then
J_X=0, -E_8, or -2E_8.
Further results on the definite forms bounded by a given homology 3–sphere
were obtained by Scaduto <cit.>.
Some of the results of this paper were announced in various talks several years
ago. The author apologizes for the long delay in publishing the results.
§ THE BASE-POINT FIBRATION
Let X be a connected smooth n–manifold, possibly with boundary, and
P→ X a principal 3 bundle. Fix p>n and let A be a p1
connection in P. This means that A differs from a smooth connection by a
1–form which lies locally in L^p_1. Let _A be the group of
p2 automorphisms (or gauge transformations) of P that preserve A.
The connection A is called
* irreducible if _A={1}, otherwise reducible;
* Abelian if _A≈1;
* twisted reducible if _A≈/2.
Note that a non-flat reducible connection in P is either
Abelian or twisted reducible.
Recall that automorphisms of P can be regarded as sections of the bundle
P3×3 of Lie groups, where 3 acts on itself by
conjugation. An automorphism is called even if it lifts to a
section of P3×2. A connection A in P is called
even-irreducible if its stabilizer _A contains no non-trivial
even automorpisms, otherwise A is called even-reducible.
A non-flat connection is even-reducible if and only if it is Abelian.
Now suppose X is compact and let be the space of
all L^p_1 connections in P. The affine Banach space is acted upon
by the Banach Lie group consisting of all L^p_2 automorphisms of P.
Let ^*⊂ be subset of irreducible connections and define
=/. The irreducible part ^*⊂ is a Banach manifold,
and it admits smooth partitions of unity provided p>n is an even integer,
which we assume from now on. Instead of ^* we often write ^*(P),
or ^*(X) if the bundle P is trivial. Similarly for , etc.
Let ^* be the space of all even-irreducible
L^p_1 connections in P.
Let be the group of even p2
automorphisms of P. As explained in <cit.>, there is an exact
sequence
1→→→ H^1(X;/2)→0.
The quotient ^*=^*/ is a Banach manifold.
Let X be a topological space.
(i) A class v∈ H^2(X;/2) is called admissible if
v has a non-trivial pairing with a class in H_2(X;), or equivalently,
if there exist a closed oriented 2–manifold and a continuous map
f:→ X such that f^*v≠0. If and f can be chosen such that,
in addition,
f^*a=0 for every a∈ H^1(X;/2),
then v is called
strongly admissible.
(ii) An 3 bundle E→ X is called
(strongly) admissible if
the Stiefel-Whitney class w_2(E) is (strongly) admissible.
For example, a finite sum v=∑_ia_i∪ b_i with
a_i,b_i∈ H^1(X;/2) is never strongly admissible.
Let X be a compact, oriented, connected smooth 4–manifold with base-point
x∈ X. Let P→ X be an 3 bundle.
(i) If P is admissible then the 3 base-point fibration over
^*(P) lifts to a 2 bundle.
(ii) If P is strongly admissible then the 3 base-point
fibration over ^*(P) lifts to a 2 bundle.
We spell out the proof of (ii), the proof of (i) being similar
(or easier). Let be a closed oriented surface and f:→ X
a continuous map such that f^*P is non-trivial and eqn:fa0 holds.
We can clearly arrange that is
connected. Because X≥2 it follows
from <cit.> that f can be uniformly approximated
by (smooth) immersions f_0. Moreover, if the approximation is sufficiently
good then f_0 will be homotopic to f. Therefore, we may assume f is an
immersion.
Since base-point fibrations associated to different base-points in
X are isomorphic we may also assume that x lies in the image of f,
say x=f(z).
We adapt the proof of <cit.>, see also
<cit.>. Let →^*:=^*(P) be the
oriented Euclidean 3–plane bundle associated to the
base-point fibration. We must find an Hermitian 2-plane bundle
such that
is isomorphic to the bundle ^0_
of trace-free
skew-Hermitian endomorphisms of .
Let E→ X be the standard 3–plane bundle associated to P.
Choose an Hermitian 2–plane bundle W→ together with an isomorphism
ϕ:^0_W≈→ f^*E, and fix a connection A_,det
in (W). Any (orthogonal)
connection A in E induces a connection in f^*E
which in turn induces a connection A_ in W with central part
A_,det. Choose a spin structure on and let S^*± be
the corresponding spin bundles over . For any
connection A in E let
_,A:S^+⊗ W→ S^-⊗ W
be the Dirac operator
coupled to A_. If A is an L^p_1 connection, p>4, and A_0 is a
smooth connection in E then A-A_0 is continuous, hence
_,A-_,A_0 defines a bounded operator L^2→ L^2 and
therefore a compact operator L^2_1→ L^2. Let
:=(_,W)
be the determinant line bundle over (E)
associated to the family of Fredholm operators
_,A:L^2_1→ L^2.
Then automorphism (-1) of W acts on with weight equal to the
numerical index of _,A. According to Atiyah-Singer's theorem
<cit.> this index is
(_,A)={ch(W)Â()}·[]=c_1(W)·[].
But the mod 2 reduction of c_1(W) equals f^*(w_2(E)),
which is non-zero by assumption, so the index is odd.
The assumption eqn:fa0 means that every automorphism of E
pulls back to an
even automorphism of f^*E. Moreover, every even automorphism of
f^*E≈^0_W
lifts to an automorphism of W of determinant 1, the lift being well-defined
up to an overall sign since is connected. Because the automorphism
(-1) of W acts trivially on ⊗ W_z this yields an action of
(E) on ⊗ W_z. The quotient
:=(⊗ W_z)/(E)
is a complex 2-plane bundle over ^*(E).
We claim that there is an Hermitian
metric on such that on every fibre _A there is an Hermitian
metric for which the projection _A⊗ W_z→_[A] is an
isometry. To see this, let S⊂(E) be any local slice for the action
of (E), so that S projects diffeomorphically onto an open subset
U⊂^*(E). Choose any Hermitian metric on |_S and let
g_U be the induced Hermitian metric on _U≈(⊗ W_z)|_S.
Now cover ^*(E) by such open sets U and patch together the corresponding
metrics g_U to obtain the desired metric on .
Given any Hermitian metric on a fibre _A there are linear isometries
^0__A⊗ W_z≈→^0_W_z≈→ E_x,
where the first isometry is canonical and independent of the chosen metric
on _A and the second one is given by ϕ. This yields an isomorphism
^0_≈→.□
§ MODULI SPACES
Let P→ Y be a principal 3 bundle, where Y is a closed oriented
3–manifold. The Chern-Simons functional
:(P)→/
is determined up to an additive constant by the property that if A is any
connection in the pull-back of P to the band [0,1]× Y then
(A_1)-(A_0)=∫_[t_0,t_1]× Y F_A∧ F_A,
where A_t denotes the restriction of A to the slice
{t}× Y, and ·∧· is formed by combining the wedge
product on forms with minus the Killing form on the Lie algebra of 3.
If P=Y×3 then we normalize so that its value on
the product connection θ is zero. If v is any automorphism of P
then for any connection B in P one has
(v(B))-(B)=-1/2(v),
where the degree (v) is defined to be the intersection number of v
with the image of the constant section 1.
Equation eqn:csdeg,
up to an overall sign, was stated without proof in <cit.>.
A proof of eqn:csdeg
can be obtained by first observing that the left-hand side of the
equation is independent of B, and both sides define homomorphisms from the
automorphism group of P into . Replacing v by v^2 it then
only remains to verify the equation
for even gauge transformations, which is easy.
If v lifts to a section v of P3×2 then
(v)=2( v),
where ( v) is the intersection number of v with the image
of the constant section 1. In particular, every even automorphism of
P has even degree.
The critical points of the Chern-Simons functional
are the flat connections in P. In practice, we will add a small holonomy
perturbation to as in <cit.>, but this will usually not be
reflected in our notation.
Let (P) denote the space of all critical points of modulo
even automorphisms of P. The even-reducible part of (P) is denoted
by ^*(P). If Y is an (integral) homology sphere then P is
necessarily trivial and we write (Y)=(P).
Now let X be an oriented Riemannian 4–manifold with tubular ends
[0,∞)× Y_i, i=0,…,r, such that the complement of
:=⋃_i [0,∞)× Y_i
is precompact. We review the standard set-up of moduli spaces of anti-self-dual
connections in a principal 3 bundle Q→ X, see <cit.>. Given a
flat connection ρ in Q|_, we define the moduli space
M(X,Q;ρ) as follows. Choose a smooth connection A_0 in Q which agrees
with ρ outside a compact subset of X.
We use the connection A_0 to define Sobolev norms on forms with values
in the adoint bundle _Q of Lie algebras associated to Q.
Fix an even integer p>4.
Let =(Q) be the space of connections in Q of the form A_0+a with
a∈ pw1, where w is a small, positive exponential weight as in
<cit.>. There is a smooth action on by the Banach
Lie group consisting of all p2 gauge transformation u
of Q such that ∇_A_0u· u∈ pw1.
Let :=/ and let M(X,Q;ρ) be the subset of consisting
of gauge equivalence classes of connections A satisfying F^+_A=0.
In practice, we will often add a small holonomy perturbation to the
ASD equation, but this will usually be suppressed from notation.
We observe that the value of the Chern-Simons integral
(Q,ρ):=-1/8π^2∫_X F_A∧ F_A
is the same for all A∈. (If X is closed then the right hand side
of Equation eqn:ka-int equals the value of -p_1(Q) on the
fundamental class of X. This normalization will be convenient in
Section <ref>.)
If u is an automorphism of Q|_ then
from Equations eqn:cs-int-band and eqn:csdeg we deduce that
(Q,u(ρ))-(Q,ρ)=2∑_i(u_i),
where u_i is the restriction of uto the slice {0}× Y_i.
Similarly, for the expected dimensions we have
M(X,Q;u(ρ))-M(X,Q;ρ)=4∑_i(u_i).
On the other hand, if u extends to a smooth automorphism of all of Q
then ∑(u_i)=0,
and the converse holds at least if u is even.
Given the reference connection A_0, we can identify the restriction of the
bundle Q to an end [0,∞)× Y_i with the pull-back of a bundle
P_i→ Y_i.
Let _i∈(P_i) be the element obtained by restricting
ρ to any slice {t}× Y_i where t>0. We will usually
assume that each _i is non-degenerate.
The above remarks show that
the moduli space M(X,Q;ρ) can be specified by the
r–tuple =(_1,…,_r) together with one extra piece of data:
Either the Chern-Simons value =(Q,ρ) or the expected dimension d
of M(X,Q;ρ). We denote such a moduli space by
M_(X,Q;) or M_(d)(X,Q;).
Note that for given there is exactly one moduli space M_(d)(X,Q;)
with 0≤ d≤7; this moduli space will just be denoted by M(X,Q;).
For any anti-self-dual connection A over X, the energy _A(Z)
of A over a measurable subset Z⊂ X is defined by
_A(Z):=-∫_Z F_A∧ F_A
=∫_Z|F_A|^2.
If X= and Z=I× Y for some interval I then we write
_A(I) instead of _A(I× Y).
§ SPACES OF LINEARLY DEPENDENT VECTORS
This section provides background for the definition of the cup product u_2
as well as results which will be used in the proof of
Proposition <ref>.
For any finite-dimensional real vector space V set
L(V):={(v,w)∈ V⊕ Vv,w are linearly dependent in V}.
Then L(V) is closed in V⊕ V and
L^*(V):=L(V)∖{(0,0)}
is a smooth submanifold of V⊕ V of codimension n-1, where n is the
dimension of V.
As a short-hand notation we will often write v∧ w=0 to express that
v,w are linearly dependent.
If B is any smooth Banach manifold and π:E→ B a smooth real vector
bundle of finite rank let L^*(E)→ B be the associated smooth fibre bundle
whose fibre over a point x∈ B is L^*(E_x), where E_x=π(x).
Similarly, let L(E)→ B be the topological fibre bundle with fibre
L(E_x) over x.
Let ℓ→ S^1 be the non-trivial real line bundle such that for
z∈ S^1 the fibre of ℓ over z^2 is the line z in . Let
E:=E× S^1 and ℓ:=B×ℓ be the pull-backs of
the bundles E and ℓ, respectively, to B× S^1. We identify
R^2=, so that (a,b)=a+bi for real numbers a,b.
Let s=(s_1,s_2) be a nowhere vanishing smooth section of E⊕ E.
Let be the section of E⊗ℓ such that for any
p∈ B and z=(x_1,x_2)∈ S^1 one has
(p,z^2)=(x_1s_1(p)+x_2s_2(p))⊗ z.
(i) The projection B× S^1→ B maps the zero-set of
bijectively onto the locus in B where s_1,s_2 are linearly dependent.
(ii) A zero (p,w) of is regular if and
only if s is transverse to L^*(E) at p.
The proof of (i) is left as an exercise. To prove (ii) we may assume
E is trivial, so that s_j is represented by a smooth map f_j:B→ V
for some finite-dimensional real vector space V. We observe that
for any u_1,u_2∈ V and z=(x_1,x_2)∈ S^1 one has
(u_1,u_2)=(x_1u_1+x_2u_2)⊗ z+(x_1u_2-x_2u_1)⊗ iz
as elements of V⊕ V=V⊗_.
It follows that the tangent space of L^*(V) at a point (v_1,v_2)
which satisfies x_1v_1+x_2v_2=0 is given by
T_(v_1,v_2)L^*(V)=V⊗ iz+(x_1v_2-x_2v_1)⊗ z.
Now suppose (p,w) is a zero of and s(p)=(v_1,v_2), z^2=w.
Then eqn:tlv holds. Let L_j:T_pB→ V be the derivative of f_j at p.
Then (p,w) is
a regular zero of precisely when V is spanned by the vector
x_1v_2-x_2v_1 together with the image of the map x_1L_2+x_2L_2.
From eqn:u1u2 we see that
the latter condition is also equivalent to s being transverse to
L^*(V) at p.□
We record here a description of the sections of
E⊗ℓ which
will be used in the proof of Proposition <ref> below.
Let _a( E) denote
the space of all sections s∈( E) such that
s(p,-z)=-s(p,z)
for all (p,z)∈ B× S^1.
Then there is a canonical real linear isomorphism
( E⊗ℓ)→_a( E), ↦
characterized by the fact that
(p,z^2)=(p,z)⊗ z
for all (p,z)∈ B× S^1.□
If B is finite-dimensional, the bundle E has rank 3,
and s is a generic smooth
section of E⊕ E then s(L(E))
represents the Poincaré dual of the second Stiefel-Whitney
class w_2(E) in the following sense. Given any class a∈ H_2(B;/2),
represented by a generic smooth map f:→ B
where is a closed surface, then
a,w_2(E)≡#(s∘ f)(L(E))2.
§ “GENERIC” SECTIONS
Let B be a smooth Banach manifold and π:E→ B a smooth
real vector bundle of finite rank. If B is infinite-dimensional then we
do not define a topology on the space (E) of (smooth) sections of
E, so it makes no sense to speak about residual subsets of (E).
Instead, we will say
a subset Z⊂(E) is “residual” (in quotation marks) if
there is a finite-dimensional subspace ⊂(E) such that
for every finite-dimensional subspace '⊂(E) containing
and every section s of E there is a residual subset
⊂' such that s+⊂ Z. Note that “residual” subsets
are non-empty, and any finite intersection of “residual” subsets is again
“residual”. We will say a given property
holds for a “generic” section of E if it holds for every section belonging
to a “residual” subset of (E).
We indicate one way of constructing such subspaces .
Suppose B supports smooth bump functions, i.e. for any point
x∈ B and any neighbourhood U of x there exists a smooth function
c:B→ such that c(x)≠0 and c=0 outside U. Given a compact subset
K of B, one can easily construct a finite-dimensional subspace
⊂(E) such that, for every x∈ K, the evaluation map
→ E_x, s↦ s(x)
is surjective. Therefore, if we are given a collection of smoooth maps
f_k:M_k→ B, k=1,2,…, where each M_k is a
finite-dimensional manifold and the image of each f_k is
contained in K then, for a “generic” section s of E, the map
s∘ f_k:M_k→ E
is transverse to the zero-section in E for each k.
§ INSTANTON COHOMOLOGY AND CUP PRODUCTS
In this section we will work with 3 connections modulo
even gauge transformation (see Section <ref>),
although this will not be
reflected in our notation.
In particular, we write ^* instead of ^*. This notational
convention applies only to this section.
(In Subsection <ref>, which only deals with
homology spheres, the convention is irrelevant.)
§.§ Instanton cohomology
Let Y be a closed oriented connected 3-manifold and P→ Y an
3 bundle. If Y is not an homology sphere then we assume P is
admissible. For any ,β∈(P) let M(,β) denote the
moduli space of instantons in the bundle × P→ with flat
limits at -∞ and β at ∞ and with expected dimension
in the interval [0,7]. Let
(,β)=M(,β)/,
where acts by translation. If ,β are irreducible then
the relative index
(,β)∈/8 is defined by
(,β)= M(,β)8.
For any commutative ring R with unit we denote by I(P;R) the relatively
/8 graded
instanton cohomology with coefficients in R as defined in
<cit.>. Recall that this is the cohomology of a cochain complex
(C(P;R),d) where C(P;R) is the free R–module generated by ^*(P)
and the differential d is defined by
d=∑_β#(,β)·β.
Here, # means the number of points counted with sign,
and the sum is taken over all β∈^*(P) satisfying
(,β)=1.
If P is admissible then ^*(P)=(P). If instead Y is an homology
sphere then (P)=(Y) contains exactly one reducible point θ,
represented by the trivial connection.
The presence of the trivial connection provides C(P;R)=C(Y;R) with an absolute
/8 grading defined by
()= M(θ,)8.
The trivial connection also gives rise to homomorphisms
C^4(Y;R)→ R'→ C^1(Y;R)
defined on generators by
=#(,θ), 1=∑_β#(θ,β)·β,
where we sum over all β∈^*(Y) of index 1.
These homomorphisms satisfy d=0 and d'=0 and therefore define
I^4(Y;R)_0→ R_0'→ I^1(Y;R).
We conclude this subsection with some notation for energy. If A is any
ASD connection in the bundle Q:=× P and I is any interval then
we write _A(I) instead of _A(I× Y). Moreover, if
,β∈(Y) and the moduli space M(,β) is expressed as
M(,Q;ρ) in the notation of Section <ref> then
we define
(,β):=1/4(Q,ρ),
which equals the total energy of any element of M(,β). (Note,
however, that M(,β) may be empty.)
§.§ Cup products
We continue the discussion of the previous subsection, assuming P is
admissible unless Y is an homology sphere.
In most of this paper
the coefficient ring R will be /2, and we write
I(P):=I(P;/2).
For j=2,3 we will define a degree j endomorphism
u_j:I^*(P)→ I^*+j(P).
Insofar as the Floer cohomology is some kind of Morse
cohomology of ^*(P), one may think of u_j as cup product with
the jth Stiefel-Whitney class of the base-point fibration over ^*(P).
The map u_j will be
induced by an endomorphism
v_j:C^*(P)→ C^*+j(P)
which we now define. For any t∈ set
t:=[t-1,t+1]× Y.
Let P_0=[-1,1]× P denote the pull-back of the bundle P to 0.
For any ,β∈(P) and any irreducible point ∈ M(,β)
let
[t]:=|_Y[t]∈^*(P_0)
denote the restriction of to the band Y[t]. (The fact that [t]
is irreducible follows from
Proposition prop:unique-continuation-cylinder.)
Choose a base-point y_0∈ Y,
and let
→^*(P_0)
be the natural
real vector bundle of rank 3 associated to the base-point (0,y_0)∈0.
To define v_3, choose a “generic” smooth section s_1 of .
For any ,β∈^*(P)
with (β)-()≡38 the matrix coefficient
v_3,β is defined to be
equation
v_3,β:=#{∈M(,β)s_1([0])=0},
where # means the number of points counted modulo 2.
To define v_2, let s_2,s_3 be a pair of smooth sections of which
define a “generic” section of ⊕.
For any ,β∈^*(P)
with (β)-()≡28 the matrix coefficient
v_2,β is defined to be
equation
v_2,β:=
#{∈M(,β)s_2,s_3 are linearly dependent at [0]}.
Note that, for dimensional reasons, s_2 and s_3 cannot simultaneously
vanish at [0] for any ∈ M(,β).
prop
For j=2,3 one has
dv_j=v_jd
as homomorphisms C^*(P)→ C^*+j+1(P).
To prove this for j=2, let ,β∈^*(P) with
(β)-()≡38.
The number of ends of the 1-manifold
{∈ M(,β)s_2,s_3 are linearly dependent at [0]},
counted modulo 2, is (dv_2+v_2d),β. Since the number of ends
must be even, this proves the assertion for j=2. The case j=3 is similar.
□
The homomorphism u_j:I^*(P)→ I^*+j(P) induced by v_j is independent of
the sections s_i. For u_3 this will follow from
Lemma <ref> below,
and a similar argument works for u_2.
We consider again the bundle P_0=[-1,1]× P over Y[0]=[-1,1]× Y.
Let U be an open subset of ^*(P_0) such that for all
,β∈^*(P) with (,β)≤3 and every
∈ M(,β) one has that [0]∈ U. A section s of
|_U is said to satisfy Property 3 if for all ,β
as above the map
M(,β)→, ↦ s([0])
is transverse to the zero-section in .
Let U⊂^*(P_0) be as in Definition <ref>
and suppose s,s' are sections of |_U satisfying Property 3.
Let v_3,v'_3 be the corresponding cup products defined as in
eqn:v3def. Then there is an endomorphism
H:C(P)→ C(P)
such that
v_3+v'_3=dH+Hd.
For a “generic” section of the map
f_β:M(,β)×[0,1]→,
↦(1-t)s([0])+ts'([0])+t(1-t)([0])
is transverse to the zero-section whenever (,β)≤3.
Fix such a and let Z_β denote the zero-set of f_β.
If (,β)=2 then Z_β is a finite set. Let H be the
homomorphism with matrix coefficients
H,β=#Z_β.
If (,β)=3 then Z_β is a compact
1–manifold-with-boundary. Counted modulo 2, the number of boundary points
of Z_β is (v_3+v'_3),β, whereas the number of
ends is (dH+Hd),β. These two numbers must agree, proving
the lemma.□
Let W be a smooth, compact, oriented, connected 4–manifold with two
boundary components, say W=-Y_0∪ Y_1. Let Q→ W be an
3 bundle, and let P_i be the restriction of Q to Y_i. Suppose
one of the following two conditions holds.
(i) At least one of the bundles P_0,P_1 is admissible.
(ii) Both Y_0 and Y_1 are homology spheres, the bundle Q is
trivial, and H_1(W;)=0 and b_+^2(W)=0.
Then the homomorphism T:I(P_0)→ I(P_1) induced by (W,Q) satisfies
Tu_j=u_jT for j=2,3.
Moreover, if (ii) holds then
T=:I^4(Y_0)→/2.□
If P→ Y is an admissible 3 bundle then u_3=0 on I(P).
By Proposition <ref> there is an Hermitian
2–plane bundle →^* such that ≈^0_.
For a “generic” section s of , we have
s([0])≠0 whenever lies in a moduli space M(,β)
of dimension at most 3. Given such a section s, let U be the
open subset of ^* where s≠0. Then |_U splits
as an orthogonal sum
|_U=⊕ L
of two complex line bundles. Hence |_U has a nowhere vanishing
trace-free skew-Hermitian endomorphism
(
[ i 0; 0 -i ]). This yields a non-vanishing section s' of |_U.
Let s be the restricion to U of a “generic” section of ,
and let v_3,v'_3 be the cup products defined by s,s', respectively.
Then v'_3=0, so by Lemma <ref> we have
v_3=dH+Hd.
By definition, v_3 induces the cup product u_3 in cohomology,
so u_3=0.□
Let Y be an oriented homology 3–sphere and Y' the result of (±1)
surgery on a knot in Y. Let n be a non-negative integer.
(i) If (u_3)^n=0 on I(Y) then (u_3)^n+1=0 on I(Y').
(ii) If (u_2)^n=0 on I(Y) and has genus 1
then (u_2)^n+1=0 on I(Y').
If R is a commutative ring and
A⟶ B⟶ C
an exact sequence of modules over the polynomial ring R[u] such
that u^m=0 on A and u^n=0 on C for non-negative integers m,n then
u^m+n=0 on B. (Here, u^0 acts as the identity map.)
Now suppose Y' is (-1) surgery on . (If instead Y' is (+1)
surgery on then the proof is similar with the roles of Y,Y' reversed.)
Let Y” be 0 surgery on and I(Y”) the instanton cohomology of
the non-trivial 3 bundle over Y”.
We apply the above observation to the long exact surgery sequence
(see <cit.>)
⋯→ I(Y”)→ I(Y)→ I(Y')→ I(Y”)→⋯
Statement (i) now follows from Proposition <ref>. To prove
(ii), recall that if P_T^3 is a non-trivial 3 bundle over the
3–torus then I(P_T^3) is non-zero in two degrees differing by
4 modulo 8 and zero in all other degrees. Therefore, u_2=0 on
I(P_T^3). If has genus 1 then by arguing as in the proof of
<cit.> we find that u_2=0 on I(Y”), from which
(ii) follows.□
As a special case of Proposition <ref> we have the following
corollary.
If Y is (±1) surgery on a knot in S^3 then u_3=0 on I(Y).
Let P→ Y be an 3 bundle. We assume P is admissible
if Y is not a homology sphere. Then the endomorphisms u_2 and u_3
on I(P)
are nilpotent. In other words, there is a positive integer n such that
u_2^n=0, u_3^n=0 on I(P).
We use the same link reduction schemes as
in the proofs of <cit.>.
In the present case there is no need to consider any reduced groups, as
the cup products u_j are defined on all of I(Y).□
We include here a result for oriented homology 3–spheres Y obtained by
adapting the proof of Proposition <ref> for j=2 to 2–dimensional
moduli spaces M(,θ). This result will be used in
Proposition <ref> below.
For any ∈^*(Y) we introduce the
temporary notation
M_:={∈ M(,θ)s_2∧ s_3=0 at [0], and _([0,∞))≥},
where is a small positive constant.
If M(,θ)<6 then M_ is a manifold-with-boundary, and
M_ has a description analogous to that of M_, just replacing
the inequality _([0,∞))≥
by an equality. We define homomorphisms
:C^2(Y)→/2, ^-:C^3(Y)→/2
on generators by
:=#( M_), ^-β:=# M_β.
v_2+^-d=.
Let ∈^*(Y), ()=2. Then M_ is a
1–manifold-with-boundary. The number of boundary points,
counted modulo 2, is
by definition, and this must agree with the number of ends of M_, which
is ( v_2+^-d).□
§.§ Commutators of cup products
Let Y be an oriented homology 3–sphere.
We introduce a degree 4 endomorphism
ϕ:C^*(Y)→ C^*+4(Y)
which will be used to describe the commutator of v_2 og v_3.
defn For any ,β∈^*(Y)
let 23(,β) be the subspace of × consisting of those
points (,t) satisfying the following conditions:
itemize
* s_1([-t])=0,
* s_2([t]) and s_3([t]) are linearly dependent.
If (β)-()≡48
then 23(,β) consists of a finite number of points (see part (I)
of the proof of Proposition <ref> below), and we set
ϕ,β:=#23(,β).
prop
If Y is an oriented integral homology 3-sphere then for “generic”
sections s_1,s_2,s_3 one has
equation
v_2v_3+v_3v_2+'=dϕ+ϕd.
Hence, on I(Y) one has
equation
u_2u_3+u_3u_2=_0'_0.
The proof will be given in Subsection <ref>.
Let v_3,v_3':C^*(Y)→ C^*+3(Y) be the cup products defined by “generic”
sections s,s' of . At least in degrees different from 4, the
commutator of v_3 and v_3' is given by a formula analogous to
eqn:v2v3chhom. This formula involves the homomorphism
ψ:C^p(Y)→ C^p+5(Y), p≠4
with matrix coefficients
ψ,β=#{(,t)∈× s([-t])=0=s'([t])}.
The condition p≠4 is imposed to make sure that factorizations through
the trivial connection do not occur in the moduli spaces M(,β).
For q≢3,48 one has
dψ+ψ d=v_3v'_3+v'_3v_3
as maps C^q(Y)→ C^q+6(Y).
If the sections s,s' are sufficiently close (in a certain
sense) then v_3=v_3' (see Lemma <ref> below)
and the following hold.
If the sections s,s' are sufficiently close then there exist
* an extension of ψ to a cochain map C^*(Y)→ C^*+5(Y)
defined in all degrees, and
* a homomorphism Ξ:C^*(Y)→ C^*+4(Y) such that
ψ=v_2v_3+dΞ+Ξ d.
The proof will be given in Subsection <ref>.
§ DEFINITION OF THE INVARIANT
Let Y be any oriented homology 3-sphere.
defn
We define a non-negative integer ζ_2(Y) as follows. If _0=0
on (u_3)⊂ I(Y) set ζ_2(Y):=0. Otherwise, let ζ_2(Y)
be the largest positive integer n for which there exists an
x∈(u_3) such that
_0u_2^kx=
0 for 0≤ k<n-1,
1 for k=n-1.
Here, u_2^k denotes the k'th power of the endomorphism u_2. Note that
if x is as in Definition <ref> then using
the relation eqn:u2u3 one finds that u_3u_2^kx=0 for
0≤ k≤ n-1.
defnSet (Y):=ζ_2(Y)-ζ_2(-Y).
An alternative description of will be given in
Proposition <ref> below.
If ('_0)⊂(u_3) in I^1(Y) then ζ_2(-Y)=0. Otherwise,
ζ_2(-Y) is the largest positive integer n for which the inclusion
(u_2^k'_0)⊂(u_3)+∑_j=0^k-1(u_2^j'_0)
in I(-Y)
holds for 0≤ k<n-1 but not for k=n-1.
Of course, in eqn:imu2delincl it suffices to sum over those j that are
congruent to k mod 4, since I(-Y) is mod 8 periodic.
Recall that I^q(Y) and I^5-q(-Y) are dual vector spaces for any
q∈/8. Furthermore, the maps
_0:I^4(Y)→/2, u_3:I^q(Y)→ I^q+j(Y)
are dual to
'_0:/2→ I^1(-Y), u_3:I^5-q-j(-Y)→ I^5-q(-Y),
respectively. In general, the kernel of a linear map between finite-dimensional
vector spaces is equal to the annihilator of the image of the dual map.
Applying this to _0u_2^j:I^4-2j(Y)→/2 we see that the inclusion
eqn:imu2delincl holds if and only if
(_0u_2^k)⊃(u_3)∩⋂_j=0^k-1(_0u_2^j)
in I(Y).
This proves the lemma.□
prop
Either ζ_2(Y)=0 or ζ_2(-Y)=0.
Suppose ζ_2(Y)>0, so there is an x∈ I^4(Y) such that
u_3x=0 and _0x=1. Then Proposition <ref> yields
'_0(1)=u_3u_2x, hence ζ(-Y)=0 by Lemma <ref>.□
We now reformulate the definition of ζ_2 in terms of the mapping cone of
v_3. This alternative definition will display a clear analogy with the
instanton h-invariant and will be essential for handling the algebra involved
in the proof of additivity of .
For q∈/8 set
MC^q(Y):=C^q-2(Y)⊕ C^q(Y),
and define
D:MC^q(Y)→ MC^q+1(Y), (x,y)↦(dx,v_3x+dy).
Then D∘ D=0, and we define MI(Y) to be the cohomology of the
cochain complex (MC(Y),D). The short exact sequence of cochain
complexes
0→ C^*(Y)→ MC^*(Y)τ→ C^*-2(Y)→0,
where (y)=(0,y) and τ(x,y)=x,
gives rise to a long exact sequence
equation
⋯→I^q-3(Y)u_3→I^q(Y)_*
→MI^q(Y)τ_*→I^q-2(Y)→⋯.
We introduce some extra structure on *j(Y). Firstly,
the homomorphisms
gather*
:=∘τ:MC^6(Y)→/2,
':=∘':/2→MC^1(Y)
induce homomorphisms
MI^6(Y)_0⟶/2'_0⟶
MI^1(Y).
We extend trivially to all of MC(Y), and similarly for _0.
Furthermore, we define a homomorphism
V:MC^*(Y)→ MC^*+2(Y), (x,y)↦(v_2x,ϕ x+v_2y).
A simple calculation yields
equation
DV+VD=',
which is analogous to the relation <cit.> in rational
instanton homology. It follows that V induces homomorphisms
gather*
MI^q(Y)→MI^q+2(Y), q≢6,78,
MI^6(Y)∩(_0)→MI^0(Y),
each of which will be denoted by U.
If _0=0 on MI^6(Y) then ζ_2(Y)=0. Otherwise,
ζ_2(Y) is the
largest positive integer n for which there exists a z∈ MI(Y)
such that
_0 U^kz=cases
0 for 0≤ k<n-1,
1 for k=n-1.
This follows immediately from the definitions.□
§ DEFINITE 4-MANIFOLDS
The goal of this section is to prove Theorem <ref>.
Let X be an oriented, connected
Riemannian 4–manifold with a cylindrical end [0,∞)× Y,
where Y is an integral homology sphere.
Suppose
b_1(X)=0=b^+(X).
Let E→ X be an oriented Euclidean 3–plane bundle and w_2(E)
its second Stiefel-Whitney class. We will count reducibles in
ASD moduli spaces for E
with trivial asymptotic limit.
Let w∈ H^2(X,;/2) be the unique
lift of w_2(E). Abusing notation, we denote by w_2(E)^2∈/4
the value of the Pontryagin square
w^2∈ H^4(X,;/4)
on the fundamental class in
H_4(X;;/4). Then for ∈^*(Y) the expected dimension of
a moduli space for E with asymptotic limit satisfies
M_(X,E;)≡()-2w_2(E)^28.
If ρ is a trivial connection in E|_ then (E,ρ) is an
integer reducing to -w_2(E)^2 modulo 4. Hence,
M_k:=M_k(X,E;θ)
is defined for integers k satisfying k≡-w_2(E)^24. Moreover,
M_k is empty for k<0, and M_0 (when defined) consists of flat connections.
The expected dimension is
M_k=2k-3.
§.§ Reducibles
In this subsection we restrict to k>0.
After perturbing the Riemannian metric on X in a small ball we can arrange
that M_k contains no twisted reducibles (see <cit.>).
The set M_k of reducible (i.e. Abelian)
points in M_k has a well known description
in terms of the cohomology of X, which we now recall. Let
P:={c∈ H^2(X;) [c]_2=w_2(E), c^2=-k},
where [c]_2 denotes the image of c in H^2(X;/2).
Let P:= P/±1 be the quotient of P by the involution
c↦-c.
There is a canonical bijection M_k→ P.
If [A]∈ M_k then A respects a unique splitting
E=⊕ L,
where is a trivial rank 1 subbundle of E. A choice of orientation
of defines a complex structure on L. Mapping [A] to the point in
P represented by c_1(L) yields the desired bijection. For further
details see <cit.> and <cit.>.□
Assuming P is non-empty we now express the number |P| of elements of P
in terms of the intersection form of X and the torsion subgroup
of H^2(X;). For any v∈ H^2(X;) let v̅ denote
the image of v in H^2(X;)/. Choose a∈ P and let
Q_a:={r∈ H^2(X;)/ r≡a̅ mod 2, r^2=-k}.
Define Q_a:= Q_a/±1.
|P|=|2|·|Q_a|.
Note that 2 has even order precisely when H^2(X;) contains an element
of order 4.
Because k>0 we have that (-1) acts without fixed-points on both
P and Q_a. Therefore,
| P|=2|P|, | Q_a|=2|Q_a|.
The short exact sequence 0→2→→/2→0 gives rise to a long
exact sequence
⋯→ H^2(X;)2→ H^2(X;)→ H^2(X;/2)→ H^3(X;)→⋯.
From this sequence we see that there is a well defined map
P→ Q_a, c↦c̅
which descends to an injective map
f: P/2→ Q_a.
In fact, f is bijective. To see that f is surjective, let r∈ Q_a.
Then
r=a̅+2x̅=a+2x
for some x∈ H^2(X;), and a+2x∈ P. This shows that
| P|=|2|·| Q_a|.
Combining this with eqn:2PQ we obtain the proposition.□
§.§ 2–torsion invariants of 4–manifolds
The proof of Theorem <ref> will involve certain 2–torsion
Donaldson invariants which we now define. Let d_0 be the smallest expected
dimension of any moduli space M_k=M_k(X,E;θ) that contains a reducible,
where k is a non-negative integer.
For any pair
(r,s) of non-negative integers satisfying
2r+3s≤ d_0+2
we will define an element
rs= rs(X,E)∈ I(Y)
which will be independent of the Riemannian
metric on X and also independent of the choice of small holonomy
perturbations.
To define rs, choose disjoint compact codimension 0 submanifolds
Z_1,…,Z_r+s of X and base-points z_j∈ Z_j.
It is convenient to assume that each of these submanifolds contains a band
[t_j,t_j+1]× Y for some t_j≥1. (We assume that the perturbed
ASD equation is of gradient flow type in the region [1,∞)× Y.)
Then
Proposition <ref> guarantees that
every perturbed ASD connection in E with irreducible limit will
restrict to an irreducible connection over each Z_j.
Choose “generic”
sections {_ij}_i=1,2,3 of the canonical 3–plane bundle
_j→^*(Z_j,E_j), where E_j:=E|_Z_j. For any ∈^*(Y)
let d=d() be the integer such that
0≤ d-2r-3s≤7,
d≡()-2w_2(E)^28.
Let M_r,s(X,E;) be the set of all ∈ M_(d)(X,E;) such that
* _2,j,_3,j are linearly dependent at |_Z_j for
j=1,…,r, and
* _1,j(|_Z_j)=0 for j=r+1,…,r+s.
Let
q_r,s:=∑_#M_r,s(X,E;)·∈ C(Y),
where the sum is taken over all generators in C(Y) of index
2w_2(E)^2+2r+3s. Then q_r,s is a cocycle, and we define
rs(X,E):=[q_r,s]∈ I(Y).
Standard arguments show that rs is
independent of the choice of submanifolds
Z_j and sections _ij.
Let k be an integer greater than one.
If M_ℓ is empty for ℓ<k then
k-20=#M_k.
Deleting from M_k a small neighbourhood of each reducible point
we obtain a manifold-with-boundary W with one boundary component P_η
for each reducible η, each such component being diffeomorphic to
k-2. Let
Ŵ:=W∩ M_k-2,0(X,E;θ)
be the set of all ∈ W such that _2,j and _3,j are linearly
dependent at |_Z_j for j=1,…,k-2.
Then Ŵ is a 1–manifold-with-boundary. For dimensional reasons and
because of the condition that M_ℓ be empty for ℓ<k, bubbling
cannot occur in sequences in Ŵ. Therefore, the only source of
non-compactness in Ŵ is factorization over the end of X, so
the number of ends of Ŵ equals k-20 modulo 2.
As for the boundary points of Ŵ,
observe that for every x∈ X the restriction of the 3–plane bundle
_θ,x→ M^*_k to P_η is isomorphic to the direct sum
⊕ L of a trivial real line bundle and the tautological
complex line bundle. It follows easily from this that P_η∩Ŵ
has an odd number of points for every reducible η, hence
|Ŵ|≡|M_k|2.
Since the number of boundary points of Ŵ must agree with the number of
ends when counted modulo 2, this proves the proposition.□
In the proof of the following proposition and at many places later we will
make use of a certain kind of cut-off function. This should be a smooth
function b:→ such that
b(t)=
0 for t≤-1,
1 for t≥1.
Suppose 2r+3s≤ d_0+2, so that rs is defined.
(i) rs=u_2r-1s if r≥1.
(ii) rs=u_3 rs-1 if s≥1.
We only spell out the proof of (ii), the proof of (i) being similar.
Let M_{r,s-1}(X,E;α) be defined as above, but using only the submanifolds
Z_1,…,Z_{r+s-1} and the corresponding sections s_{ij}.
Choose a path γ:[-1,∞)→ X such that γ(-1)=z_{r+s} and
γ(t)=(t,y_0) for t≥0, where y_0∈ Y is a base-point.
For any α∈^*(Y) and x∈ X let
𝔼_{α,x}→ M_{r,s-1}(X,E;α)
be the canonical 3–plane bundle associated to the base-point x.
For any ω=[A]∈ M_{r,s-1}(X,E;α) and t≥-1 let
h_{ω,t}:(𝔼_{α,γ(t)})_ω→(𝔼_{α,γ(-1)})_ω
be the isomorphism defined by the holonomy of A along γ.
Here, (𝔼_{α,x})_ω denotes the fibre of the bundle 𝔼_{α,x} at
the point ω.
Given a “generic” section s of 𝔼→^*(Y[0]) we define
a section s_α of the bundle
𝔼_{α,γ(-1)}×[-1,∞)→ M_{r,s-1}(X,E;α)×[-1,∞)
by
s_α(ω,t):=(1-b(t-2))· s_{1,r+s}(ω|_{Z_{r+s}})
+b(t-2)· h_{ω,t}(s(ω[t])),
where b is as in eqn:b-prop1.
Let j:=2w_2(E)^2+2r+3s∈ℤ/8. If ind(α)=j-1 then the zero set
s_α^{-1}(0) is a finite set. Summing over such α we define
h_{r,s}:=∑_α(#s_α^{-1}(0))·α ∈ C^{j-1}(Y).
Counting ends and boundary points of the 1–manifolds s_β^{-1}(0)
for ind(β)=j we see that
dh_{r,s}+v_3q_{r,s-1}=q_{r,s}.
Passing to cohomology, we obtain (ii).□
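Iterating the two parts of the proposition (an immediate consequence which we record for later reference) gives
\[
D_{r,s}=u_2^{\,r}\,u_3^{\,s}\,D_{0,0}
\]
whenever 2r+3s≤ d_0+2; this is the form in which the proposition is used below, e.g. with (r,s)=(k-2,0) in the proof of Theorem <ref>.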
If E is strongly admissible then D_{r,s}(X,E)=0 for s>0.
Let f:→ X be as in Definition <ref>
with v=w_2(E). For t≥0 let X t be the
result of deleting from X the open subset (t,∞)× Y.
Choose t>0 so large that X t contains f(). Then
E|_X t is strongly admissible.
Choose the submanifolds Z_1,…,Z_r+s such that Z_r+s=X t.
By Proposition <ref>
the (frame bundle of) 𝔼_{r+s}→^*(E_{r+s})
lifts to an SU(2) bundle.
For j=1,…,r+s-1 choose “generic” sections {_ij}_i=1,2,3
of _j. Arguing as in the proof of Proposition <ref>
we see that there is an open subset U⊂^*(Z_{r+s},E_{r+s})
and a section σ of 𝔼_{r+s} such that if ω is any element of
a 3–dimensional moduli space M_{r,s-1}(X,E;α) then ω|_{Z_{r+s}}∈ U
and σ(ω|_{Z_{r+s}})≠0. Taking s_{1,r+s}:=σ we have that all
0–dimensional moduli spaces M_{r,s}(X,E;α) are empty.
Reasoning as in the proof of Lemma <ref> we conclude
that D_{r,s}=0.□
§.§ Lower bound on
Recall Definition <ref> above.
Given a space X, a non-zero class w∈ H^2(X;ℤ)/torsion
is called strongly admissible
if some (hence every) lift of w to H^2(X;) maps to a strongly
admissible class in H^2(X;/2).
Let V be a smooth compact oriented connected 4-manifold whose boundary
is a homology sphere Y. Suppose the intersection form of V is negative
definite and at least one of the following two conditions holds:
(i) H^2(V;ℤ) contains no 2–torsion.
(ii) H^2(V;ℤ) contains no element of order 4, and
w^2≢0 mod 4. Furthermore, either w is strongly admissible or
u_3=0 on I(Y) (or both).
Let
J:=H^2(V;ℤ)/torsion,
and let w be an element of J which is not divisible by 2.
Let k be the minimal square norm (with
respect to the intersection form) of any element
of w+2J. Let n be the number of elements of w+2J of square norm k.
If k≥2 and n/2 is odd then
(Y)≥ k-1.
Note that if we leave out case (ii) then the theorem says the same as
Theorem <ref>.
After performing surgery on a collection of loops in V representing
a basis for H_1(V;ℤ)/torsion we may assume that b_1(V)=0.
From the exact sequence eqn:2long-exact-seq we see that the
2–torsion subgroup of H^2(V;ℤ) is isomorphic to H^1(V;ℤ/2).
Let
X:=V∪(0,∞)× Y
be the result of adding a half-infinite cylinder to V, and choose a
Riemannian metric on X which is of cylindrical form over the end.
We identify the (co)homology of X with that of V. Choose a
complex line bundle L→ X whose Chern class represents w. Choose a
Euclidean metric on the 3–plane bundle
E:=⊕ L.
Since we assume that H^2(X;ℤ)
contains no element of order 4, it follows from Proposition <ref>
that M_ℓ contains an odd number of reducibles for ℓ=k
but no reducibles for 0<ℓ<k.
We now show that if w^2≡0 mod 4, so that M_0 is defined, then M_0
is free of reducibles. Suppose A is a connection in E
representing a reducible point in M_0. Then A preserves some orthogonal
splitting E=λ⊕ L', where λ→ X is a real line bundle.
Because Condition (i) of the proposition must hold, the bundle λ is
trivial. Choose a complex structure on L'. Since L' admits a flat
connection, its Chern class c_1(L') is a torsion class in H^2(X;).
But c_1(L) and c_1(L') map to the same element of H^2(X;ℤ/2), namely
w_2(E), hence
c_1(L)=c_1(L')+2a
for some a∈ H^2(X;ℤ). This contradicts our assumption that w∈ J
is not divisible by 2. Thus, M_0 is free of reducibles as claimed.
By Proposition <ref> we have
D_{k-2,0}≠0,
and Proposition <ref> says that
D_{k-2,0}=u_2^{k-2}D_{0,0}.
Now suppose w is strongly admissible (which is trivially the case
if Condition (i) holds). Then
the bundle E is strongly admissible, so by
Propositions <ref> and <ref> we have
u_3D_{0,0}=D_{0,1}=0.
This proves eqn:q2ineq.□
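To illustrate how the hypotheses of the theorem are verified in practice (this sketch anticipates the argument given in the proof of Theorem <ref> below), suppose V contains the negative definite E_8–manifold as a boundary-connected summand and w is Poincaré dual to an embedded sphere of self-intersection -2 which meets another such sphere transversely in one point. Then w is strongly admissible and
\[
k=2,\qquad n=\#\{c\in w+2J : c\cdot c=-2\}=2,
\]
since ±w are the only classes of square norm 2 in the coset w+2J, so n/2=1 is odd and the theorem yields the lower bound k-1=1.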
§ OPERATIONS DEFINED BY COBORDISMS
§.§ Cutting down moduli spaces
Let Y_0,Y_1,Y_2 be oriented (integral) homology 3–spheres and W a
smooth compact connected oriented 4–manifold such that
H_i(W;ℤ)=0 for i=1,2 and ∂W=(-Y_0)∪(-Y_1)∪ Y_2. Then we call
W a (4–dimensional) pair-of-pants cobordism from Y_0∪ Y_1 to
Y_2, or a pair-of-pants cobordism from Y_1 to (-Y_0)∪ Y_2.
We will consider various operations on Floer cochain complexes induced by
pair-of-pants cobordism. To define these we first introduce some notation.
Let X be an oriented connected Riemannian 4–manifold with incoming
tubular ends (-∞,0]× Y_j, j=0,…,r and outgoing tubular ends
[0,∞)× Y_j, j=r+1,…,r', where each Y_j is an
homology sphere. For t≥0 let X t be the result of deleting
from X the open pieces (-∞,-t)× Y_j, j=0,…,r and
(t,∞)× Y_j, j=r+1,…,r'. We assume
X0 is compact. For i=0,…,r' let y_i∈ Y_i be a base-point
and set
e_i:=
-1, i=0,…,r,
1, i=r+1,…,r'.
For any integers j,k in the interval [0,r'] such that j<k
let _jk:→ X be a smooth path satisfying
_jk(t)∈ X1 for |t|≤1 and
_jk(t)=
(-e_jt,y_j), t≤-1,
(e_kt,y_k), t≥1.
Loosely speaking, the path _jk enters along the jth end
and leaves along the kth end of X.
Let =(_1,…,_r'), where _j∈(Y_j) and at least one
_j is irreducible. For the remainder of this subsection we write
M:=M(X,E;),
where E→ X is the product 3 bundle.
The unique continuation result of
Proposition prop:unique-continuation-cylinder ensures that if
_j is irreducible then the restriction of any
element of M to a band on the jth end of X will be irreducible.
Let → M× X be the universal (real) 3–plane bundle (see
<cit.>).
For any t≥0 let t denote the
restriction of to M× X t. Given a base-point x_0∈ X let
_X,x_0;→ M be the canonical 3–plane bundle,
which can be identified
with the restriction of to M×{x_0}.
If :J→ X is a smooth path in X defined on some interval J then
a section of the pull-back bundle (𝕀×)^* over
M× J is called holonomy invariant if
for all =[A]∈ M and real numbers s<t one has that (,s)
is mapped to (,t) by the isomorphism
_(,(s))→_(,(t))
defined by holonomy of A along the path |_[s,t].
Suppose Z⊂ X is a compact codimension 0 submanifold-with-boundary
such that A|_Z is irreducible for every [A]∈ M. Given a base-point
z_0∈ Z, let _Z,z_0→^*(E|_Z)
be the base-point fibration, and let
R_Z:M→^*(E|_Z), ↦|_Z.
Then the pull-back bundle R_Z^*_Z,z_0 is canonically isomorphic to
_X,z_0;, and we will usually identify the two bundles without further
comment.
Choose (smooth) sections z_1,z_2,z_3 of 2 and
for any x∈ X2 let
M∩ w_3(x):=
{∈ M z_1(,x)=0},
M∩ w_2(x):=
{∈ M
z_2,z_3 are linearly
dependent at (,x)}.
For j=0,…,r' let 𝔼_j→^*(Y_j[0]) be the canonical
3–plane bundle associated to a base-point (0,y_j).
For j<k, any j', and i=1,2,3 choose
* a section s^-_{ijk} of 𝔼_j
and a section s^+_{ijk} of 𝔼_k,
* a section s^0_{ijk} of the canonical bundle over X2,
* a section s_{ij'} of 𝔼_{j'}.
Let b_{-1},b_0,b_1 be a partition
of unity on ℝ subordinate to the open cover
{(-∞,-1),(-2,2),(1,∞)}.
If j<k and both α_j,α_k are
irreducible we introduce, for i=1,2,3, a section
of the bundle (𝕀×γ_{jk})^* associated, loosely speaking,
to a base-point moving along the path γ_{jk}. Precisely, we define
s_{ijk}(ω,t):=b_{-1}(t) s^-_{ijk}(ω|_{Y_j[-e_jt]})
+b_0(t) s^0_{ijk}(ω|_{X2},γ_{jk}(t))
+b_1(t) s^+_{ijk}(ω|_{Y_k[e_kt]}).
Using these sections, we define cut-down moduli spaces
M∩ w_3(_jk):=
{(,t)∈ M× s_1jk(,t)=0},
M∩ w_2(_jk):=
{(,t)∈ M×
s_2jk, s_3jk are linearly
dependent at (,t)}.
We now consider the case of a base-point moving along the jth end.
For t≥0 let _j(t):=(e_jt,y_j). If _j is irreducible let
M∩ w_2(_j):={(,t)∈ M×[0,∞)
s_2j,s_3j are linearly dependent at |_Y_j[e_jt]}.
We omit the definition of M∩ w_3(_j) since it will not be
needed in the remainder of this paper
(although something close to it was used in the proof of
Proposition <ref>).
We can also combine the ways moduli spaces are cut down in
the above definitions. Namely, for ℓ,ℓ'∈{2,3} let
M∩ w_ℓ(x)∩ w_ℓ'(_jk):=
{(,t)∈ M∩ w_ℓ'(_jk)
∈ M∩ w_ℓ(x)},
M∩ w_ℓ(_jk)∩ w_ℓ'(_j'k'):=
{(,t,t')∈ M××
(,t)∈ M∩ w_ℓ(_jk),
(,t')∈ M∩ w_ℓ'(_j'k')},
M∩ w_ℓ(_jk)∩ w_2(_j'):=
{(,t,t')∈ M××[0,∞)
(,t)∈ M∩ w_ℓ(_jk),
(,t')∈ M∩ w_2(_j')}.
If one of the limits α_j is trivial, say α_h=θ, and dim M<8
(to prevent bubbling) then one can also cut
down M by, loosely speaking, evaluating w_2 or w_3 over the
“link of θ at infinity” over the hth end of X. We now make this
precise in the case of w_2 and an outgoing end [0,∞)× Y_h. The
definitions for w_3 or incoming ends are similar. To simplify notation
write Y:=Y_h.
We introduce a function τ^+=τ^+_h on M related to the energy
distribution of elements over the hth end.
Choose ε>0 so small that for any β∈(Y) the Chern–Simons
value cs(β)∈ℝ/ℤ has no real lift in the interval (0,ε].
(Recall that we assume cs(θ)=0.)
Given ω∈ M, if there exists a t>0 such that
E_ω([t-2,∞)× Y)=ε then t is unique, and we write
t^+(ω):=t. This defines t^+ implicitly as a smooth function on an open
subset of M. We modify t^+ to get a smooth function
τ^+:M→[1,∞) by
τ^+(ω):=
1+b(t^+(ω)-2)·(t^+(ω)-1) if t^+(ω) is defined,
1 else,
where the cut-off function b is as in eqn:b-prop1.
Note that τ^+(ω)<3 if t^+(ω)<3 and
τ^+(ω)=t^+(ω) if t^+(ω)≥3.
The restriction of ω to the band Y[τ^+(ω)] will be denoted by
R^+(ω)∈(Y[0]).
In the above situation there is a real number T_0
such that if ω is any element of M satisfying
τ^+(ω)>T_0-1 then R^+(ω) is irreducible.
Suppose the lemma is false. Then we can find a sequence ω_n
in M such that τ^+(ω_n)→∞ and
R^+(ω_n) is reducible for every n. Let A_n be a smooth connection
representing ω_n, and let t_n=τ^+(ω_n).
By assumption, there is no bubbling in M, so
we can find gauge transformations u_n defined over [0,∞)× Y
and a smooth connection A' over ℝ× Y such that, for every constant
c>0, the sequence u_n(A_n)|_{[t_n-c,t_n+c]} converges in C^∞
to A'|_{[-c,c]}. The assumption on ε means that no energy can be
lost over the end [0,∞)× Y in the limit, hence
E_{A'}([-2,∞)× Y)=ε.
In particular, A' is not trivial. But there are no non-trivial reducible
finite-energy instantons over ℝ× Y (as long as the perturbation of the
Chern-Simons functional is so small that there are no
Therefore, A' must be irreducible. From the unique continuation result of
Proposition <ref> it follows that
A'|_{0}× Y is
also irreducible, so A_n is irreducible for large n.
This contradiction proves the lemma.
□Let T_0 be as in the lemma. For any element of M for which
R^+() is irreducible, let
s'_ih() denote the holonomy invariant section of
(𝕀×_h)^* such that s'_ih(,τ^+())=s_ih(R^+()).
Let x_h:=(0,y_h) and define a section of _X,x_h; by
s_ih():=(1-b(τ^+()-T_0))· z_i(|_X2,x_h)
+b(τ^+()-T_0)· s'_ih(R^+()),
where again b is as in eqn:b-prop1.
Let
M∩ w_2(τ^+):={∈ Ms_2h,s_3h linearly dependent
at }.
If j<k and both _j,_k are irreducible let
M∩ w_ℓ(_jk)∩ w_2(τ^+):=
{(,t)∈ M∩ w_ℓ(_jk)∈ M∩ w_2(τ^+)}.
If M is regular, then the various cut down moduli
spaces defined above will be transversely cut out when the sections involved
are “generic”.
§.§ Operations, I
We now specialize to the case when X has two incoming ends
(-∞,0]× Y_j, j=0,1 and one outgoing end [0,∞)× Y_2,
and
H_i(X;)=0, i=1,2.
Such a cobordism gives rise to a homomorphism
A:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q(Y_2)
for any p,q∈ℤ/8, with matrix coefficients
⟨A(α_0⊗α_1),α_2⟩:=#M(X;α⃗)
for generators α_0∈ C^p(Y_0), α_1∈ C^q(Y_1), and
α_2∈ C^{p+q}(Y_2), where α⃗=(α_0,α_1,α_2). We can construct
more homomorphisms using the sections s_ijk chosen above.
For any path γ_{jk} as above and i=2,3 let
T_{i,j,k}:C^p(Y_0)⊗ C^q(Y_1)→ C^{p+q+i-1}(Y_2)
be defined on generators by
⟨T_{i,j,k}(α_0⊗α_1),α_2⟩:=
#[M(X;α⃗)∩ w_i(γ_{jk})].
For the cases used in this paper we introduce the simpler notation
B:=T_{3,0,1}, E:=T_{3,0,2}, A':=T_{2,1,2}.
We will also consider homomorphisms defined using two base-points, each
moving along a path in X. At this point we only define
B':C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+3(Y_2)
by
⟨B'(α_0⊗α_1),α_2⟩:=
#[M(X;α⃗)∩ w_3(γ_{01})∩ w_2(γ_{12})].
In the next proposition, the differential in the cochain complex
C(Y_i) will be denoted by d (for i=0,1,2), and
d=d⊗1+1⊗ d
will denote the differential in C(Y_0)⊗ C(Y_1).
Let
v_3:=v_3⊗1+1⊗ v_3,
regarded as a degree 3 cochain map from C(Y_0)⊗ C(Y_1) to itself.
(i) dA+A d=0.
(ii) dB+B d=A v_3.
(iii) dE+E d=A(v_3⊗1)+v_3A.
(iv) dA'+A' d=A(1⊗ v_2)+v_2A.
(v) dB'+B' d=B(1⊗ v_2)+v_2B
+A' v_3+A(1⊗ϕ)+A_θ(1⊗).
The only non-trivial part here is (v), where one encounters
factorization through the trivial connection over the end
(-∞,0]× Y_1. This can be handled as in the proof of
Proposition <ref> given in Subsection <ref>,
to which we refer for details.□
The homomorphism
:MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2),
(x_0,y_0)⊗(x_1,y_1) ↦ B(x_0,x_1)+A(x_0⊗ y_1+y_0⊗ x_1)
is a cochain map of degree -2.
Let D=D⊗1+1⊗ D be the differential in the complex
MC(Y_1)⊗ MC(Y_2). Then
D[(x_0,y_0)⊗(x_1,y_1)] =
[(dx_0,v_3x_0+dy_0)⊗(x_1,y_1)+(x_0,y_0)⊗(dx_1,v_3x_1+dy_1)]
=B(dx_0⊗ x_1+x_0⊗ dx_1)
+A[dx_0⊗ y_1+(v_3x_0+dy_0)⊗ x_1+
x_0⊗(v_3x_1+dy_1)+y_0⊗ dx_1]
=B d(x_0⊗ x_1)
+A[ v_3(x_0⊗ x_1)+ d(x_0⊗ y_1+y_0⊗ x_1)]
=d[(x_0,y_0)⊗(x_1,y_1)],
where the last equality follows from Proposition <ref>.□
The homomorphism
MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2)
obtained from Proposition <ref> will also be denoted by .
In order to simplify notation we will often write ,
instead of _0,_0 if no confusion can arise.
For all a∈ MI(Y_0), b∈ MI(Y_1), the following hold.
(i) If a=0 then (Ua,b)=u_2(a,b).
(ii) If b=0 then (a,Ub)=u_2(a,b).
We spell out the proof of (ii). Reversing the roles of Y_0,Y_1
yields a proof of (i). Let
',:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2)
be given by
'[(x_0,y_0)⊗(x_1,y_1)]
:= B'(x_0,x_1)+A'(x_0⊗ y_1+y_0⊗ x_1),
[(x_0,y_0)⊗(x_1,y_1)]
:=( x_1)A_θ(x_0).
Let D be as in the proof of Proposition <ref>. We show that
d'+' D=v_2+(1× V)+,
from which (ii) follows. Observe that the first four lines
in the calculation of D in
Proposition <ref> carry over to ' D.
That proposition then gives
' D [(x_0,y_0)⊗(x_1,y_1)]
=(B' d+A' v_3)(x_0⊗ x_1)
+A' d(x_0⊗ y_1+y_0⊗ x_1)
=dB'(x_0⊗ x_1)+B(x_0⊗ v_2x_1)+v_2B(x_0⊗ x_1)
+A(x_0⊗ϕ x_1)+( x_1)A_θ(x_0)
+[dA'+A(1⊗ v_2)+v_2A](x_0⊗ y_1+y_0⊗ x_1)
=[d'+v_2+(1× V)+][(x_0,y_0)⊗(x_1,y_1)].□
Our next goal is to compute u_2. To this end we introduce some
variants Ȧ,Ḃ,A^+,B^+ of the operators A,B. Each of these
variants is a homomorphism
C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+d(Y_2)
for d=2,4,1,3, respectively, defined for all p,q, and the matrix
coefficients are
Ȧ(_0⊗_1),_2 :=
#[M(X;)∩ w_2(x_2)],
Ḃ(_0⊗_1),_2 :=
#[M(X;)∩ w_2(x_2)∩ w_3(_01)],
A^+(_0⊗_1),_2 :=
#[M(X;)∩ w_2(_2)],
B^+(_0⊗_1),_2 :=
#[M(X;)∩ w_3(_01)∩ w_2(_2)],
where =(_0,_1,_2) as before, x_2=_2(0)∈ X, and
_i,_ij are as in Subsection <ref>.
(i) dȦ+Ȧ d=0.
(ii) dḂ+Ḃ d=Ȧ v_3.
(iii) dA^++A^+ d=v_2A+Ȧ.
(iv) dB^++B^+ d=A^+ v_3+v_2B+Ḃ.
Standard.□
The homomorphism
:MC^*(Y_0)⊗ MC^*(Y_1) → C^*(Y_2),
(x_0,y_0)⊗(x_1,y_1) ↦Ḃ(x_0,x_1)
+Ȧ(x_0⊗ y_1+y_0⊗ x_1)
is a (degree preserving) cochain map.
The same as for Proposition <ref>, using
Proposition <ref> (i), (ii).□
The homomorphism
MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2)
obtained from Proposition <ref> will also be denoted by .
As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has
=u_2.
This is analogous to the proof of Proposition <ref>. Let
^+:MC^*(Y_0)⊗ MC^*(Y_1)→ C^*(Y_2)
be given by
^+[(x_0,y_0)⊗(x_1,y_1)]
:= B^+(x_0,x_1)+A^+(x_0⊗ y_1+y_0⊗ x_1).
We show that
d^++^+ d=v_2+.
From Proposition <ref> we get
^+ D(x_0,y_0)⊗(x_1,y_1)
=(B^+ d+A^+ v_3)(x_0⊗ x_1)+A^+ d(x_0⊗ y_1+
y_0⊗ x_1)
=(dB^++v_2B+Ḃ)(x_0⊗ x_1)
+(dA^++v_2A+Ȧ)(x_0⊗ x_1)
=(d^++v_2+)(x_0,y_0)⊗(x_1,y_1).□
We also need to bring in moduli spaces over X with trivial limit over the
end _+× Y_2. These give rise to homomorphisms
A^θ,B^θ,Ȧ^θ,Ḃ^θ:C^p(Y_0)⊗ C^d-p(Y_1)→/2
where d=5,3,3,1, respectively. They are defined on generators by
A^θ(_0⊗_1) :=#M(_0,_1,θ),
B^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_3(_01)],
Ȧ^θ(_0⊗_1) :=#[M(_0,_1,θ)∩ w_2(x_0),
Ḃ^θ(_0⊗_1)
:=#[M(_0,_1,θ)∩ w_2(x_0)∩ w_3(_01).
(i) A+ A^θ d=0.
(ii) B+B^θ d=A^θ v_3.
(iii) Ȧ+Ȧ^θ d=0.
(iv) Ḃ+Ḃ^θ d=
Ȧ^θ v_3+⊗.
Here, (⊗)(x_0⊗ x_1)=( x_0)( x_1).
The proof is standard.
(i) =0.
(ii) u_2=⊗.
Statement (i) is proved just as Proposition <ref>, replacing
Proposition <ref> by Proposition <ref>.
We now prove (ii). For g_i=(x_i,y_i)∈ MC(C_i), i=0,1 let
^θ(g_0⊗ g_1):=Ḃ^θ(x_0⊗ x_1)
+Ȧ^θ(x_0⊗ y_1+y_0⊗ x_1).
Arguing as in the proof of Proposition <ref> and using
Proposition <ref> we obtain
^θ D(g_0⊗ g_1)
=(Ḃ^θ d+Ȧ v_3)(x_0⊗ x_1)
+Ȧ^θ d(x_0⊗ y_1+y_0⊗ x_1)
=Ḃ(x_0⊗ x_1)+ x_0· x_1
+Ȧ(x_0⊗ y_1+y_0⊗ x_1)
=(+⊗)(g_0⊗ g_1).
If g_0,g_1 are cocycles then by Proposition <ref> we have
v_2(g_0⊗ g_1)=(g_0⊗ g_1)
= g_0· g_1.□
For p≠4 let
F:C^p(Y_0)⊗ C^q(Y_1)→ C^p+q+4(Y_2)
be defined by
F(_0⊗_1),_2:=
#[M(X;)∩ w_3(_01)∩ w_3(_02)].
For p=4 the map F may not be well-defined due to possible factorizations
through the trivial connection over the end _-× Y_0.
The definition of F involves two different sections of the bundle
_0→^*(Y_0[0]), namely
s_k:= 10k, k=1,2.
From now on we assume s_1,s_2 are so close that
they define the same cup product v_3:C^*(Y_0)→ C^*+3(Y_0).
If the sections s_1,s_2 are sufficiently close then the map
F in eqn:Fdef can be extended to all bidegrees (p,q) such that
dF+F d=B(v_3⊗1)+v_3B+E v_3+A(ψ⊗1),
where ψ is as in Proposition <ref>.
The main difficulty in extending the map F to degree p=4,
related to factorization through the trivial connection over the end
(-∞,0]× Y_0, is the same as in
extending the map ψ to degree 4, and the main difficulty in
proving eqn:Fthm is the same as in proving that ψ is a cochain map
(Proposition <ref>). As we prefer to explain the ideas involved
in the simplest possible setting, we will not spell out the proof
of Proposition <ref> but instead refer to
Subsection <ref> for details.
Sometimes we will fix the variable _1 in the expressions defining
A,B,E,F. Thus, for any y∈ C^r(Y) we define a homomorphism
A_y:C^*(Y_0)→ C^*-r(Y_2), x↦ A(x⊗ y),
and we define B_y,E_y,F_y similarly. Looking at moduli spaces over X
with trivial limit over the end _-× Y_1 we obtain homomorphisms
A_θ :C^*(Y_0)→ C^*(Y_2),
E_θ :C^*(Y_0)→ C^*+2(Y_2).
with matrix coefficients
A_θ(_0),_2 :=#M(X;_0,θ,_2),
E_θ(_0),_2 :=#[M(X;_0,θ,_2)∩ w_3(_02)].
We consider a variant of Floer's complex introduced by Donaldson
<cit.>.
For any oriented homology 3–sphere Y let *(Y) be the complex
with cochain groups
p(Y) =C^p(Y), p≠0,
0(Y) =C^0(Y)⊕/2
and differential d̅=d+'.
Now take Y:=Y_1. For y=(z,t)∈0(Y_1) let
A_y:=A_z+tA_θ, E_y:=E_z+tE_θ.
For any x∈ C(Y_1) and y∈*(Y_1) we have
[d,A_y]+A_d̅y =0,
[d,E_y]+E_d̅y =[A_y,v_3],
[d,B_x]+B_dx =A_xv_3+A_v_3x,
[d,F_x]+F_dx =[B_x,v_3]+E_xv_3+E_v_3x+A_xψ.
Here, [d,A_y]=dA_y+A_yd, and similarly for the other commutators.
For y∈ C(Y_1) this follows from Propositions <ref> and
<ref>, whereas the case y=(0,1)∈0(Y_1) is easy.□
Suppose x∈ C^-2(Y_1) and y=(z,t)∈0(Y_1) satisfy
dx=0, v_3x=d̅y.
Then the homomorphism :MC^*(Y_0)→ MC^*(Y_2) given by the matrix
(
[ A_y+B_x A_x; E_y+F_x+A_xΞ A_y+B_x+E_x+A_xv_2 ])
is a cochain map.
Writing =([ P Q; R S ])
we have
d+ d=(
[ dP+Pd+Qv_3 dQ+Qd; dR+Rd+v_3P+Sv_3 dS+Sd+v_3Q ]).
The fact that this matrix vanishes is easily deduced from
Propositions <ref> and <ref> and Lemma <ref>.
We write out the calculation only for the bottom
left entry.
[d,E_y +F_x+A_xΞ]
=E_v_3x+[v_3,A_y]+[v_3,B_x]+E_v_3x+E_xv_3+A_xψ+A_x[d,Ξ]
=v_3(A_y+B_x)+(A_y+B_x+E_x+A_xv_2)v_3,
hence [d,R]=v_3P+Sv_3 as claimed.□
As maps MI^*(Y_0)⊗ MI^*(Y_1)→ I^*(Y_2) one has
u_3=0.
For j=0,1 let (x_j,y_j) be a cocycle in MC(Y_j), i.e.
dx_j=0, v_3x_j=dy_j.
Let the map of Lemma <ref> be defined with
x=x_1, y=y_1, and let (x_2,y_2):=(x_0,y_0). Then
((x_0,y_0)⊗(x_1,y_1))=B_x_1(x_0)+A_y_1(x_0)+A_x_1(y_0)=x_2.
Since (x_2,y_2) is a cocycle, we have v_3x_2=dy_2, proving the proposition.
□
If (Y_j)≥1 for j=0,1 then
(Y_2)≥(Y_0)+(Y_1).
For j=0,1 let n_j:=(Y_j) and choose z_j∈ MI(Y_j) such that
U^kz_j=
0 for 0≤ k<n_j-1,
1 for k=n_j-1.
Let x:=(z_0⊗ z_1)∈ I(Y_2). Then u_3x=0 by
Proposition <ref>. For 0≤ k_j≤ n_j-1, repeated
application of Proposition <ref> yields
u_2^k_0+k_1x=(U^k_0z_0⊗ U^k_1z_1),
hence u_2^k_0+k_1x=0 by Proposition <ref>. Therefore,
u_2^mx=0, 0≤ m≤ n_0+n_1-2.
On the other hand,
u_2^{n_0+n_1-1}x = u_2u_2^{n_0-1}u_2^{n_1-1}x
= u_2(U^n_0-1z_0⊗ U^n_1-1z_1)
=( U^n_0-1z_0)( U^n_1-1z_1)
=1.
Therefore, (Y_2)≥ n_0+n_1 as claimed.□
We will give a second application of Lemma <ref>, but first we need
some preparation. Let A^θ_θ:C^5(Y_0)→/2
be defined on generators by
A^θ_θ():=#M(,θ,θ).
For y=(z,t)∈ q(Y_1) define
A^θ_y:C^5-q(Y_0)→/2 and B^θ_z:C^3-q(Y_0)→/2
by
A^θ_y(x):=A(x⊗ z)+tA^θ_θ(x),
B^θ_z(x):=B^θ(x⊗ z).
(i) A_θ+A^θ_θ d+A^θ_'(1)=.
(ii) A_y+A^θ_y d+A^θ_d̅y=t.
(iii) B_z+B^θ_z d+B^θ_dz=A^θ_zv_3+A^θ_v_3z.
If (Y_0)≥1 and (Y_1)=0 then (Y_2)≥1.
Since (Y_0)≥1 we can find (x_0,y_0)∈ MC^6(Y_0) such
that
dx_0=0, v_3x_0=dy_0, x_0=1.
Since (Y_1)=0, Lemma <ref> says that there exist
x_1∈ C^-2(Y_1) and y_1=(z_1,1)∈ 0(Y_1) such that
dx_1=0, v_3x_1=d̅y_1.
Let be as in Lemma <ref>. Then (x_0,y_0) is a cocycle
in MC(Y_2), and by Lemma <ref> we have
(x_0,y_0) =(A_y_1+B_x_1)x_0+ A_x_1y_0
=(_d̅y_1++_x_1v_3+_v_3x_1)x_0+_x_1dy_0
=1.
Therefore, (Y_2)≥1.□
§.§ Operations, II
We now consider the case when X has one incoming end (-∞,0]× Y_0
and two outgoing ends [0,∞)× Y_1 and [0,∞)× Y_2,
where Y_2=Σ=Σ(2,3,5) is the Poincaré homology sphere oriented as the
boundary of the negative definite E_8–manifold. We again assume that
H_i(X;)=0, i=1,2.
We will define homomorphisms
P,P',Q:C^*(Y_0)→ C^*+d(Y_1)
where d=2,3,4, respectively, making use of cut-down moduli spaces
introduced at the end of Subsection <ref> with
h=2, so that τ^+=τ^+_2.
We define P,P',Q on generators by
P_0,_1 :=#[M(X;_0,_1,θ)∩ w_2(τ^+)],
P'_0,_1 :=
#[M(X;_0,_1,θ)∩ w_2(_01)∩ w_2(τ^+)],
Q_0,_1 :=
#[M(X;_0,_1,θ)∩ w_3(_01)∩ w_2(τ^+)].
As maps C(Y_0)→ C(Y_1) the following hold.
(i) [d,P]=0.
(ii) [d,P']=[v_2,P].
(iii) [d,Q]=[v_3,P]+'.
(iv) P+Pd=.
Here, is as defined at the end of Subsection <ref>.
In (iii), argue as in the proof of Proposition <ref>
to handle
factorization through the trivial connection over X.□
Note that statements (i), (iii) are equivalent to the fact that the
homomorphism
Ψ=
([ P 0; Q P ])
:MC^*(Y_0)→ MC^*+2(Y_1)
satisfies
[D,Ψ]='.
The homomorphism I^*(Y_0)→ I^*+2(Y_1) induced by P will also be denoted
by P.
As maps I(Y_0)→ I(Y_1) the following hold.
(i) [u_2,P]=0.
(ii) [u_3,P]='.
(iii) P= u_2.
Combine Propositions <ref> and <ref>.□
If (Y_0)≥2 then
(Y_1)≥(Y_0)-1.
Let n:=(Y_0) and choose x∈ I(Y_0) such that u_3x=0 and
u_2^kx=
0 for 0≤ k<n-1,
1 for k=n-1.
By Proposition <ref> we have u_3Px=0 and
u_2^kPx= Pu_2^kx= u_2^k+1x=
0 for 0≤ k<n-2,
1 for k=n-2.
This shows that (Y_1)≥ n-1.□
§.§ Additivity of
Throughout this subsection, Y,Y_0,Y_1 will denote oriented homology
3–spheres. As before, Σ will denote the Poincaré homology sphere.
If (Y_j)≥1 for j=0,1 then
(Y_0# Y_1)≥(Y_0)+(Y_1).
Recall that there is a standard cobordism W from (-Y_0)∪(-Y_1)
to Y_0# Y_1. By attaching half-infinite tubular ends to W we obtain
a manifold X to which we can apply the results of
Subsection <ref>. The proposition now follows from
Proposition <ref>.□
If (Y_0)≥1 and (Y_1#(-Y_0))=0 then (Y_1)≥1.
This follows from Proposition <ref>.□
If (Y#)≥2 then
(Y)≥(Y#)-1.
This follows from Proposition <ref>
with Y_0=Y# and Y_1=Y.
□
In the following, we write Y_0∼ Y_1 to indicate that Y_0 and
Y_1 are homology cobordant.
If Y_0# Y_1∼Σ then (Y_0)+(Y_1)=1.
Let n_j:=(Y_j).
Case 1: n_0n_1=0. Without loss of generality we may assume that
n_1=0. By Proposition <ref> we have n_0≥1.
If n_0≥2 then,
since Y_0∼Σ#(-Y_1), Proposition <ref> would give
-n_1=(-Y_1)≥(Σ#(-Y_1))-1≥1,
a contradiction. Hence, n_0=1, so the lemma holds in this case.
Case 2: n_0n_1>0. We show that this cannot occur.
If n_0,n_1>0 then Proposition <ref> yields
1=(Σ)≥ n_0+n_1≥2,
a contradiction. Similarly, if n_0,n_1<0 then the same proposition yields
-1=(-Σ)≥2.
Case 3: n_0n_1<0. Then we may assume that n_0>0.
Applying Proposition <ref> we obtain
n_0=(Σ#(-Y_1))≥1-n_1≥2.
Proposition <ref> now gives -n_1≥ n_0-1.
Altogether, this shows that
n_0+n_1=1.□
(Y#Σ)=(Y)+1.
Apply the lemma with Y_0=Y#Σ and Y_1=-Y.□
For any oriented integral homology 3–spheres Y_0,Y_1 one has
(Y_0# Y_1)=(Y_0)+(Y_1).
Let n_j:=(Y_j) and Z_j:=Y_j#(-n_j)Σ, where mΣ denotes the connected
sum of m copies of Σ when m≥0 and of |m| copies of -Σ when m<0.
By Corollary <ref> we have (Z_j)=0, so by
Proposition <ref>,
0=(Z_0# Z_1)=(Y_0# Y_1#(-(n_0+n_1))Σ)=(Y_0# Y_1)-n_0-n_1.
□
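For example (our own illustration, combining the theorem with the computation of the invariant of Y_k=Σ(2,2k-1,4k-3) in Subsection <ref> below):
\[
(Y_k\# Y_l)=(Y_k)+(Y_l)=(k-1)+(l-1),
\]
so the invariant realizes every non-negative integer on connected sums of Brieskorn spheres of this type.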
§ FURTHER PROPERTIES OF . EXAMPLES
§.§ Proof of Theorem <ref>
Let W' be the result of connecting the two boundary components of W
by a 1–handle. Then W and W' have the same second cohomology group
and the same intersection form.
Let Z be the negative definite E_8–manifold (i.e. the result of
plumbing on the E_8 graph), so that the boundary of Z
is the Poincaré sphere . We will apply Theorem <ref>
to the boundary-connected sum
V:=W'#_∂ Z.
Let S,S'⊂ Z be embedded oriented 2–spheres corresponding to adjacent
nodes on the E_8 graph. These spheres both have self-intersection number
-2, and S· S'=1. Let
v=P.D.([S])∈ H^2(V,∂V)≈ H^2(V)
be the Poincaré dual of the homology class in V represented by S. Then
v·[S']=1, hence v is strongly admissible. The class
w∈ J_V represented by v satisfies w^2=-2, and
± w are the only classes in w+2J_V with square norm 2.
Theorem <ref>
and Proposition <ref> now yield
(Y)+1=(Y#Σ)≥1,
hence (Y)≥0 as claimed.□
§.§ Proof of Theorem <ref>
Theorem <ref> is an immediate consequence of the following
two propositions.
Let K,K' be knots in S^3 such that K' is obtained from K by changing
a positive crossing. Let Y,Y' be (-1) surgeries on K,K', respectively.
Then
0≤(Y')-(Y)≤1.
We observe that Y' is obtained from Y by (-1) surgery on a linking
circle c of the crossing such that c
bounds a surface in Y of genus 1.
The surgery cobordism W from Y to Y' satisfies H_1(W;)=0 and
b^+_2(W)=0, hence (Y')≥(Y) by Theorem <ref>. Since Y
bounds a simply-connected negative definite 4–manifold (the trace of the
surgery on K) we have (Y)≥0 by the same theorem.
Let Y” be 0–surgery on c.
By Floer's surgery theorem <cit.> there is a long exact sequence
⋯→ I(Y”)→ I(Y)ϕ→ I(Y')ψ→ I(Y”)→⋯
where ϕ is induced by the cobordism W.
Let n:=(Y') and suppose n≥2, the proposition already being proved
for n=0,1. Then there is a b∈ I(Y') such that
u_2^jb=
0, 0≤ j<n-1,
1, j=n-1.
By Proposition <ref> we have
ψ u_2b=u_2ψ b=0,
hence u_2b=ϕ a for some a∈ I(Y). For j≥0 we have
u_2^j a= u_2^jϕ a= u_2^j+1 b.
Combining this with Corollary <ref> we
obtain (Y)≥ n-1=(Y')-1 and the proposition is proved.□
If Y is (-1) surgery on a positive knot K in S^3 then (Y)=0.
This follows from Theorem <ref> because Y bounds
simply-connected 4–manifolds V_± where V_+ is
positive definite and V_- is negative definite. As V_- one can take the
trace of the (-1) surgery on K. On the other hand, since K can be
unknotted by changing a collection of positive crossings, the observation in
the beginning of the proof of Proposition <ref>
yields V_+.□
§.§ Proof of Proposition <ref>
Let Y_k:=(2,2k-1,4k-3). Then Y_k bounds the simply-connected
4–manifold V_k obtained by plumbing according to the weighted graph
in Figure 1,
where the total number of nodes is 4k.
Let e_1,…,e_{4k} be an orthonormal basis for ℝ^{4k}. The
intersection form of V_k is isomorphic to the lattice
Γ_{4k}:=
{∑_i x_ie_i :
2x_i∈ℤ, x_i-x_j∈ℤ, ∑_i x_i∈2ℤ},
with the nodes of the plumbing graph corresponding to the following
elements of Γ_{4k}:
1/2∑_{i=1}^{4k}e_i, e_2+e_3,
(-1)^j(e_{j-1}-e_j), j=3,…,4k.
Let w∈ J_k=H^2(V_k;ℤ) be the element corresponding to
1/2∑_{i=1}^{4k}e_i. Since ± w are the only elements
of minimal square norm
in w+2J_k it follows from Theorem <ref> that
(Y_k)≥ k-1.
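To spell out the arithmetic behind this application (our own verification, with square norm meaning the Euclidean norm on Γ_{4k}): every element of w+2J_k has all coordinates in 1/2+ℤ, hence square norm at least 4k·(1/4)=k, and
\[
|w|^2=\Bigl|\tfrac12\sum_{i=1}^{4k}e_i\Bigr|^2=4k\cdot\tfrac14=k,
\]
so k is the minimal square norm, attained exactly at ±w; thus n=2 and n/2=1 is odd, as required by Theorem <ref>.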
On the other hand, Y_k is also the result of (-1) surgery on the
torus knot T_{2,2k-1}. Since T_{2,2k-1} can be unknotted by changing k-1
crossings we deduce from Theorem <ref> that
(Y_k)≤ k-1.
This proves the proposition.□
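As a sanity check (our own remark): for k=2 we have Y_2=Σ(2,3,5), and the proposition gives
\[
(\Sigma(2,3,5))=2-1=1,
\]
in agreement with the value used for the Poincaré sphere in the proof of Theorem <ref> above, Σ being (-1) surgery on the trefoil T_{2,3}.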
§.§ Proof of Theorem <ref>
Since we will use different coefficient rings R, the homomorphism
δ:C^4(Y;R)→ R
defined in Subsection <ref> will now be denoted by
δ_R.
By definition, the condition h(Y)>0 means that there exists a cocycle
w∈ C^4(Y;ℤ) such that δ_ℤ w≠0. Note that replacing the
coefficient group ℤ by ℚ yields an equivalent condition.
On the other hand, the condition (Y)>0 means that there exists a
cocycle z∈ C^4(Y;ℤ/2) such that δ_{ℤ/2}z≠0 and such that the
cohomology class of z is annihilated by u_3. If in addition z lifts
to an integral cocycle z̃∈ C^4(Y;ℤ) then δ_ℤ z̃ must be odd,
in particular non-zero, hence h(Y)>0.
Now suppose (Y)>0 and h(Y)≤0.
The above discussion shows that the homomorphism
I^4(Y;ℤ)→ I^4(Y;ℤ/2) is not surjective, hence the Bockstein homomorphism
I^4(Y;ℤ/2)→ I^5(Y;ℤ) is non-zero. This proves the theorem.□
§.§ Proofs of Theorems <ref> and
<ref>
Proof of Theorem <ref>:
Part (i) was proved in <cit.> using Seiberg-Witten
theory. To prove (ii), let =(2,3,5). Then ()=1 by
Proposition <ref>. If H^2(X;) contains no 2–torsion
then (ii) follows from Corollary <ref>. Under the weaker assumption
that H^2(X;) contains no element of order 4, we can appeal to
Theorem <ref> since u_3=0 on I().□
Proof of Theorem <ref>:
Let be the
monopole h–invariant defined in <cit.>. (One could
equally well use the correction term d.) Then (Σ)=-1, and
additivity yields (Σ#Σ)=-2. If ξ is any
characteristic vector for J_X then by <cit.> one has
-(Y)≥1/8(b_2(X)+ξ·ξ).
Let J_X=m⟨-1⟩⊕ J'_X as in Corollary <ref>. By
assumption, J'_X is even, so
J_X has characteristic vectors ξ with ξ·ξ=-m. Therefore,
rank J'_X=b_2(X)-m≤16.
By the classification
of even unimodular definite forms of rank ≤16 (see <cit.>) one has
J'_X=0, -E_8, -2E_8, or -Γ_16.
It only remains to rule out J'_X=-Γ_16.
Recalling that Σ is the result of
(-1) surgery on the negative trefoil knot and applying
Proposition <ref>
twice we find that u_2^2=0 on I^*(Σ#Σ), hence
(Σ#Σ)≤2. On the other hand,
if J'_X=-Γ_16 then applying Theorem <ref> as in the proof
of Proposition <ref> we would obtain
(Σ#Σ)≥3, a contradiction. This proves the theorem.□
§ TWO POINTS MOVING ON A CYLINDER, I
The main goal of this section is to prove
Proposition <ref>. The first two subsections
will introduce some concepts used in the proof, which appears in the final
subsection.
§.§ Energy and holonomy
Let Y be an oriented (integral) homology 3–sphere with base-point y_0.
Let
𝔼→^*(Y[0])
be the canonical oriented Euclidean 3–plane bundle, where
Y[0]=[-1,1]× Y as in eqn:ybt-def.
Let α,β∈(Y), not both reducible. Over M(α,β)×ℝ there
is a canonical 3–plane bundle 𝔼_{αβ}
obtained by pulling back the universal bundle over
M(α,β)×ℝ× Y by the map (ω,t)↦(ω,t,y_0).
There is a canonical isomorphism 𝔼_{αβ}→ R^*𝔼 where
R:M(α,β)×ℝ→^*(0), (ω,t)↦ω[t],
so we can identify the fibre of 𝔼_{αβ} at (ω,t) with
the fibre 𝔼_{ω[t]} of 𝔼 at ω[t].
Recall from Subsection <ref>
that a section σ of 𝔼_{αβ} is called holonomy invariant if
for all ω=[A]∈ M(α,β) and real numbers s<t one has that σ(ω,s)
is mapped to σ(ω,t) by the isomorphism
𝔼_{ω[s]}→𝔼_{ω[t]}
defined by holonomy of A along the path [s,t]×{y_0}.
Let F be the set of elements of ^*(0) that can be
represented by flat connections.
Choose three sections ρ_1,ρ_2,ρ_3 of 𝔼 which form a positive
orthonormal basis at every point in some neighbourhood of F.
Choose ε>0 so small that the following three conditions hold:
(i) If A is any instanton over (-∞,2]× Y satisfying
E_A((-∞,2])<ε such that the flat limit α of A is
irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0].
(ii) If A is any instanton over [-2,∞)× Y satisfying
E_A([-2,∞))<ε such that the flat limit β of A is
irreducible then ρ_1,ρ_2,ρ_3 are orthonormal at A[0].
(iii) For each pair α,β∈(Y) the difference
cs(α)-cs(β)∈ℝ/ℤ has no real lift in the half-open interval
(0,2ε].
Here, E_A refers to the energy of A as defined in
eqn:def-energy.
Let α,β be distinct elements of (Y). If [A]∈ M(α,β) then
E_A(ℝ)>2ε,
since the left hand side is a positive real lift of
cs(α)-cs(β). We can therefore define smooth functions
τ^-,τ^+:M(α,β)→ℝ
implicitly by
E_A((-∞,τ^-(A)+2])=ε
=E_A([τ^+(A)-2,∞)).
We will consider the average and difference
τ_a:=1/2(τ^++τ^-), τ_d:=τ^+-τ^-.
Clearly, τ_d>0.
There are translationary invariant smooth restriction maps
R^±:M(α,β)→^*(0), ω↦ω[τ^±(ω)]
which, by the unique continuation result of
Proposition prop:unique-continuation-cylinder, descend to injective maps
Ř^±:(α,β)→^*(0).
If α is irreducible then for any ω=[A]∈ M(α,β) the vectors
ρ_i(R^-(ω)), i=1,2,3
form an orthonormal basis for 𝔼_{R^-(ω)}, by choice of ε.
Let ρ^-_i be the holonomy invariant section of 𝔼_{αβ} whose
value at (ω,τ^-(ω)) is ρ_i(R^-(ω)).
Similarly, if β is irreducible, then the vectors
ρ_i(R^+(ω)) form an orthonormal basis for 𝔼_{R^+(ω)}.
Let ρ^+_i be the holonomy invariant section of 𝔼_{αβ} whose
value at (ω,τ^+(ω)) is ρ_i(R^+(ω)).
If α,β are both irreducible let
h=(h_{ij}):M(α,β)→ SO(3)
be the map whose value at [A] is the holonomy of A along
[τ^-(A),τ^+(A)]×{y_0} with respect to the bases described above,
so that
ρ^-_j(ω,t)=∑_i h_{ij}(ω)ρ^+_i(ω,t).
§.§ Factorization through the trivial connection
Now assume ind(α)=4, ind(β)=1. We will introduce real valued
functions λ^± on M(α,β) which measure the extent
to which a given element factors through the trivial connection over Y.
Set
M_{α,θ}:=R^-(M(α,θ)),
which is a finite subset of ^*(0).
Let M_α be the union of all subsets
R^-(M(α,β'))⊂^*(0) where β'∈^*(Y) and
dim M(α,β')≤4. Note that M_α is compact.
Choose an open neighbourhood U_α of M_{α,θ} in ^*(0) such that
* the closure of U_α is disjoint from M_α,
* U_α is the disjoint union of open sets
U_{α,i}, i=1,…,r, each of which
contains exactly one point from M_{α,θ}.
Choose a closed neighbourhood U'_α of M_{α,θ} contained in U_α
and a smooth function
e_α:^*(0)→[0,∞)
such that e_α=1 on U'_α and e_α=0 outside U_α. Define the
translationary invariant function
λ^-:M(α,β)→[0,∞), ω↦ e_α(R^-(ω))·τ_d(ω).
The function λ^+ is defined in a symmetrical fashion (corresponding to
reversing the orientation of Y).
Let M_β be the union of all subsets
R^+(M(α',β))⊂^*(0) where α'∈^*(Y) and
dim M(α',β)≤4.
Choose an open neighbourhood V_β of
M_{θ,β}:=R^+(M(θ,β)) in ^*(0) such that
the closure of V_β is disjoint from M_β, and such that
V_β is the disjoint union of open sets
V_{β,j}, j=1,…,s, each of which
contains exactly one point from M_{θ,β}.
Choose a
closed neighbourhood V'_β of M_{θ,β} contained in V_β
and a smooth function
e_β:^*(0)→[0,∞)
such that e_β=1 on V'_β and e_β=0 outside V_β. Set
λ^+:M(α,β)→[0,∞), ω↦ e_β(R^+(ω))·τ_d(ω).
There is a constant C<∞ such that for any ω∈ M(α,β)
satisfying λ^-(ω)+λ^+(ω)>C one has λ^-(ω)=λ^+(ω).
Suppose the lemma does not hold. Then one can find a sequence ω_n
in M(α,β) such that λ^-(ω_n)+λ^+(ω_n)→∞ and
λ^-(ω_n)≠λ^+(ω_n). After passing to a subsequence we may assume
that the sequence ω_n chain-converges. If the chain-limit lay in
(α,β), or if the chain-limit involved factorization through
an irreducible critical point, then λ^±(ω_n) would be bounded.
Therefore, the chain-limit must lie in
(α,θ)×(θ,β) and, consequently,
λ^-(ω_n)=τ_d(ω_n)=λ^+(ω_n) for n≫0, a contradiction.□
In the course of the proof we also obtained the following:
For a chain-convergent sequence ω_n in M(α,β) the following are
equivalent:
(i) λ^-(ω_n)→∞.
(ii) λ^+(ω_n)→∞.
(iii) The chain-limit of ω_n lies in
(α,θ)×(θ,β).□
Since λ^+ will not appear again in the text, we set
λ:=λ^-
to simplify notation. For any real number T set
M_{λ=T}:={ω∈ M : λ(ω)=T}.
Given ω∈ M(α,β), one has R^-(ω)∈ U_α if λ(ω)>0
(by definition of λ), and R^+(ω)∈ V_β if λ(ω)≫0
(by Lemma <ref>).
Therefore, if T≫0 then there is a map
d:M(α,β)_{λ=T}→(α,θ)×(θ,β)
characterized by the fact that if d(ω)=(ω_1,ω_2) then
R^-(ω) and Ř^-(ω_1) lie in the same set U_{α,i}, and
R^+(ω) and Ř^+(ω_2) lie in the same set V_{β,j}.
Gluing theory (see <cit.>) provides the following result:
There is a T_0>0 such that for any T≥ T_0 the map
d× h×τ_a:
M(α,β)_{λ=T}→((α,θ)×(θ,β))× SO(3)×ℝ
is a diffeomorphism.□
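As a consistency check on the dimensions (our own bookkeeping, under the conventions above, by which the component of M(α,β) containing θ-factorizations is 5–dimensional): the sets (α,θ) and (θ,β) are finite, so the right hand side has dimension
\[
0+0+\dim SO(3)+1=4=\dim M(\alpha,\beta)-1,
\]
as expected for a level set of the function λ.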
§.§ Proof of Proposition <ref>
Let α,β∈^*(Y) with
ind(β)-ind(α)≡5 mod 8. To compute the
matrix coefficient ⟨(v_2v_3+v_3v_2)α,β⟩ we distinguish between two
cases. If ind(α)≢4 mod 8 the calculation will consist in counting
modulo 2 the number of ends of the 1-manifold 23(α,β).
If ind(α)≡4 mod 8 then M(α,β) may contain
sequences factoring through the trivial connection over Y. To deal
with this we consider the subspace of
M(α,β)×ℝ consisting of points (ω,t) with
λ(ω)≤ T for some large
T. By carefully cutting down this subspace to a 1-manifold and then
counting the number of ends and boundary points modulo 2 we obtain
eqn:v2v3chhom.
For s∈ℝ we define the translation map
T_s:ℝ× Y→ℝ× Y, (t,y)↦(t+s,y).
Part (I) Suppose ind(α)≢4 mod 8. Then
no sequence in M(,β) can have a chain-limit involving
factorization through the trivial connection.
We will determine the ends of the smooth 1-manifold 23(α,β).
Let (ω_n,t_n) be a sequence in
23(α,β). After passing to a subsequence we may assume that
the following hold:
(i) The sequence T^*_{-t_n}(ω_n)
converges over compact subsets of ℝ× Y to some
ω^-∈ M(α^-,β^-). (By this we mean
that there are connections
A_n,A̅ representing ω_n,ω^- respectively, such that
A_n→A̅ in C^∞ over compact subsets of ℝ× Y.)
(ii) The sequence T^*_{t_n}(ω_n) converges over compact subsets of ℝ× Y
to some ω^+∈ M(α^+,β^+).
(iii) The sequence t_n converges in [-∞,∞] to some point
t_∞.
Here, [-∞,∞] denotes the compactification of the real line obtained
by adding two points ±∞.
Suppose (ω_n,t_n) does not converge in 23(α,β).
Case 1: t_∞ is finite. Then M(α^-,β^-) has
dimension 4 and either α^-=α or β^-=β. The corresponding
number of ends of 23(α,β), counted modulo 2, is
⟨(dϕ+ϕ d)α,β⟩.
Case 2: t_∞=∞. Let n^± be the dimension of
M(α^±,β^±). Because
s_1(ω^-[0])=0, s_2(ω^+[0])∧ s_3(ω^+[0])=0
we must have n^-≥3 and n^+≥2. On the other hand,
n^-+n^+≤ dim M(α,β)=5,
so n^-=3, n^+=2. It follows that
α=α^-, β^-=α^+, β^+=β.
The corresponding number of ends of 23(α,β) is
⟨v_2v_3α,β⟩ modulo 2.
Case 3: t_∞=-∞. Arguing as in Case 2 one finds that the number
of such ends of 23(α,β) is
⟨v_3v_2α,β⟩ modulo 2.
Since the total number of ends of 23(α,β) must be zero modulo 2,
we obtain the equation eqn:v2v3chhom in the case
ind(α)≢4 mod 8.
Part (II) Now suppose ind(α)≡4 mod 8.
We will again make use of a cut-off function b as in eqn:b-prop1 in
Subsection <ref>,
but we now impose two further conditions, namely
b(0)=1/2, b'(t)>0 for -1<t<1.
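Both extra conditions can be arranged starting from any cut-off as in eqn:b-prop1 (our own normalization, not taken from the text): if b_0 is smooth with b_0=0 on (-∞,-1], b_0=1 on [1,∞) and b_0'>0 on (-1,1), then
\[
b(t):=\frac{b_0(t)}{b_0(t)+b_0(-t)}
\]
again satisfies eqn:b-prop1 and in addition b(0)=1/2 and
\[
b'(t)=\frac{b_0'(t)\,b_0(-t)+b_0(t)\,b_0'(-t)}{(b_0(t)+b_0(-t))^2}>0
\quad\text{for } -1<t<1 .
\]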
Set
c:×→, (,t)↦ b(t-τ_a()).
Choose generic 3×3 matrices A^+=(a^+_ij) and A^-=(a^-_ij) and
for j=1,2,3 define a section ρ_j of the bundle R^*
over M(,β)× by
ρ_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijρ^+_i.
Define a function g:M(,β)×→[0,1] by
g(,t):=b(()-1)· b(τ^+()-t)· b(t-τ^-()).
For j=1,2,3 we now define a section s_j of R^* by
s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·ρ_j(,t).
Let 23(α,β) be the subspace of M(α,β)×ℝ consisting of those
points (ω,t) that satisfy the following conditions:
* s_1(ω,-t)=0,
* s_2(ω,t) and s_3(ω,t) are linearly dependent.
To understand the ends of 23(,β)
we will need to know that certain subspaces of M(,θ) and
M(θ,β), respectively, are “generically” empty.
These subspaces are defined as
follows. For ∈ M(,θ) and j=1,2,3 let
s_j():=(1-b(-τ^-()))· s_j([0])+b(-τ^-())
∑_ia^-_ijρ^-_i(,0),
and for ∈ M(θ,β) let
s_j():=(1-b(τ^+()))· s_j([0])+b(τ^+())
∑_ia^+_ijρ^+_i(,0).
Set
M_2(,θ) :={∈ M(,θ) s_2()∧ s_3()=0},
M_3(,θ) :={∈ M(,θ) s_1()=0}.
Replacing (,θ) by (θ,β) in the last two definitions
we obtain subspaces M_k(θ,β) of M(θ,β).
For k=2,3, each of the spaces M_k(,θ) and M_k(θ,β)
has expected dimension
1-k and is therefore empty for “generic” choices of sections s_j and
matrices A^±.
There is a constant C_0<∞ such that for all
(,t)∈23(,β) one has
|t|≤min(-τ^-(),τ^+())+C_0.
We must prove that both quantities |t|+τ^-() and
|t|-τ^+() are uniformly bounded above for (,t)∈23(,β).
The proof is essentially the same in both cases, so we will only spell it out
in the first case. Suppose, for contradiction, that (_n,t_n)
is a sequence in 23(,β)
with |t_n|+τ^-(_n)→∞.
After passing to a subsequence we may assume
that the sign of t_n is constant, so |t_n|=-et_n for some constant
e=±1. Then [et_n]→ by exponential decay
(see <cit.>), and
s_j(,et_n)=s_j(_n[et_n]) for n≫0.
If e=1 then this gives
0=s_2(_n[t_n])∧ s_3(_n[t_n])→ s_2()∧ s_3(),
as n→∞, whereas if e=-1 we get
0=s_1(_n[-t_n])→ s_1().
However, for “generic” sections s_j, both s_2()∧ s_3()
and s_1() are non-zero. This contradiction proves the lemma.
□
For any constant C_1<∞ there is constant L>0 such that for
all (,t)∈23(,β) satisfying ()≥ L one has
|t|≤min(-τ^-(),τ^+())-C_1.
Suppose to the contrary that there is a constant C_1<∞ and a
sequence (_n,t_n) in 23(,β) such that (_n)→∞
and
|t_n|>min(-τ^-(_n),τ^+(_n))-C_1.
After passing to a subsequence we may assume that at least one of the following
two conditions holds:
(i) |t_n|>-τ^-(_n)-C_1 for all n,
(ii) |t_n|>τ^+(_n)-C_1 for all n.
The argument is essentially the same in both cases, so suppose (i) holds. By
Lemma <ref> we also have
|t_n|≤-τ^-(_n)+C_0,
hence the sequence τ^-(_n)+|t_n| is bounded. Since
(_n)→∞ we have τ_d(_n)→∞, so
τ^+(_n)+|t_n|=τ_d(_n)+(τ^-(_n)+|t_n|)→∞.
After passing to a subsequence we may assume that
* the sequence _n chain-converges;
* the sequence τ^-(_n)+|t_n| converges to a real number;
* |t_n|=-et_n for some constant e=±1.
From Lemma <ref> we deduce that '_n:=^*_et_n_n
converges over compact subsets of to
some ∈ M(,θ). For large n we have c(_n,et_n)=0
and
g(_n,et_n)=b(et_n-τ^-(_n))=b(-τ^-('_n))→ b(-τ^-()).
For j=1,2,3 we now get
s_j(_n,et_n)→ s_j().
But then lies in
M_2(,θ) (if e=1) or in M_3(,θ) (if e=-1),
contradicting the fact that the latter two spaces are empty.□
Choose L≥2 such that for all (,t)∈23(,β) with
()≥ L one has
|t|≤min(-τ^-(),τ^+())-1,
which implies that s_j(,t)=ρ_j(,t). Set
23(,β):={(,t)∈23(,β)()≥ L}.
We will show that 23(,β) is transversely cut and therefore
a one-manifold with boundary, and determine the number of boundary
points and ends modulo 2. We will see that the number of ends is given by
the same formula as in Part (I), whereas the boundary points contribute the
new term ' of eqn:v2v3chhom.
Ends of 23(,β):
Let (_n,t_n) be a sequence in
23(,β). After passing to a subsequence we may assume that
(i), (ii), (iii) of Part (I) as well as the following hold:
(iv) The sequence _n is chain-convergent.
(v) The sequence τ_a(_n) converges in [-∞,∞].
(vi) Either (_n)>0 for all n, or (_n)=0 for all n.
Suppose (_n,t_n) does not converge in 23(,β).
Case 1: (_n)=0 for all n. Then g(_n,t_n)=0 and therefore
s_j(_n,t_n)=s_j(_n[t_n]).
This case is similar to Part (I) and the corresponding number of ends of
23(,β), counted modulo 2, is
(v_2v_3+v_3v_2+dϕ+ϕ d),β,
where ϕ is defined as before.
Case 2: (_n)>0 for all n. We show this is impossible.
By definition of the
chain-limit of _n must lie in (,β), so
τ_d(_n) is bounded. By Lemma <ref>, the sequence
τ^-(_n) is bounded above whereas τ^+(_n) is bounded below,
hence both sequences must be bounded.
Applying Lemma <ref>
again we see that t_n is bounded. Therefore, both sequences
τ_a(_n) and t_n converge in , so (_n,t_n) converges
in M(,β)× and hence in 23(,β),
which we assumed was not the case.
Boundary points of 23(,β): Let M=M(3,) be the space of
all 3×3 real matrices, and let U⊂ M be the open subset
consisting of those matrices B satisfying
B_1≠0, B_2∧ B_3≠0,
where B_j denotes the jth column of B. Then M∖ U is the union
of three submanifolds of codimension at least two, hence U is a connected
subspace and a dense subset of M. Let
F:3××× U× U →^3×^3×^3,
(H,v,w,B^+,B^-) ↦(F_1,F_2,F_3),
where
F_1 =(1-b(v))HB^-_1+b(v)B^+_1,
F_j =(1-b(w))HB^-_j+b(w)B^+_j, j=2,3.
Then F is a submersion, so F(0,0,0) is empty. Moreover, the set
Z:=F({0}× L(^3),
consisting of those points in the domain of F for which
F_1=0, F_2∧ F_3=0,
is a codimension 5 submanifold and a closed subset of
3×^2× U^2.
The projection π:Z→ U^2 is a proper map whose mod 2 degree is
_2(π)=1.
The equations eqn:FFF imply -1<v,w<1, hence π is
proper. To compute its degree,
let e_1,e_2,e_3 be the standard basis for ^3 and let B^± be given by
B^-_1=B^-_2=e_1, B^-_3=e_2,
B^+_1=-e_1, B^+_2=e_1, B^+_3=-e_2.
We show that the preimage
Z':=π(B^+,B^-) consists of precisely one point.
Suppose (H,v,w)∈ Z'. Because 0≤ b≤1, the equation F_1=0 implies
b(v)=1/2 and hence v=0, He_1=e_1, F_2=e_1. Because
He_2⊥ e_1, the vectors F_2,F_3 are linearly dependent if and only if
F_3=0, which yields w=0, He_2=e_2. Thus,
Z'={(I,0,0)},
where I is the identity matrix.
Using the fact that f(I,0,0)=(0,e_1,0)
and that the tangent space to L^*(^3) at (e_1,0) is
^3×{0}+ e_1 it is easy to see that the map
F( · , · , · ,B^+,B^-):3××→^9
is transverse to
{0}× L^*(^3) at (I,0,0), or equivalently, that (B^+,B^-)
is a regular value of π. This proves the claim.□
By Lemma <ref> we can identify
∂23(,β)=
(,θ)×(θ,β)×π(A^+,A^-),
where (H,v,w) corresponds to (h(),-t-τ_a(),t-τ_a()) for
(,t)∈∂23(,β).
Hence, for generic matrices A^± the number of boundary points of
23(,β), counted modulo 2, is ',β.
This completes the proof of Proposition <ref>.
□
§ TWO POINTS MOVING ON A CYLINDER, II
Let Y be an oriented homology 3–sphere.
In this section we will prove Proposition <ref>, which concerns
a certain cochain map
ψ:C^*(Y)→ C^*+5(Y)
appearing in the proof of additivity of .
We will continue using the notation
introduced in Section <ref>.
§.§ The cochain map ψ
We begin by recalling the definition of ψ in degrees
different from 4 mod 8 given in Subsection <ref>.
Let s_1,s_2 be "generic" sections of the canonical 3–plane bundle
→^*(Y[0]).
(Later we will impose further conditions on s_1,s_2.)
For any ,β∈^*(Y) set
33(,β):={(,t)∈× s_1([-t])=0=s_2([t])}.
If (,β)=5 and ()≢48 then
arguing as in Part (I) of the proof of Proposition <ref>
one finds that
33(,β) is a finite set. We define the matrix coefficient
ψ,β by
ψ,β:=#33(,β).
Recall that any "generic" section of defines a cup product
C^*(Y)→ C^*+3(Y) by the formula eqn:v3def. Let v_3 and v'_3
be the cup products defined by s_1 and s_2, respectively.
For q≢3,4 mod 8 one has
dψ+ψ d=v_3v'_3+v'_3v_3
as maps C^q(Y)→ C^{q+6}(Y).
Let ,∈^*(Y) with (,)=6 and
()≢3,48. Note that no sequence in M(,) can
have a chain-limit involving factorization through the trivial connection.
Now let (_n,t_n) be a sequence
in 33(,). After passing to a subsequence we may assume that
(i) The sequence ^*_t_n_n converges over compact subsets of
to some point ^+∈ M(^+,^+).
(ii) The sequence ^*_-t_n_n converges over compact subsets of
to some point ^-∈ M(^-,^-).
(iii) The sequence t_n converges in [-∞,∞] to some point
t_∞.
Clearly, s_1(^+[0]=0=s_2(^-[0]), hence (^±,^±)≥3.
Case 1: t_∞ finite. Then (^+,^+)=5 and either
^+= or ^+=. The corresponding number of ends of
33(,), counted modulo 2, is
(dψ+ψ d),.
Case 2: t_∞=∞. Then (^±,^±)=3, so
^-=, ^-=^+, and ^+=. The corresponding number of ends of
33(,) is v_3v'_3, modulo 2.
Case 3: t_∞=-∞. As in Case 2 one finds that the number of such
ends is v'_3v_3, modulo 2.
Since the total number of ends of 33(,) must be zero modulo 2,
we obtain the proposition.□
We now show that v_3=v'_3 if the sections s_1,s_2
are close enough in a certain
sense. To make this precise, we introduce the following
terminology: we will say a section s of 𝔼 has
Property 4 if for all
α,β∈^*(Y) with dim M(α,β)≤4 the map
s_{αβ}:M(α,β)→𝔼, ω↦ s(ω[0])
is transverse to the zero-section in 𝔼.
Suppose s∈Γ(𝔼) has Property 4, and let V be any
finite-dimensional linear
subspace of Γ(𝔼). Then for any sufficiently small σ∈ V
the following hold:
(i) The section s':=s+σ has Property 4.
(ii) The sections s and s' define the same cup product
C^*(Y)→ C^{*+3}(Y).
Let dim M(α,β)=3.
Combining the transversality assumption with a compactness argument
one finds that the zero-set Z of s_{αβ} is a finite set.
Now observe that the map
M(α,β)× V→𝔼, (ω,σ)↦(s+σ)(ω[0])
is smooth, since V has finite dimension. Therefore, given any
neighbourhood U of Z in M(α,β), the zero-set of
(s+σ)_{αβ} is contained in U for all sufficiently small σ.
The lemma now follows by applying the implicit function theorem to the map
eqn:sfrpmap.□
From now on we assume that s_1,s_2 are sufficiently close in the sense of the
lemma, so that in particular v_3=v'_3. Since we are taking coefficients
in /2, we deduce from Proposition <ref> that dψ=ψ d
in degrees different from 3 and 4 modulo 8.
We now extend the definition of ψ to degree 4.
Let ,β∈^*(Y) with
()=4 and (β)=1. To define ψ,β we use
the set-up of Subsections <ref> and <ref>
and define ρ_j, s_j for j=1,2
as in Subsection <ref>, where A^± should now be
generic 3×2 real matrices. In particular, we require that
A^± should have non-zero columns and that the angle between the columns
of A^+ should be different from the angle between the columns
of A^-. For any 3×2 real matrix B with non-zero columns B_j
set
ν(B):= B_1,B_2/B_1B_2,
using the standard scalar product and norm on ^3.
Then the above assumption on the angles means that ν(A^+)≠ν(A^-).
Now define
33(,β):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}.
33(,β) is a finite set.
It is easy to see that Lemmas <ref>
and <ref> hold with 33(,β) in place of
23(,β). Arguing as in the proof of Proposition <ref>
one finds that for any L>0 there are only finitely many points
(,t)∈33(,β) with ()≤ L. Choose L≥2 such that
for all (,t)∈33(,β) with ()≥ L one has
|t|≤min(-τ^-(),τ^+())-1,
which implies that s_j(,t)=ρ_j(,t). We claim that
there are no such (,t). For suppose (,t)
is such an element and set
(H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××.
Then for j=1,2 one has
(1-b(v_j))HA^-_j+b(v_j)A^+_j=0.
However, there is no solution (H,v_1,v_2) to these equations, since we
assume the columns A^±_j are non-zero and ν(A^+)≠ν(A^-).□
We define ψ in degree 4 by
ψ,β:=#33(,β).
If the endomorphism ψ is defined in terms of “generic” sections s_1,s_2
that are sufficiently close then
dψ=ψ d
as maps C^*(Y)→ C^*+6(Y).
Although we could deduce this from Proposition <ref> below,
we prefer to give a direct proof, partly because the techniques involved
are also needed in the proof of Proposition <ref>.
It only remains to prove this in degrees 3 and 4 modulo 8. There is a
complete symmetry between these two cases because of
Lemma <ref>, so we will spell out the proof only in
degree 4. Let ,∈^*(Y) with ()=4, ()=2.
We will show that (dψ+ψ d),=0 by counting the ends of
a certain 1–dimensional submanifold
33(,) of M(,)×.
For any '∈(Y) we define a smooth function
:M(',)→
as follows.
For each β∈^1_Y let K_β be the union of all subsets
R^+(M(”,))⊂^*(Y[0]) where β≠”∈(Y) and
(”,)≤(β,),
where ( · , · ) is as in eqn:cs-al-beta.
Then K_β is compact. Choose a closed neighbourhood
W_β in ^*(Y[0]) of the finite set R^+(M(β,)) such that
W_β is disjoint from K_β, and a smooth function
f_β:^*(Y[0])→[0,1]
such that the following two conditions hold:
* W_β and W_β' are disjoint if β≠β';
* f_β=1 on a neighbourhood of R^+(M(β,)),
and f_β=0 outside W_β.
Set f:=1-∑_β f_β.
Let be the set of all β∈^1_Y such that
(',)>(β,)>0.
For ∈ M(',) and β∈ we
define τ^+_β()∈ implicitly by
_([τ^+_β()-2,∞))=(β,)+,
where the constant is as in Subsection <ref>, and set
():=f(R^+())·τ^+()+
∑_β f_β(R^+())·τ^+_β().
The function behaves under translation in the same way as
τ^±. Namely, for any real number s one has
(^*_s())=()-s.
For any ∈ M(',) let
() denote the restriction of to the band ().
For i=1,2,3 let i be the holonomy invariant section of
'β whose value at (,()) is ρ_i(()).
lemma
Let _n be a chain-convergent sequence in M(',).
If the last term of the chain-limit of _n lies in (β,)
for some β∈^*(Y) of index 1 then
(τ^+-)(_n)→∞,
otherwise the sequence (τ^+-)(_n) is bounded.
Because of the translationary invariance of τ^+- we may
assume that τ^+(_n)=0. Then _n converges over compact subsets of
to some element ∈ M(”,) representing the last term
in the chain-limit of _n. In fact, because no energy
can be lost at ∞ by the choice of , there are, for any real number
r, connections A_n,A representing _n,, respectively, such
that
A_n-A_L^p,w_1((r,∞)× Y)→0,
as follows from the exponential decay results of <cit.>.
Here, p,w are as in the definition of the space of connections
in Section <ref>.
Suppose first that β:=” is irreducible of index 1. Then
(_n)=τ^+_β(_n) for n≫0 and
(τ^+-τ^+_β)(_n)=-τ^+_β(_n)→∞,
proving the first assertion of the lemma.
Now suppose the sequence
(τ^+-)(_n) is not bounded. After passing to a subsequence we may
assume that there exists a β∈ such that for each n one has
R^+(_n)∈ W_β. Suppose, for contradiction, that ”≠β.
Since W_β is closed we must have R^+()∈ W_β
as well, hence
(”,)>(β,).
From eqn:anai we deduce that
τ^+_β(_n)→τ^+_β(),
so
(-τ^+)(_n)=τ^+_β(_n) is bounded. This contradiction shows
that ”=β.□
If _n is a sequence in M(',) which converges over compacta
to ∈ M(”,), where ”∈(Y) and
(”)≠1, then
(_n)→().
Let β∈^1_Y with (β,)>0.
If (”,)≤(β,) then R^+()∉W_β. Since
W_β is closed, we have R^+(_n)∉W_β for n≫0. This means
that β contributes neither to () nor to (_n) for
n≫0. If on the other hand (”,)>(β,) then
τ^+_β(_n)→τ^+_β().
From this the lemma follows.□
Let and be the real-valued functions on M(,) defined by
:=1/2(+τ^-), :=1/2(-τ^-).
Let
:M(,)→[0,∞), ↦ e_(R^-())·(),
where e_ is as in eqn:eal. As the following lemma shows,
the quantity () measures the extent to which
factors through the trivial connection θ over Y.
Let _n be a chain-convergent sequence in M(,).
If the first term of the chain-limit of _n lies in (,θ) then
(_n)→∞,
otherwise the sequence (_n) is bounded.
Because of the translationary invariance of we may assume
τ^-(_n)=0 for all n,
so that the sequence _n converges over compact subsets of to some
∈ M(,β), where β∈(Y). Then represents
the first term of the chain-limit of _n.
Part I. Suppose first that β=θ. We will show that
(_n)→∞.
There are two sequences of real numbers such that
* ^*_1(_n) converges over compact subsets of to an
element of M(,θ).
* ^*_2(_n) converges over compact subsets of to an
element of M(θ,β'), where β' is an element of ^*(Y) which
is either equal to or has index 1.
* 2-1→∞.
Define the sequence r_n of real numbers implicitly by
__n((-∞,r_n])=(,θ)+.
Then r_n<τ^+(_n) and r_n<τ^+_β(_n) for all β∈_,
hence r_n<(_n). For large n one therefore has
(_n)=(_n)-τ^-(_n)>r_n-τ^-(_n).
But
1-τ^-(_n), 2-r_n
are both bounded sequences and 2-1→∞, hence
(_n)>r_n-τ^-(_n)→∞.
Part II. Now suppose β is irreducible. We will show that
the sequence (_n) is bounded.
Case 1: β=. Then _n converges to in
M(,), hence (_n) is bounded.
Case 2: (,β)≤4. For large n one would then have
R^-(_n)∉U_, hence e_(R^-(_n))=0 and therefore
(_n)=0.
Case 3: (,β)=5, i.e. (β)=1.
For large n one would then have
R^+(_n)∈ W_β and therefore
(_n)=e_(_n[0])·τ^+_β(_n)
→ e_([0])·τ^+(),
so that (_n) is bounded in this case, too.□
Given '∈(Y), a real number d, and a real 3×2 matrix
A'=(a'_ij) of maximal rank we define two sections ζ_1,ζ_2 of
' by
ζ_j(,t):=b^+ j+(1-b^+)∑_i=1^3a'_ijρ^+_i,
where b^+:=b(τ^+--d). Here, and in the remainder of this
section, b:→ is a smooth function satisfying eqn:b-prop1
and eqn:b-prop2.
We will show that for '= and generic matrix A' the sections
ζ_1,ζ_2 are linearly independent at any point
(,t)∈ M(,)× with ()≫0. We begin by spelling
out sufficient conditions on A' under which this holds.
For any β∈^1_Y the finite set
(θ,β)×(β,) is in 1-1 correspondence with
the set of points (,')∈ M(θ,β)× M(β,)
satisfying
τ^+()=0=τ^+(').
(In other words, this is one way of fixing translation.) For each such pair
(,'), represented by a pair (A,A') of connections, say, the holonomy
of A along the path [0,∞)×{y_0} composed with the holonomy of
A' along (-∞,0]×{y_0} defines an isomorphism
_,':_[0]→_'[0].
For any real number r and j=1,2 let
η_j(r)=r·_,'(ρ_j([0]))+
(1-r)∑_i=1^3a'_ijρ_i('[0]).
Then the set
C:={r∈[0,1]η_1(r)∧η_2(r)=0}
has expected dimension 1-2=-1 and is empty for generic matrices A'.
Since (Y) is finite we conclude that for generic A', the set C
is empty for any β∈^1_Y and any
(,')∈ M(θ,β)× M(β,) satisfying ttom.
From now on we assume A' is chosen so that this holds.
Let A' be as described above.
If d>0 is sufficiently large then
the sections ζ_1,ζ_2 are linearly independent at every
point in M(θ,)×.
If the lemma were false then we could find a sequence d_n of real
numbers converging to ∞ and for each n an element
_n∈ M(θ,) such that ζ_1,ζ_2, defined with d_n
in place of d, are linearly dependent at (_n,t) for some (hence any)
t. Because A' has maximal rank and the assumptions on ensure that
ρ_1,ρ_2,ρ_3 are linearly independent at R^+(_n), we must have
b^+(_n)>0, i.e.
(τ^+-)(_n)>d_n-1,
which shows that (τ^+-)(_n)→∞. After passing to a subsequence
we can assume that the sequence _n is chain-convergent and that
b^+(_n) converges to some r∈[0,1]. By Lemma <ref>
the chain-limit lies in (θ,β)×(β,) for some
β∈^1_Y. Then the sequences
^*_τ^+(_n)(_n), ^*_τ^+_β(_n)(_n)
converge over compact subsets of to some ∈ M(θ,β) and
'∈ M(β,), respectively, and ttom holds. But then
η_1(r) and η_2(r) are linearly dependent, contradicting the assumption
on A'.□
From now on we assume that d is chosen so that the
conclusion of Lemma <ref> holds.
lemma
There is a constant T_1<∞ such that the sections ζ_1,ζ_2
are linearly independent at every point (,t)∈ M(,)× with
()>T_1.
Recall that if ζ_1,ζ_2 are linearly independent at
(,t) for some real number t then the same holds at (,t') for all
t'. Now suppose the lemma
were false. Then we could find a sequence _n in M(,) such that
(_n)→∞ and ζ_1(_n,t),ζ_2(_n,t) are linearly
dependent for every n. We may also arrange that τ^+(_n)=0.
After passing to a subsequence we may assume that
_n is chain-convergent. From Lemma <ref> we see that
there are two possibilities for the chain-limit.
Case 1: The chain-limit of _n lies in
(,θ)×(θ,β)×(β,) for some
β∈^1_Y. Then (_n)=τ^+_β(_n) for n≫0.
Let ∈ M(θ,β) be a representative for the
middle term of the chain-limit. By Lemma <ref> we have
(τ^+-)(_n)→∞, so for t_n:=() one has
ζ_j(_n,t_n)→ρ_j(R^+()),
contradicting the fact that the ρ_j are linearly independent at
R^+().
Case 2: The chain-limit of _n lies in
(,θ)×(θ,). Then _n converges over compact
subsets of to some ∈ M(θ,) satisfying
τ^+()=0. According to Lemma <ref> we have
(_n)→(), so
ζ_j(_n,t)→ζ_j(,t)
for any t. Hence, ζ_1,ζ_2 must be linearly dependent at
(,t). But d was chosen so that the conclusion of
Lemma <ref> holds, so we have a contradiction.□
At any point (,t)∈ M(',)× where ζ_1,ζ_2
are linearly independent let ξ_1(,t),ξ_2(,t) be the
orthonormal pair of vectors in _[t] obtained by applying the
Gram-Schmidt process to ζ_1(,t) and ζ_2(,t), and let
ξ_3=ξ_1×ξ_2 be the fibrewise cross-product of ξ_1 and ξ_2.
Then {ξ_j(,t)}_j=1,2,3 is a positive orthonormal basis for
_[t].
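In coordinates, the passage from (ζ_1,ζ_2) to the positive orthonormal frame (ξ_1,ξ_2,ξ_3) is just Gram-Schmidt followed by a cross product; a small numeric sketch, treating the rank-3 fibre as R^3 (an illustrative assumption, with hypothetical names):

import numpy as np

def positive_frame(zeta1, zeta2):
    # Gram-Schmidt on two linearly independent vectors, completed to a
    # positively oriented orthonormal basis by the fibrewise cross product.
    xi1 = zeta1 / np.linalg.norm(zeta1)
    w = zeta2 - np.dot(zeta2, xi1) * xi1     # remove the xi1-component
    xi2 = w / np.linalg.norm(w)
    xi3 = np.cross(xi1, xi2)
    return xi1, xi2, xi3

xi1, xi2, xi3 = positive_frame(np.array([1.0, 2.0, 0.0]),
                               np.array([0.0, 1.0, 1.0]))
# The resulting frame is orthonormal and positively oriented:
assert np.isclose(np.linalg.det(np.column_stack([xi1, xi2, xi3])), 1.0)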
We now have the necessary ingredients to define the cut-down moduli space
33(,). Set
c:M(,)×→[0,1], (,t)↦ b(t-())
and for j=1,2,3 define a section _j of the bundle _ over
M(,)× by
_j:=(1-c)∑_ia^-_ijρ^-_i+c∑_ia^+_ijξ_i.
Choose a constant T_1 for which the conclusion of Lemma <ref>
holds and define a function g:M(,)×→[0,1] by
g(,t):=b(()-T_1)· b(()-t)· b(t-τ^-()).
For j=1,2,3 we now define a section s_j of _ by
s_j(,t):=(1-g(,t))· s_j([t])+g(,t)·_j(,t).
Now set
33(,):={(,t)∈× s_1(,-t)=0, s_2(,t)=0}.
In the study of the ends of 33(,) we will encounter certain
subspaces of M(θ,) which we now define. For ∈ M(θ,)
and j=1,2 set
s_j():=(1-b(()))· s_j([0])
+b(())∑_i=1^3a^+_ijξ_i(,0)
and define
M_3;j(θ,):={∈ M(θ,) s_j()=0}.
This space has expected dimension 2-3=-1 and is empty for “generic”
choices of sections s_j and matrix A^+.
There is a constant C_0<∞ such that for all
(,t)∈33(,) one has
|t|≤min(-τ^-(),())+C_0.
That |t|+τ^-() is uniformly bounded above for
(,t)∈33(,) is proved in the same way as the corresponding part
of Lemma <ref>. To prove the same for |t|-(),
suppose there were a sequence (_n,t_n)∈33(,) with
|t_n|-(_n)→∞.
After passing to a subsequence we may assume the following.
* The sequence _n is chain-convergent;
* There is a constant e=±1 such that |t_n|=et_n for all n;
* The sequence et_n-τ^+(_n) converges in [-∞,∞] to
some point t.
Let j:=1/2(3+e). Then for n≫0 we have
0= s_j(_n,et_n)=s_j(_n[et_n]).
According to Lemma <ref> one of the following two cases
must occur.
Case 1: The sequence (τ^+-)(_n) is bounded. Then
et_n-τ^+(_n)→∞, so _n[et_n]→. By continuity of
s_j we must have s_j()=0, which however will not hold for a
“generic” section s_j.
Case 2: (τ^+-)(_n)→∞. From Lemma <ref>
we deduce that ^*_τ^+(_n)(_n) converges over compact subsets of
to some ∈ M(β,), where β∈^1_Y. Then
(_n)=τ^+_β(_n) for n≫0. Furthermore,
^*_τ^+_β(_n) converges over compacta to an element of some
moduli space M(',β), where β≠'∈(Y).
Case 2a: t=±∞. Then the exponential decay results of
<cit.> imply that
_n[et_n] converges to (if t=-∞) or to
(if t=∞). This is ruled out in the same way as Case 1.
Case 2b: t finite. Then ^*_et_n(_n) converges over compacta
to ':=^*_t()∈ M(β,), and _n[et_n]→'[0].
But then s_j('[0])=0, which will not hold for a “generic” section
s_j of the bundle , since M(β,) has dimension 1
whereas has rank 3.□
For any constant C_1<∞ there is constant L>0 such that for
all (,t)∈33(,) satisfying ()≥ L one has
|t|≤min(-τ^-(),())-C_1.
If not, then there would be a constant C_1<∞ and
a sequence (_n,t_n)∈33(,) with (_n)→∞
such that either
(i) |t_n|>-τ^-(_n)-C_1 for all n, or
(ii) |t_n|>(_n)-C_1 for all n.
Case (i) is ruled out as in the proof of Lemma <ref>. Now
suppose (ii) holds. Because (_n)→∞ we have
(_n)→∞.
From Lemma <ref> we deduce that |t_n|-(_n) is bounded,
so
|t_n|-τ^-(_n)→∞.
This implies that c(_n,t_n)=1 for n≫0. After passing to a subsequence
we may assume that the sequence _n chain-converges and
|t_n|=-et_n for some constant e=±1.
Case 1: (τ^+-)(_n) is bounded. By Lemmas <ref>
and <ref> the chain-limit of _n must lie in
(,θ)×(θ,), so after passing to a subsequence
we may assume that '_n:=^*_et_n(_n) converges over compacta to some
∈ M(θ,). Using Lemma <ref> we obtain
g(_n,et_n)=b((_n)-et_n)=b(('_n))→ b(()).
Let j:=1/2(3+e). Then
0= s_j(_n,et_n)→ s_j().
But then lies in M_3;j(θ,), which is empty by choice of the
matrix A^+.
Case 2: (τ^+-)(_n)→∞. Then the chain-limit of _n
lies in (,θ)×(θ,β)×(β,) for
some β∈^1_Y. For large n we now have
(_n)=τ^+_β(_n) and ξ_j(_n,et_n)= j(_n,et_n),
j=1,2. After passing to a subsequence we may assume that
'_n:=^*_et_n(_n) converges over compacta to some
∈ M(θ,β). For large n we have
g(_n,et_n)=b(τ^+_β(_n)-et_n)=b(τ^+_β('_n))→
b(τ^+()).
Let j:=1/2(3+e). Then
0= s_j(_n,et_n)→(1-b(τ^+()))· s_j([0])
+b(τ^+())∑_ia^+_ijρ^+_i(,0).
Thus, lies in M_3(θ,β), which is empty by choice of A^+.
□
There is a constant L<∞ such that for all (,t)∈33(,)
one has ()<L.
For any (,t)∈33(,) with ()>T_1 let
h()∈3 be the matrix whose coefficients h_ij() are given by
ρ^-_j(,t)=∑_ih_ij()ξ_i(,t).
By Lemma <ref> there is an L≥ T_1+1 such that for all
(,t)∈33(,) with ()≥ L one has
|t|≤min(-τ^-(),())-1,
which implies that s_j(,t)=_j(,t). Given such a (,t),
the triple
(H,v_1,v_2):=(h(),-t-τ_a(),t-τ_a())∈3××
satisfies the equation
(1-b(v_j))HA^-_j+b(v_j)A^+_j=0.
for j=1,2. However, as observed in the proof of
Proposition <ref>, these equations have no solution
for generic matrices A^±.□
We will now prove Proposition <ref> in degree 4
by counting the number of ends of 33(,) modulo 2.
Ends of 33(,): Let (_n,t_n) be a sequence in
33(,). After passing to a subsequence we may assume that
the following hold:
(i) The sequences ^*_-t_n(_n) and ^*_t_n(_n)
converge over compact subsets of .
(ii) The sequence ^*_τ^-(_n)(_n) converges over compacta
to some ∈ M(,β), where β∈(Y).
(iii) The sequences t_n and τ^-(_n) converge in
[-∞,∞].
Suppose (_n,t_n) does not converge in 33(,).
Case 1: β=. We show this cannot happen. First observe that
the sequence (_n)
converges in . Since Lemma <ref> provides an upper bound
on τ^-(_n) and a lower bound on (_n) it follows that
both sequences must be bounded. Applying the same lemma again we see that
|t_n| is bounded. But then assumptions (ii) and (iii) imply that
(_n,t_n) converges in 33(,), which we assumed was not the
case.
Case 2: β irreducible, M(,β)≤4. Then
(_n)=0 for n≫0. As in the proof of
Proposition <ref> we find that the corresponding number of ends
of 33(,) is ψ d,.
Case 3: β irreducible, M(,β)=5. Then
(_n)=τ^+_β(_n) for n≫0, and
(_n)→τ_d().
As in Case 1 we see that the sequences τ^-(_n)
and t_n must be bounded, hence they both converge in by assumption (iii).
From (ii) we deduce that _n converges over compacta to some
'∈ M(,β) (related to by a translation).
By Lemma <ref> we have ξ_j(_n,t)= j(_n,t) for
n≫0 and any t, so
_j(_n,t)→_j(',t).
Setting t':=lim t_n we conclude that
(',t')∈33(,β). The corresponding number of ends of
33(,) is dψ,.□
§.§ Calculation of ψ
There are constants ^±∈/2 independent of Y and satisfying
^++^-=1 such that if ψ is defined in terms of “generic”
sections s_1,s_2
that are sufficiently close and e is the sign of
ν(A^+)-ν(A^-) then there is a homomorphism
Ξ:C^*(Y)→ C^*+4(Y) such that
ψ=v_3v_2+^e'+dΞ+Ξ d.
To be precise, if s'∈() satisfies Property 4 and
⊂() is any sufficiently large finite-dimensional linear
subspace then for any sufficiently small generic
(_0,_1)∈×
the conclusion of the proposition holds with s_j=s'+_j.
The above proposition completes the proof of Proposition <ref>
except for the order of v_2,v_3, which is insignificant in view of
Proposition <ref>. (The order could be reversed by a small
change in the proof given below.)
Let ,β∈^*(Y) with (,β)=5.
Part (I) Suppose ()≢48.
For -3≤ y≤3 we define a section χ_y of by
6χ_y:=(3-y)s_1+(3+y)s_2.
In particular,
χ_-3=s_1, χ_3=s_2.
Let
:={z∈:|(z)|≤3, |z|≥1}
and let ':=/±1 be the surface-with-boundary obtained by identifying
each z∈ with -z. The image of a point z∈ in ' will be
denoted by [z].
Let ξ̅∈(), and let ξ̂ be a section of the bundle
× S^1 over ^*(Y[0])× S^1 satisfying
ξ̂(,-z)=-ξ̂(,z),
so that ξ̂∈_a() in the notation of
Section <ref>. We then define a section ξ of the
bundle × over ^*(Y[0])× as follows. Let
b_1(z):=b(|z|-2).
For ∈^*(Y[0]) and z=(x,y)∈ let
ξ(,z):=(1-b_1(z))·(ξ̅()+ξ̂(,z/|z|))
+b_1(z)χ_y().
Let f:→ be the smooth function given by
f(z):=b_1(z)(z).
Note that f(z)=(z) for |z|≥3, and f(z)=0 for |z|=1.
Moreover, f(-z)=-f(z).
(i) Let =(,β)
be the subspace of M(,β)×'
consisting of those points (,[z]) such that
ξ([f(z)],z)=0, ξ([f(-z)],-z)=0.
(ii) Let =(,β) be the subspace of
M(,β)× S^1×[0,∞)
consisting of those points (,z^2,r) such that z∈ S^1 and
ξ̂([-r],z)=0, ξ̅([r])=0.
If ξ̅ is “generic” and ξ̂ is given by a “generic” section
of ⊗ (see Lemma <ref>)
then will be a smooth
1–manifold-with-boundary. Now choose a section s'∈() satisfying
Property 4. If is a sufficiently large finite-dimensional
linear subspace of () and (_0,_1) a generic element of
× then taking s_j=s'+_j, j=1,2 the space will be
a smooth 1–manifold-with-boundary. (The reason that transversality can be
achieved over the boundary component of M(,β)×' given by
|z|=1 is essentially that if V is any real vector space then
every element of V× V can be written as (a+b,a-b) for suitable
a,b∈ V.) If in addition _0,_1 are
sufficiently small then for -3≤ y≤3 the section χ_y
will satisfy Property 4 and define the same cup product
v_3:C^*(Y)→ C^*+3(Y) as s', by Lemma <ref>.
The part of the boundary of given by |z|=1 can be identified with
the boundary of (defined by r=0). To see this, let
(,z)∈ M(,β)× and set _0:=[0]. Then
(,[z])∈ if and only if
ξ̅(_0)+ξ̂(_0,z)=0=ξ̅(_0)-ξ̂(_0,z),
which in turn is equivalent to (,z^2,0)∈.
This allows us to define a topological 1–manifold-with-boundary
=(,β) as a quotient
of the disjoint union ∐ by identifying each boundary point
of with the corresponding boundary point of .
The proposition will be proved by counting the ends and boundary points
of modulo 2. Before doing this, we pause to define the homomorphism
Ξ. Let ',β'∈^*(Y) with (',β')=4.
Replacing (,β) by (',β') in Definition <ref>
yields zero-dimensional manifolds _j(',β'), j=1,2.
The argument that we will give below to determine the ends of _j(,β)
can also be applied to show that _j(',β') is compact.
Granted this, we define Ξ:=Ξ_1+Ξ_2, where Ξ_j has matrix coefficient
Ξ_j',β':=#_j(',β').
Ends of (,β): Let (_n,[z_n]) be a sequence in
(,β), where z_n=(x_n,y_n)∈^2. After passing to a subsequence
we may assume that
description
(i) The sequence ^*_-x_n(_n)
converges over compact subsets of to some
^-∈ M(^-,β^-).
(ii) The sequence ^*_x_n(_n) converges over compact subsets of
to some ^+∈ M(^+,β^+).
(iii) The sequence (x_n,y_n) converges in
[-∞,∞]×[-3,3] to some point (x,y).
Suppose (_n,[z_n]) does not converge in (,β).
Case 1: x finite. Then (^+,β^+)=4 and either
^+= or β^+=β. The corresponding number of ends of
(,β) is (dΞ_1+Ξ_1d),β modulo 2.
Case 2: x=±∞. Then for n≫0 one has
0=ξ([± x_n],± z_n)→χ_± y(^±[0]).
Hence χ_± y(^±[0])=0. Since
χ_± y satisfy Property 4 we must have (^±,β^±)≥3,
so
5=(,β)≥(^-,β^-)+(^+,β^+)≥6.
This contradiction shows that there are no ends in the case x=±∞.
Ends of (,β): We argue as in part (I) of the proof of
Proposition <ref>. Let (_n,z_n^2,r_n) be a sequence in
(,β).
After passing to a subsequence we may assume that r_n converges in
[0,∞] to some point r. Then the number of ends modulo 2
corresponding to r<∞ is (dΞ_2+Ξ_2d),β.
Using Proposition <ref> and Lemma <ref> we see that
the number of ends corresponding to r=∞ is v_3v_2,β.
Boundary points of (,β): These are the points
(,[z]) in M(,β)×' where (z)=3 and
0=ξ([x],z)=s_2([x]), 0=ξ([-x],-z)=s_1([-x]).
The number of such points is by definition ψ,β.
Since the number of ends plus the number of boundary points of must
be zero modulo 2 we obtain the equation eqn:psi-v3v2 in the
case ()≢48.
Part (II) Suppose ()≡48. We define maps
V^±:[-3,3]→^3 by
6V^±(y):=(3-y)A^±_1
+(3+y)A^±_2.
Choose generic elements
L̅^±∈^3 and functions L̂^±:S^1→^3 satisfying
L̂^±(-z)=-L̂^±(z) for z∈ S^1. We define maps
L^±:→^3 by
L^±(z):=(1-b_1(z))· (L̅^±+L̂^±(z/|z|))+
b_1(z)· V^±((z)),
where the function b_1 is as in eqn:b1def. Let (,β)
be the vector bundle over × obtained by pulling back
the bundle →^*(Y[0]) by the map
×→^*(Y[0]), (,z)↦[f(z)].
Let c and g be the functions defined in
eqn:c23def and eqn:gomt, respectively.
We define sections ,s of (,β) by
(,z):=(1-c(,f(z)))∑_i=1^3L^-_i(z)ρ^-_i(,f(z))
+c(,f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)),
s(,z):=(1-g(,f(z)))·ξ([f(z)],z)+g(,f(z))·(,z).
Let =(,β) be the subspace of ×' consisting
of those points (,[z]) such that
s(,z)=0, s(,-z)=0.
We define sections ,s̅ of the bundle (,β)
over × by
(,r):=(1-c(,r))∑_i=1^3L̅^-_iρ^-_i(,r)
+c(,r)∑_i=1^3L̅^+_iρ^+_i(,r),
s̅(,r):=(1-g(,r))·ξ̅([r])+g(,r)·(,r).
Let (,β) be the vector bundle over
× S^1× obtained by pulling back the bundle
by the map
× S^1×→ Y[0], (,z,r)↦[r].
We define sections ,ŝ of (,β) by
(,z,r):=(1-c(,r))∑_i=1^3L̂^-_i(z)ρ^-_i(,r)
+c(,r)∑_i=1^3L̂^+_i(z)ρ^+_i(,r),
ŝ(,z,r):=(1-g(,r))·ξ̂([r],z)+g(,r)·(,z).
Note that ŝ(,-z,r)=-ŝ(,z,r).
Let =(,β) be the subspace of
× S^1×[0,∞)
consisting of those points (,z^2,r) such that z∈ S^1 and
ŝ(,z,-r)=0, s̅(,r)=0.
By inspection of the formulas involved one finds that for |z|=1 one has
(,0)+(,z,0) =(,z),
s̅(,0)+ŝ(,z,0) =s(,z).
Therefore, the part of the boundary of given by |z|=1 can be
identified with the boundary of (defined by r=0). By gluing
and correspondingly
we obtain a topological 1–manifold-with-boundary .
There is a constant C_0<∞ such that for all
(,[z])∈ one has
|f(z)|≤min(-τ^-(),τ^+())+C_0.
The proof is similar to that of Lemma <ref>.
We must provide upper bounds on both quantities |f(z)|+τ^-() and
|f(z)|-τ^+() for (,[z])∈.
The proof is essentially the same in both cases, so we will only spell it out
in the second case. Suppose, for contradiction, that (_n,[z_n])
is a sequence in with |f(z)|-τ^+(_n)→∞.
By perhaps replacing z_n by -z_n we can arrange that
(z_n)≥0. Then f(z_n)≥0 as well,
and g(_n,f(z_n))=0 for n≫0. Let z_n=(x_n,y_n).
After passing to a subsequence we may assume
that z_n converges in [0,∞]×[-3,3] to some
point (x,y).
Case 1: x finite. Let z:=(x,y)∈. The sequence _n
converges to over compact subsets of , so
for large n we have
0=ξ(_n[f(z_n)],z_n)→ξ(,z).
However, the space of all w∈ for which ξ(,w)=0 has
expected dimension 2-3=-1, so this space is empty for “generic”
sections s_1,s_2,ξ̅,ξ̂. Hence, x cannot be finite.
Case 2: x=∞. Then f(z_n)=x_n for large n.
Now, ^*_x_n_n converges over compacta
to , so for large n we have
0=ξ(_n[x_n],z_n)=χ_y_n(_n[x_n])→χ_y().
However, the space of all t∈[-3,3] for which χ_t()=0 has
expected dimension 1-3=-2, so this space is empty for “generic”
sections s_1,s_2. Hence, x≠∞.
This contradiction proves the lemma.□
In the proof of Lemma <ref> below
we will encounter certain limits
associated to sequences in with chain-limits in
(,θ)×(θ,β). These limits lie in
cut down moduli spaces analogous to those introduced in
Definitions <ref> and <ref>,
with M(,θ) or M(θ,β)
in place of . We now define these cut-down spaces in the case of
M(θ,β) and observe that they are “generically” empty.
The case of M(,θ) is similar.
For any (,z)∈× let
s(,z):= (1-b(τ^+()-f(z)))·ξ([f(z)],z)
+b(τ^+()-f(z))∑_i=1^3L^+_i(z)ρ^+_i(,f(z)).
Let (θ,β) be the subspace of M(θ,β)×'
consisting
of those points (,[z]) such that
s(,z)=0, s(,-z)=0.
Then (θ,β) has expected dimension 3-6=-3 and is
empty for “generic” sections s_1,s_2,ξ̅,ξ̂ and generic choices of
A^+,L̅^+,L̂^+.
Let (θ,β) be the subspace of
M(θ,β)×[-3,3] consisting of those points (,y) such that
(1-b(τ^+()))·χ_y([0])
+b(τ^+())∑_iV^+_i(y)ρ^+_i(,0)=0.
We observe that the space (θ,β) (a parametrized version of
the space M_3(θ,β) defined in
Subsection <ref>)
has expected dimension 2-3=-1 and is
empty for “generic” sections s_1,s_2 and generic matrix
A^+.
For any constant C_1<∞ there is constant L>0 such that for
all (,[z])∈ satisfying ()≥ L one has
|f(z)|≤min(-τ^-(),τ^+())-C_1.
The proof is similar to that of Lemma <ref>. If the lemma
did not hold there would be a sequence (_n,[z_n]) in such that
(_n)→∞ and one of the following two conditions hold:
(i) |f(z_n)|>-τ^-(_n)-C_1 for all n,
(ii) |f(z_n)|>τ^+(_n)-C_1 for all n.
Suppose (ii) holds, the other case being similar.
By replacing z_n by -z_n, if necessary, we can arrange that (z_n)≥0.
From Lemma <ref> we deduce that the sequence
f(z_n)-τ^+(_n) is bounded, whereas
f(z_n)-τ^-(_n)→∞.
For large n we therefore have
c(_n,f(z_n))=1, g(_n,f(z_n))=b(τ^+(_n)-f(z_n)).
Let z_n=(x_n,y_n).
After passing to a subsequence we may assume that
* '_n:=^*_x_n_n converges over compact subsets of to
some '∈ M(θ,β);
* z_n converges in [0,∞]×[-3,3] to some point z=(x,y).
Case 1: x finite. Then _n converges over compacta to some
∈, and
0=s(_n,z_n)→ s(,z).
Because the sequence z_n is bounded, we also have c(_n,f(-z_n))=1 for
large n, so
0=s(_n,-z_n)→ s(,-z).
But then (,[z]) belongs to (θ,β), contradicting the fact
that that space is empty.
Case 2: x=∞. Since
τ^+('_n)=τ^+(_n)-x_n,
we obtain
g(_n,f(z_n))=b(τ^+('_n)) for n≫0.
Therefore,
0=s(_n,z_n)→
(1-b(τ^+(')))·χ_y('[0])
+b(τ^+('))∑_iV^+_i(y)ρ^+_i(',0).
But this means that (',y) belongs to (θ,β), which is
empty.
This contradiction proves the lemma.□
There is a constant C_0<∞ such that for all
(,z^2,r)∈ one has
r≤min(-τ^-(),τ^+())+C_0.
This is similar to the proof of Lemma <ref>.□
For any constant C_1<∞ there is constant L>0 such that for
all (,z^2,r)∈ satisfying ()≥ L one has
r≤min(-τ^-(),τ^+())-C_1.
This is similar to the proof of Lemma <ref>.□
Choose L≥2 such that the conclusions of Lemmas <ref>
and <ref> hold with C_1=1. For all (,[z])∈
with ()≥ L we then have
s(,z)=(,z),
and for all (,z^2,r)∈ with ()≥ L we have
ŝ(,z,-r)=(,z,-r), s̅(,r)=(,r).
From Lemma <ref> it follows that
L is a regular value of the real functions on
and defined by . Therefore,
:={(,[z])∈()≤ L},
:={(,z^2,r)∈()≤ L}
are smooth 1–manifolds-with-boundary, and
^L:=∪
is a topological 1–manifold-with-boundary. (As before we identify the
part of given by |z|=1 with the part of given by r=0.)
Ends of ^L: From Lemma <ref> we deduce that every
sequence (_n,[z_n]) in which satisfies (_n)>0 has a
convergent subsequence. Similarly, it follows from Lemma <ref>
that every sequence (_n,z_n^2,r_n) in with (_n)>0 has a
convergent subsequence. (See the proof of Proposition <ref>,
“Ends of 23(,β)”, Case 2.) Therefore, all ends of ^L
are associated with sequences on which =0.
The number of such ends,
counted modulo 2, is given by the same formula as in Part (I), namely
(v_3v_2+dΞ+Ξ d),β.
Boundary points of ^L: The boundary of ^L decomposes as
^L=∪'∪,
where and are the parts of the boundaries of
and , respectively, given by ()=L, and '
is the part of the boundary of given by (z)=±3.
By choice of matrices A^± there are no points (,t)∈33(,β)
with ()≥ L, hence W'_=33(,β) and
#'=ψ,β.
By Lemma <ref> we can identify
=(,θ)×(θ,β)×, =(,θ)×(θ,β)×,
where is the set of points (H,τ,[z]) in
3××' satisfying
(1-b(f(z)-τ))HL^-(z)+b(f(z)-τ)L^+(z)=0,
(1-b(f(-z)-τ))HL^-(-z)+b(f(-z)-τ)L^+(-z)=0,
whereas is the set of points (H,τ,z^2,r) in
3×× S^1×[0,∞) satisfying
(1-b(-r-τ))HL̂^-(z)+b(-r-τ)L̂^+(z)=0,
(1-b(r-τ))HL̅^-+b(r-τ)L̅^+=0.
Here, (H,τ) corresponds to (h(),τ_a()).
It follows from these descriptions that
#(∪)=',β,
where =#(∪)∈/2 is independent of the manifold Y.
To prove the theorem it only remains to understand the dependence of
on the pair of matrices A=(A^+,A^-). To emphasize the dependence on A
we write =(A) and =(A). The space is
independent of A. The part of corresponding to |z|=1 is also
independent of A and is empty for generic L̅,L̂ for dimensional
reasons.
Let P denote the space of all pairs (B^+,B^-) of 3×2 real
matrices with non-zero columns B^±_j. Let
P^±:={(B^+,B^-)∈ P±(ν(B^+)-ν(B^-))>0},
where ν is as in eqn:nuB. Note that each of P^+,P^- is homotopy
equivalent to S^2× S^2 and therefore path connected.
For any smooth path C:[0,1]→ P we define
:=⋃_0≤ t≤1(C(t))×{t}⊂3××'×[0,1].
As observed above there are no points (H,τ,[z],t) in with |z|=1.
Since b_1(z)>0 for |z|>1 we can therefore make regular
(i.e. transversely cut out) by varying C alone. If is regular
then it is a compact 1–manifold-with-boundary, and
=(C(0))∪(C(1))∪ X_C,
where X_C is the set of points (H,τ,x,t) in
3×××[0,1] satisfying the two equations
(1-b(x-τ))HC^-_1(t)+b(x-τ)C^+_1(t)=0,
(1-b(-x-τ))HC^-_2(t)+b(-x-τ)C^+_2(t)=0.
It follows that
(C(0))+(C(1))=#X_C.
If A,B∈ P^+ then we can find a path C:[0,1]→ P^+ from A to B.
Then X_C is empty. By perturbing C(t) for 0<t<1 we can arrange that
is regular. This yields (A)=(B). The same holds if
A,B∈ P^-.
Let ^± be the value that takes on P^±. To compute
^++^-, let (e_1,e_2,e_3) be the standard basis for ^3 and define
C:[0,1]→ P by
-C^+_1(t) =C^-_1(t):=e_1,
-C^+_2(t) :=(1-t)e_1+te_2,
C^-_2(t) :=(1-t)e_2+te_1.
Then C(0)∈ P^+ and C(1)∈ P^-. Moreover,
X_C consists of the single point
(I,0,0,1/2), and this point is regular. (Here I is the identity matrix.)
If we perturb C a little in order
to make regular then X_C will still consist of a single, regular point.
We conclude that
^++^-=#X_C=1.
This completes the proof of the proposition.□
§ INSTANTONS REDUCIBLE OVER OPEN SUBSETS
The following proposition is implicit in <cit.> but we include a
proof for completeness.
Let X be an oriented connected Riemannian 4–manifold and E→ X
an oriented Euclidean 3–plane bundle. Suppose A is a non-flat
ASD connection in E which restricts to a reducible connection over some
non-empty open set in X. Then there exists a rank 1 subbundle of E
which is preserved by A.
This is a simple consequence of the unique continuation argument in the
proof of <cit.>. The proof has two parts: local existence
and local uniqueness.
(i) Local existence. By unique continuation, every point in X has a connected
open neighbourhood V such that A|_V is reducible, i.e. there exists
a non-trivial automorphism u of E|_V such that ∇_Au=0. The
1–eigenspace of u is then a line bundle preserved by A.
(ii) Local uniqueness. Because A is not flat, it follows from
unique continuation that the set of points in X where F_A=0 has empty
interior. Now let V be any non-empty connected open set in X and suppose
A preserves a rank 1 subbundle ⊂ E|_V. We show that is
uniquely determined. Let x∈ V be a point where F_A≠0. By the
holonomy description of curvature
(see <cit.>) we can find a loop
in V based at x such that the holonomy _(A) of A along
is close to but different from the identity. The 1–eigenspace of
_(A) is then 1–dimensional and must agree with the fibre _x.
If x' is an arbitrary point in V then there is a similar description of
_x' in terms of the holonomy of A along a loop obtained by
conjugating with a path in V from x to x'.
□
§ UNIQUE CONTINUATION ON A CYLINDER
As in Subsection <ref> let
Y be a closed oriented connected 3-manifold and P→ Y an
SO(3) bundle. If Y is not an integral homology sphere then we assume
P is admissible.
Let J⊂ be an open interval.
We consider the perturbed ASD equation for connections
in the bundle J× P→ J× Y obtained by
adding a holonomy perturbation to the Chern-Simons function. For a connection
A in temporal gauge the equation takes the form
∂A_t/∂t=-*F(A_t)+V(A_t),
where A_t is the restriction of A to the slice {t}× P and
V is the formal gradient of the perturbation.
The following proposition is probably well known among experts, but we
include a proof for completeness.
Suppose A,A' are perturbed ASD connections in the bundle
J× P→ J× Y. If A and A' are in temporal gauge and
A_T=A'_T for some T∈ J, then A=A'.
We will apply (an adaption of)
the abstract
unique continuation theorem in <cit.>. To this end, fix an arbitrary
connection B in P and let
c_t=A_t-A'_t, a_t=A_t-B, a'_t=A'_t-B.
We have
F(A_t)=F(B)+d_Ba_t+a_t∧ a_t
and similarly for A'_t, so
∂c_t/∂t+*d_Bc_t=-*(a_t∧ c_t+c_t∧ a'_t)
+V(A_t)-V(A'_t).
By <cit.> we have
‖V(A_t)-V(A'_t)‖_L^2≤ const·‖c_t‖_L^2,
hence
‖∂c_t/∂t+*d_Bc_t‖_L^2≤ϕ(t)‖c_t‖_L^2
where
ϕ(t)= const·(‖a_t‖_∞+‖a'_t‖_∞+1).
Because *d_B is a formally self-adjoint operator on 1–forms on Y and
ϕ is locally square integrable (in fact, continuous), we deduce
from <cit.> that for any
compact subinterval [t_0,t_1] of J
there are constants C_0,C_1 such that for t_0≤ t≤ t_1 one has
‖c_t‖_L^2≥‖c_t_0‖_L^2·exp(C_0t+C_1).
(<cit.> considers the case when c_t is defined for 0≤ t<∞, but
the approach works equally well in our case.)
Taking t_1=T we obtain c_t=0 for
t<T. Replacing c_t by c_-t we get c_t=0 for
t>T as well.□
10
AS1
M. F. Atiyah and I. M. Singer.
The index of elliptic operators: I.
Ann. of Math., 87:484–530, 1968.
BD1
P. J. Braam and S. K. Donaldson.
Floer's work on instanton homology, knots and surgery.
In H. Hofer, C. H. Taubes, A. Weinstein, and E. Zehnder, editors,
The Floer Memorial Volume, pages 195–256. Birkhäuser, 1995.
DHST1
I. Dai, J. Hom, M. Stoffregen, and L. Truong.
An infinite-rank summand of the homology cobordism group.
arXiv:1810.06145.
D1
S. K. Donaldson.
An application of gauge theory to four dimensional topology.
J. Diff. Geom., 18:279–315, 1983.
D2
S. K. Donaldson.
The orientation of Yang–Mills moduli spaces and 4–manifold
topology.
J. Diff. Geom., 26:397–428, 1987.
D5
S. K. Donaldson.
Floer Homology Groups in Yang–Mills Theory.
Cambridge University Press, 2002.
DK
S. K. Donaldson and P. B. Kronheimer.
The Geometry of Four-Manifolds.
Oxford University Press, 1990.
Miller-Eismeier1
M. Miller Eismeier.
Equivariant instanton homology.
arXiv:1907.01091.
FS2
R. Fintushel and R. J. Stern.
Definite 4–manifolds.
J. Diff. Geom., 28:133–141, 1988.
F1
A. Floer.
An instanton invariant for 3–manifolds.
Comm. Math. Phys., 118:215–240, 1988.
Fr0
K. A. Frøyshov.
On Floer homology and 4–manifolds with boundary, 1995.
D.Phil. thesis, University of Oxford.
Fr1
K. A. Frøyshov.
The Seiberg–Witten equations and four-manifolds with boundary.
Math. Res. Lett., 3:373–390, 1996.
Fr3
K. A. Frøyshov.
Equivariant aspects of Yang–Mills Floer theory.
Topology, 41:525–552, 2002.
Fr7
K. A. Frøyshov.
An inequality for the h–invariant in instanton Floer theory.
Topology, 43:407–432, 2004.
Fr13
K. A. Frøyshov.
Compactness and gluing theory for monopoles, volume 15 of
Geometry & Topology Monographs.
Geometry & Topology Publications, 2008.
Fr4
K. A. Frøyshov.
Monopole Floer homology for rational homology 3–spheres.
Duke Math. J., 155:519–576, 2010.
Fr14
K. A. Frøyshov.
4–manifolds and intersection forms with local coefficients.
J. Diff. Geom., 91:233–259, 2012.
Hirsch
M. W. Hirsch.
Differential Topology.
Springer, 1976.
HM
D. Husemoller and J. Milnor.
Symmetric Bilinear Forms.
Springer-Verlag, 1973.
Kotsch1
D. Kotschick.
SO(3)–invariants for 4-manifolds with b_2^+=1.
Proc. London Math. Soc., 63(3):426–448, 1991.
KM3
P. B. Kronheimer and T. S. Mrowka.
Embedded surfaces and the structure of Donaldson's polynomial
invariants.
J. Diff. Geom., 41:573–734, 1995.
KM5
P. B. Kronheimer and T. S. Mrowka.
Monopoles and Three-Manifolds.
Cambridge University Press, 2007.
KM7
P. B. Kronheimer and T. S. Mrowka.
Knot homology groups from instantons.
J. Topology, 4:835–918, 2011.
Jeffrey-Lee-Manifolds-DG
Jeffrey M. Lee.
Manifolds and Differential Geometry.
AMS, 2009.
NST1
Y. Nozaki, K. Sato, and M. Taniguchi.
Filtered instanton Floer homology and the homology cobordism group.
arXiv:1905.04001.
Ogawa
H. Ogawa.
Lower bounds for solutions of differential inequalities in Hilbert
space.
Proc. AMS, 16:1241–1243, 1965.
OS6
P. S. Ozsváth and Z. Szabó.
On the Floer homology of plumbed three-manifolds.
Geometry & Topology, 7:185–224, 2003.
Scaduto2
Ch. W. Scaduto.
On definite lattices bounded by a homology 3–sphere and
Yang-Mills instanton Floer theory.
arXiv:1805.07875.
Scaduto1
Ch. W. Scaduto.
Instantons and odd Khovanov homology.
J. Topology, 8(3):744–810, 2015.
University of Oslo, Norway
Email: [email protected]
Privacy-Preserving Graph Machine Learning
from Data to Computation: A Survey
Dongqi Fu^†, Wenxuan Bao^†, Ross Maciejewski^, Hanghang Tong^†, Jingrui He^† (the first two authors contributed equally to this research)
^†University of Illinois Urbana-Champaign
^Arizona State University
[email protected], [email protected], [email protected], [email protected], [email protected]
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================
In graph machine learning, data collection, sharing, and analysis often involve multiple parties, each of which may require varying levels of data security and privacy. To this end, preserving privacy is of great importance in protecting sensitive information.
In the era of big data, the relationships among data entities have become unprecedentedly complex, and more applications utilize advanced data structures (i.e., graphs) that can support network structures and relevant attribute information. To date, many graph-based AI models have been proposed (e.g., graph neural networks) for various domain tasks, like computer vision and natural language processing.
In this paper, we focus on reviewing privacy-preserving techniques of graph machine learning. We systematically review related works from the data to the computational aspects.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information (e.g., graph model parameters) to realize the optimization-based computation when data sharing among multiple parties is risky or impossible.
In addition to discussing relevant theoretical methodology and software tools, we also discuss current challenges and highlight several possible future research opportunities for privacy-preserving graph machine learning. Finally, we envision a unified and comprehensive secure graph machine learning system.
§ INTRODUCTION
According to the recent report from the United Nations [<https://press.un.org/en/2022/sc15140.doc.htm>], strengthening multilateralism is indispensable to solve the unprecedented challenges in critical areas, such as hunger crisis, misinformation, personal identity disclosure, hate speech, targeted violence, human trafficking, etc.
Addressing these problems requires collaborative efforts from governments, industry, academia, and individuals. In particular, effective and efficient data collection, sharing, and analysis are at the core of many decision-making processes, during which preserving privacy is an important topic.
Due to the distributed, sensitive, and private nature of the large volume of involved data (e.g., personally identifiable information, images, and video from surveillance cameras or body cameras), it is of great importance to make use of the data while avoiding the sharing and use of sensitive information.
On the other side, in the era of big data, the relationships among entities have become remarkably complicated. The graph, as a relational data structure, attracts much industrial and research interest for its capacity to carry complex structural and attribute information. For example, with the development of graph neural networks, many application domains have obtained non-trivial improvements, such as computer vision <cit.>, natural language processing <cit.>, recommender systems <cit.>, drug discovery <cit.>, fraud detection <cit.>, etc.
Within the trend of applying graph machine learning methods to systematically address problems in various application domains, protecting privacy along the way cannot be neglected <cit.>. To this end, we consider two complementary strategies in this survey, namely, (1) sharing faithfully generated graph data instead of the actual sensitive graph data, and (2) enabling multi-party computation without graph data sharing. Inspired by the above discussion, we focus on introducing two fundamental aspects of privacy-preserving techniques on graphs, i.e., privacy-preserving graph data and graph data privacy-preserving computation.
For the data aspect, privacy-preserving graph data as shown in Figure <ref>, we focus on the scenario where publishing or sharing the graph data is inevitable: how can we protect (e.g., mask, hide, or perturb) sensitive information in the original data to make sure that the published or shared data can withstand external attackers (e.g., node identity disclosure and link re-identification)? Hence, in Section 2, we first systematically introduce various attackers [Throughout the paper, we use “attackers” to denote the attacks on graphs. There are also attackers that are designed not for graphs but for Euclidean data, for example. Those are not in the scope of this paper.] (Subsection 2.1) and the background knowledge they need to execute attacks (Subsection 2.2). Then, we introduce the corresponding protection mechanisms and explain why they can address the challenges posed by attackers (Subsection 2.3). Also, we share some privacy protection mechanisms for graph statistical properties (other than the graph data itself) (Subsection 2.4). After that, we list several possible challenges for privacy-preserving graph data generation when facing complex structures and attributes, e.g., time-evolving graphs and heterogeneous information graphs (Subsection 2.5).
For the computation aspect, graph data privacy-preserving computation, we focus on the multi-party computation scenario where the input data is structured, distributed over clients, and exclusively stored (i.e., not shareable among others). Here, federated learning can be a quick-win solution. However, relational data structures (i.e., graphs) bring a significant challenge (i.e., non-IIDness) to the traditional federated learning setting. This means that the data from intra-clients and/or inter-clients can violate the independent and identically distributed assumption (i.e., the i.i.d. assumption) due to the presence of complex graph features, whose data complexity hinders many existing federated learning frameworks from achieving optimal performance. Motivated by this observation, in Section 3, we first discuss the adaptation of federated learning on graphs and the corresponding challenge of non-IIDness brought by graphs (Subsection 3.1); then we introduce how nascent graph federated learning research addresses the non-IIDness issues at three levels, i.e., graph-level federated learning (Subsection 3.2), subgraph-level (Subsection 3.3), and node-level (Subsection 3.4). Then, we list several challenges and promising research directions, including model heterogeneity and avoiding cross-client transmission (Subsection 3.5).
After introducing privacy-preserving graph data and graph data privacy-preserving computation with their own methodologies, advances, software tools, limitations, and future directions, we envision in Section 4 the necessity of combining these two directions into privacy-preserving graph data privacy-preserving computation to meet any possibility of leaking sensitive information, and to further achieve a comprehensive, well-defined, and end-to-end graph machine learning system. Finally, the paper is concluded in Section 5.
Relation with Previous Studies.
For the privacy-preserving graph data, we systematically review the privacy attackers and the corresponding privacy protection techniques, balancing classic methods <cit.> and emerging solutions <cit.>, such as topology perturbation methods, deep generation methods, etc. Beyond that, we extend the privacy-preserving techniques review from the data level to the computation level, i.e., graph data privacy-preserving computation within the federated learning framework. Most of the existing federated learning reviews do not primarily concentrate on graph federated learning <cit.>. Recently, two survey papers <cit.> introduced two problem settings in graph federated learning and their corresponding techniques; they exclusively focus on graph federated learning solutions and ignore the connections to traditional federated learning. Thus, we start from various application scenarios and provide a comprehensive classification and exposition of graph federated learning. While our focus primarily revolves around graph federated learning, we also highlight its connections and distinctions from traditional federated learning, aiming to present the big picture of this field. In addition to reviewing the two aspects (i.e., privacy-preserving graph data and graph data privacy-preserving computation), we also discuss the necessity and possibility of combining these two directions and propose several promising future research directions.
§ PRIVACY-PRESERVING GRAPH DATA
When producing privacy-preserving graph data to publish or share, the ultimate goal is to successfully protect the published graph data against various attacks from adversaries or attackers. To this end, we first introduce the different kinds of attackers, such as those targeting node identity disclosure or sensitive link re-identification, in Subsection 2.1, and the necessary background knowledge in Subsection 2.2. Then, we introduce the corresponding privacy-preserving mechanisms in Subsection 2.3: several of them are deliberately designed to defend against certain attackers, while others are general protections not aimed at specific attacks. The taxonomy is shown in Figure <ref>.
§.§ Privacy Attackers on Graphs
According to <cit.>, attackers aim to (1) learn whether edges exist between specific target pairs of nodes and (2) reveal the true identities of targeted users, even from just a single anonymized copy of the graph, with a surprisingly small investment of effort.
§.§.§ Category of Attackers
Attackers can be classified into active attackers and passive attackers <cit.>.
The first category is active attackers, where the core idea is that the attackers actively plant certain structures into the graph before it is published. Then, the attackers can identify victims in the published graph by locating the planted structures. For example <cit.>,
the attackers create a subgraph H containing k nodes and then use H to connect b target nodes in the original graph G (subgraph H should ideally be unique and recoverable in the published graph). After the original graph G is privacy-preserved (e.g., by masking and disturbing connections) and published as G', the attackers try to find H in G' and then determine those b nodes.
Active attackers usually need to access the original graph beforehand and then take corresponding active actions, like creating new nodes, linking new edges, and planting subgraphs. The planting and recovery operations are usually computationally costly <cit.>. Therefore, another direction points to passive attacks and defense.
Passive attackers are based on the fact (or the assumption) that most entities (e.g., nodes and edges) in graphs usually belong to a unique, small, identifiable subgraph. Different from active attackers, passive ones do not need to create new nodes and edges in the original graph but mostly rely on the observation of the published graph to identify victims. In the initial proposal of passive attacks <cit.>, a passive attacker (e.g., a node in a social network) needs to collude with (k-1) other nodes on the original graph, and the coalition needs to know certain external information (e.g., their 1-hop neighbors' names in the social network), such that they can reconnect on the published graph to identify the victims. Here, we expand the scope of passive attacks to include attackers whose core is observation plus a small amount of external information. For example, in <cit.>, an attacker knows external background information like “Greg is connected to at least two nodes, each with degree 2” and tries to observe the plausible candidates for Greg in the published social network.
§.§.§ Goal of Attackers
The ultimate goals of most graph privacy attackers can be roughly divided into disclosing the node identity (e.g., name, DOB, and SSN in the social network) and the link existence (e.g., sensitive connections in the social network) <cit.>. Next, we formally introduce the general definition of these two goals.
Node Identity Disclosure.
The node identity disclosure problem often arises from the scenario where the attackers aim to identify a target node's identity in the published graph (which has usually been anonymized already). For example, in a published social network with usernames already masked, node identity disclosure aims to identify which node is Greg <cit.>. To be more specific, identity disclosure can be further divided into node existence disclosure (i.e., whether a target node exists or not in a published graph) and node property disclosure (i.e., partial features of a target node are disclosed, like its degree, distance to the center, or even sensitive labels) <cit.>.
Link Re-Identification.
In a given graph, edges may be of different types and can be classified as either sensitive or not. Some links (i.e., edges) are safe to release to the public, such as classmate or friendship relations, while some links are sensitive and should remain private and unpublished, like personal disease records held by hospitals. The problem of link re-identification is defined as inferring or predicting sensitive relationships from anonymized graphs <cit.>. Briefly speaking, the adversary (or attacker) achieves the goal when it is able to correctly predict a sensitive link between two nodes, for example, when the attacker can figure out whether there is a transaction between two users given the properties of the released financial graph. Also, there are some detailed categorizations of link re-identification beyond link existence, such as the link weight and link type or labels <cit.>.
Compared with active attackers, passive attackers are typically efficient for adversaries to execute and do not need to interact much with the original graph beforehand. Thus, within the scope of passive attackers, achieving those attacking goals (node identity disclosure or link re-identification) relies on the observation of the published graph and certain external background knowledge to further identify victims.[Node identity disclosure and link re-identification can also be achieved in active ways <cit.>, but in this paper, we focus on introducing the passive manners that achieve those goals.] Next, we focus on introducing what requirements passive attackers need to execute attacks passively.
§.§ Background Knowledge for Passive Attacks
Here, we first discuss some background knowledge that could contribute to the goal of node identity disclosure. Then, we list some background knowledge that could contribute to sensitive link re-identification attacks.
§.§.§ Background Knowledge for Node Identity Disclosure
In general, the background knowledge for achieving node identity disclosure helps attackers detect the uniqueness of victims (i.e., nodes in the published graph) and thus narrow down the candidate set to increase the probability of a successful attack. For example, assume that the attackers know some background knowledge ℋ about a target node; if they then observe the published graph and find 2 candidates satisfying the condition (i.e., ℋ), the attackers have 50% confidence in revealing the identity of that target node in the published graph. Next, we introduce some methods to acquire background knowledge.
Vertex Refinement Queries <cit.>. These are iterative queries which describe the local structure of the graph around a target node x. The initial query in vertex refinement queries is denoted as ℋ_0(x), which simply returns the label of node x in a labeled graph (or a constant ϵ in an unlabeled graph), and ℋ_1(x) returns the degree of node x. Then, iteratively, ℋ_i(x) is defined as the multiset of ℋ_i-1(·) queries on the 1-hop neighbors of node x, which can be expressed as follows.
ℋ_i(x) = {ℋ_i-1(z_1), ℋ_i-1(z_2), …, ℋ_i-1(z_d_x)}
where d_x is the degree of node x. For example, in a social network, ℋ_2(Bob)={1,1,4,4} means that Bob has four neighbors whose degrees are 1, 1, 4, and 4, respectively.
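As an illustration, vertex refinement queries take only a few lines of code on an adjacency structure; the following sketch (assuming networkx, with a toy graph chosen so that ℋ_2(Bob) reproduces the {1,1,4,4} example above) is one possible implementation:

import networkx as nx

def H(G, x, i):
    # H_0: constant label (unlabeled graph); H_1: degree;
    # H_i: multiset (sorted list) of H_{i-1} over x's 1-hop neighbors.
    if i == 0:
        return "eps"
    if i == 1:
        return G.degree(x)
    return sorted(H(G, z, i - 1) for z in G.neighbors(x))

G = nx.Graph([("Bob", "Ann"), ("Bob", "Bea"), ("Bob", "Carl"), ("Bob", "Dana"),
              ("Carl", "Dana"), ("Carl", "Eve"), ("Carl", "Fred"),
              ("Dana", "Eve"), ("Dana", "Fred")])
print(H(G, "Bob", 2))   # -> [1, 1, 4, 4]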
Subgraph Queries <cit.>. These queries assert the existence of a subgraph around a target node. Compared with the above vertex refinement queries, subgraph queries are more general (i.e., the information is not tied exclusively to a certain graph structure) and more flexible (i.e., the informativeness is not limited by the degree of a target node). In brief, the adversary is assumed capable of gathering some fixed number of edges around a target node x and figuring out what subgraph structure those collected edges form. For example, still targeting Bob in a social network, collecting 3 edges may reveal 3 distinct neighbors, and collecting 4 edges may reveal a tree rooted at Bob. The existence of such structures forms the background knowledge ℋ that attackers can use to reveal the identity of Bob. Also, different searching strategies can result in different subgraph structures: based on collecting 3 edges from Bob, breadth-first exploration may result in a star subgraph, while depth-first exploration may end up with a three-node path. We refer to <cit.>, where a range of searching strategies are tested to empirically illustrate the descriptive power of background knowledge.
Hub Fingerprint Queries <cit.>. First of all, a hub stands for a node that has a high degree and a high betweenness centrality (i.e., the proportion of shortest paths in the graph that include that node). Then, a hub fingerprint is the description of a node's connections to hubs. To be more specific, for a target node x, the corresponding hub fingerprint query ℋ_i(x) records the shortest distance towards each hub in the graph, where i is the limit of the measurable distance. For example, ℋ_1(Bob) = (1,0) means Bob is at distance 1 from the first hub and not reachable within distance 1 from the second hub, and ℋ_2(Bob) = (1,2) means that Bob is at distance 1 from the first hub and at distance 2 from the second hub.
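A hub fingerprint is equally easy to compute from truncated shortest-path distances; one possible sketch (assuming networkx; the hubs here are just named nodes, whereas in practice they would be selected by degree and betweenness centrality):

import networkx as nx

def hub_fingerprint(G, x, hubs, i):
    # Record the distance from x to each hub; hubs not reachable within
    # distance i are recorded as 0, matching the convention above.
    dist = nx.single_source_shortest_path_length(G, x, cutoff=i)
    return tuple(dist.get(h, 0) for h in hubs)

G = nx.Graph([("Bob", "Hub1"), ("Hub1", "Hub2")])
print(hub_fingerprint(G, "Bob", ["Hub1", "Hub2"], 1))   # (1, 0)
print(hub_fingerprint(G, "Bob", ["Hub1", "Hub2"], 2))   # (1, 2)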
Neighborhood Relationship Queries <cit.>. Targeting a node, if an adversary has background knowledge about its neighbors and the relationships among those neighbors, then the victim can be identified in the anonymized graph. To be specific, the neighborhood relationship query relies more on the isomorphism of the ego-graph (i.e., 1-hop neighborhood) of a target node to reveal its identity, compared with the iterative vertex refinement query <cit.> and the general subgraph query <cit.>. For example, in a social network, if Bob has two close friends who know each other (i.e., are connected) and two close friends who do not know each other (i.e., are not connected), then this unique information obtained by the adversary can be used to find Bob in the published anonymized graph.
§.§.§ Background Knowledge for Link Re-Identification
Link Prediction Probabilistic Model <cit.>. This probabilistic model is proposed to determine whether a relationship exists between two target nodes. Different kinds of background information (i.e., observations) can be leveraged to formalize the probabilistic model, such as (1) node attributes, e.g., two social network users who share the same interest are more likely to be friends; (2) existing relationships, e.g., two social network users in the same community are more likely to be friends; (3) structural properties, e.g., high-degree nodes are more likely to connect in a graph; and (4) inferred relationships (i.e., more complex observations based on inferring unobserved relationships), e.g., two social network users are more likely to be friends if they both are close friends of a third user.
Mathematically, the above observations can be used to predict the existence of a sensitive relation between node i and node j via P(e^s_ij|O), where e^s_ij stands for the sensitive relationship and O consists of several observations {o_1, …, o_n}. For example, if we use the second kind of information (i.e., existing relationships), then {o_1, …, o_n} is a set of edges between node i and node j with edge types other than s, denoted as e^l_ij, where l ∈{1, …, n} indexes the other edge types. To compute P(e^s_ij|O), the noisy-or model <cit.> can be used as suggested by <cit.>, where each observation o_l∈{o_1, … , o_n} is considered independent of the others and parameterized by λ_l∈{λ_1, … , λ_n}. Moreover, there is a leak parameter λ_0 to capture the probability that the sensitive edge is there due to other unmodeled reasons. Hence, the probability of a sensitive edge is expressed as follows.
P(e^s_ij = 1 | o_1, …, o_n) = 1 - ∏_l=0^n(1- λ_l)
where the superscript s in e^s_ij indicates the sensitive relationship type, and the details of fitting the values of λ_l can be found in <cit.>.
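Evaluating the noisy-or probability is a one-liner once the λ values are fitted; the following sketch (hypothetical λ values, and with observations gated by whether they actually hold) illustrates it:

def noisy_or(lam, observed):
    # lam[0] is the leak parameter lambda_0, always active;
    # lam[l] for l >= 1 belongs to observation o_l, counted only if o_l holds.
    p_absent = 1.0 - lam[0]
    for l, o in enumerate(observed, start=1):
        if o:
            p_absent *= 1.0 - lam[l]
    return 1.0 - p_absent

print(noisy_or([0.05, 0.4, 0.3], [True, True]))    # 1 - 0.95*0.6*0.7 = 0.601
print(noisy_or([0.05, 0.4, 0.3], [False, True]))   # 1 - 0.95*0.7 = 0.335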
Randomization-based Posterior Probability <cit.>. To identify a link, this approach is based on randomizing the published graph G' and counting the possible connections over a target pair of nodes i and j. Those counts are then used to form the posterior probability that determines whether there is a link between nodes i and j in the original graph G. Formally, the posterior probability for identifying the link e_ij in the original graph G is expressed as follows.
P(e_ij = 1 | G') = 1/N∑^N_s=11 (G'_s(i,j) == 1)
where the attacker applies a certain randomization mechanism on the published graph G' N times to get a sequence of graphs G'_s, s ∈{1, …, N}. In each G'_s, if there is an edge connecting the target nodes i and j, then the indicator function 1 (G'_s(i,j) == 1) counts one.
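The attacker-side computation can be sketched in a few lines (assuming networkx; the randomization mechanism here, degree-preserving edge switches, is just one plausible stand-in for whatever mechanism the data publisher announced):

import networkx as nx

def link_posterior(G_pub, i, j, mechanism, N=200):
    # Re-apply the randomization mechanism N times to the published graph
    # and count how often the target edge (i, j) is present.
    hits = 0
    for _ in range(N):
        hits += int(mechanism(G_pub.copy()).has_edge(i, j))
    return hits / N

def ten_random_switches(G):
    nx.double_edge_swap(G, nswap=10, max_tries=1000)
    return G

G_pub = nx.gnm_random_graph(30, 60, seed=7)
i, j = next(iter(G_pub.edges()))
print(link_posterior(G_pub, i, j, ten_random_switches))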
§.§ Privacy-Preserving Mechanisms
Here, we discuss some privacy-preserving techniques that are deliberately designed against specific attackers and also some general protection techniques that do not target specific attackers but can be widely applied.
§.§.§ Protection Mechanism Designed for Node Identity Disclosure
In general, these protection mechanisms are proposed to enlarge the candidate set of potential victims, i.e., to reduce the uniqueness of victims in the anonymized graphs.
k-degree Anonymization <cit.>. The motivation for k-degree anonymization is that the degree distribution is highly skewed in real-world graphs, such that it is usually effective to collect degree information (as background knowledge) to identify a target node. Therefore, this protection mechanism aims to ensure that, for any possible target node x, there exist at least k-1 other nodes in the published graph G' that share the same degree as x. In this way, it can largely prevent node identity disclosure even if the adversary has some background knowledge about the degree distribution. Obtaining such an anonymized graph G' takes two steps. First, for the original graph G with n nodes, the degree distribution is encoded into an n-dimensional vector 𝐝, where each entry records the degree of an individual node; then, based on 𝐝, the authors propose to create a new degree distribution 𝐝', which is k-anonymous with a tolerated utility loss (e.g., isomorphism cost) measured by the L_1 distance between the two vectors 𝐝 and 𝐝'. Second, based on the k-anonymous degree vector 𝐝', the authors propose to construct a graph G' whose degree distribution is identical to 𝐝'.
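The first (degree anonymization) step can be approximated greedily; the sketch below is a simplification — the original algorithm uses dynamic programming to minimize the L_1 cost ‖𝐝-𝐝'‖_1, whereas here we merely sort the degrees and raise each group of at least k consecutive entries to the group maximum:

def k_anonymous_degree_sequence(degrees, k):
    # Ensure every degree value occurs at least k times, at the cost of
    # raising some degrees; greedy stand-in for the DP of the paper.
    d = sorted(degrees, reverse=True)
    d_prime, i = [], 0
    while i < len(d):
        # Take k entries, or absorb the tail if it would be smaller than k.
        j = len(d) if len(d) - (i + k) < k else i + k
        d_prime.extend([d[i]] * (j - i))
        i = j
    return d_prime

print(k_anonymous_degree_sequence([5, 4, 3, 2, 2, 1], k=2))  # [5, 5, 3, 3, 2, 2]

For the second step, the resulting sequence must be graphical before a graph G' can be realized from it; with networkx this can be checked with nx.is_graphical and realized, e.g., with nx.havel_hakimi_graph.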
k-degree Anonymization in Temporal Graphs <cit.>. For temporal graphs (i.e., graphs whose structures and attributes depend on time <cit.>), this method aims to ensure that the temporal degree sequence of each node is indistinguishable from that of at least k-1 other nodes. On the other side, this method also tries to preserve the utility of the published graph as much as possible. To achieve the k-anonymity, the proposed method first partitions the n nodes in the original temporal graph G into m groups using k-means based on the distance between temporal degree vectors 𝐝, where 𝐝 is a T-dimensional vector that records the degree of a node at each timestamp t. To realize the utility, constrained by the cluster assignment, the method refines 𝐝 of each node into 𝐝' while minimizing the L_1 distance between the matrices 𝐃 and 𝐃' (which are stacks of 𝐝 and 𝐝'). After that, the anonymized temporal graph G' is constructed from 𝐃' and released for each timestamp individually.
k-degree Anonymization in Knowledge Graphs <cit.>. Different from ordinary graphs, knowledge graphs have rich attributes on nodes and edges <cit.>. Therefore, the k-degree is upgraded to the k-attributed degree, which aims to ensure that a target node in the anonymized knowledge graph has at least k-1 other nodes sharing the same attributes (i.e., node level) and degree (i.e., edge level) <cit.>. The k-degree anonymization solution is then upgraded in <cit.>, which aims to solve the challenge arising when the data provider wants to continually publish a sequence of anonymized knowledge graphs (e.g., the original graph needs to update, and so does the anonymized one). Then, in <cit.>, the k-ad (short for k-attributed degree) is extended to k^ω-ad, which targets defending against node identity disclosure across ω continuous anonymized versions of a knowledge graph. The basic idea is to partition nodes into clusters based on the similarity of node features and degree; then, for the knowledge graph updates (like newly inserted or deleted nodes), manual intervention is applied (e.g., adding fake nodes) to ensure the k^ω anonymity; finally, the anonymized knowledge graph is recovered from the clusters. This initial idea <cit.> is further formalized and materialized in <cit.>.
k-neighborhood Anonymization <cit.>. This protection is proposed to defend against node identity disclosure when the adversary possesses background knowledge about the neighborhood relationships of a target node (i.e., the Neighborhood Relationship Queries discussed in Subsection 2.2.1). The core idea is to insert nodes and edges into the original graph G to get an anonymized graph G', such that a target node x has multiple other nodes in G' whose neighborhood structures are isomorphic to its own. Given a pair of nodes v and u in graph G (suppose node v is the target), the authors first propose the neighborhood component and use a DFS-based coding to encode the ego-nets Neighbor_G(v) and Neighbor_G(u) into vectors. Then, by comparing the difference between Neighbor_G(v) and Neighbor_G(u), the authors greedily insert missing (labeled) nodes and edges (into Neighbor_G(v) or Neighbor_G(u)) to make Neighbor_G(v) and Neighbor_G(u) isomorphic. Those inserted nodes and edges turn G into G'; a checker for the resulting property is sketched below.
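A direct (if expensive) way to verify the resulting property is to test pairwise isomorphism of ego networks; a minimal checker, assuming networkx and ignoring node labels:

import networkx as nx

def is_k_neighborhood_anonymous(G, k):
    # Every node's 1-hop ego network must be isomorphic to that of at
    # least k-1 other nodes.  Quadratic in |V| with isomorphism tests,
    # so only practical for small graphs.
    egos = {v: nx.ego_graph(G, v) for v in G}
    for v in G:
        matches = sum(1 for u in G
                      if u != v and nx.is_isomorphic(egos[v], egos[u]))
        if matches < k - 1:
            return False
    return True

print(is_k_neighborhood_anonymous(nx.cycle_graph(6), k=2))   # True

For labeled graphs, as in <cit.>, the isomorphism test would additionally have to match node labels (e.g., via the node_match argument of nx.is_isomorphic).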
k-automorphism Anonymization <cit.>. This method is proposed against structural queries by attackers, especially the subgraph queries (as discussed in Subsection 2.2.1). Basically, given an original graph G, this method produces an anonymized graph G' to publish, where G is a subgraph of G' and G' is k-automorphic. To do this, the authors propose the KM algorithm, which partitions the original graph G and adds crossing-edge copies into G to convert G into G'. Hence, G' satisfies the k-different-match principle to defend against subgraph query attacks, which means that for any subgraph query there are at least k different matches in G', and those matches do not share any nodes.
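The k-different-match principle itself can be spot-checked by greedily collecting node-disjoint occurrences of a query subgraph; a sketch assuming networkx (greedy selection can undercount, so a True answer is reliable while False is not a proof of failure):

import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def has_k_disjoint_matches(G_pub, Q, k):
    # Greedily collect node-disjoint copies of the query Q inside G_pub.
    used, count = set(), 0
    for mapping in GraphMatcher(G_pub, Q).subgraph_isomorphisms_iter():
        nodes = set(mapping)          # keys are nodes of G_pub
        if used.isdisjoint(nodes):
            used |= nodes
            count += 1
            if count >= k:
                return True
    return False

two_paths = nx.disjoint_union(nx.path_graph(3), nx.path_graph(3))
print(has_k_disjoint_matches(two_paths, nx.path_graph(3), k=2))   # True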
§.§.§ Protection Mechanism Designed for Link Re-Identification
The general idea of the solutions here is to reduce the confidence of attackers (usually formalized by a probabilistic model) in inferring or predicting links from observing the published anonymized graphs.
Intact Edges <cit.>. This solution is straightforward and trivial. Given that the link re-identification attacker aims to predict a target link between two nodes, with the corresponding link type (i.e., edge type) denoted as s, the intact-edges strategy removes all s-type edges from the original graph G and publishes the rest as the anonymized graph G'. The remaining edges are left intact, hence the name.
Partial-edge Removal <cit.>. This approach also removes edges from the original graph G to publish the anonymized graph G'. Instead of exhaustively removing all sensitive (s-type) edges in G, partial-edge removal deletes only a subset of the existing edges. The removed edges are selected based on whether their existence contributes to the exposure of sensitive links, e.g., they are sensitive edges themselves or they connect high-degree nodes; the removals can even be selected randomly.
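To make the procedure concrete, a minimal sketch is given below (assuming edges carry a hypothetical `type` attribute marking sensitivity; the attribute name and the uniform-random selection criterion are illustrative, not prescribed by the original work):

```python
import random
import networkx as nx

def partial_edge_removal(G, sensitive_type="s", removal_fraction=0.3, seed=0):
    """Publish G' by removing a fraction of the sensitive edges of G.
    The 'type' edge attribute and the uniform-random selection are
    illustrative assumptions; real criteria may target high-degree nodes."""
    rng = random.Random(seed)
    G_prime = G.copy()
    sensitive = [(u, v) for u, v, t in G.edges(data="type") if t == sensitive_type]
    for u, v in rng.sample(sensitive, int(removal_fraction * len(sensitive))):
        G_prime.remove_edge(u, v)
    return G_prime
```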
Cluster-edge Anonymization <cit.>. This method requires that the original graph G be partitioned into clusters (or so-called equivalence classes) to publish the anonymized graph G'. The intra-cluster edges are removed so that each cluster is aggregated into a supernode (i.e., the number of clusters in G becomes the number of nodes in G'), while the inter-cluster edges are preserved in G'. More specifically, each edge whose type is not sensitive (i.e., not of type s) is preserved in G' if it connects two different clusters; otherwise, it is removed. Note that this method requires a clustering pre-processing step, which means it can cooperate with node anonymization methods. For example, k-anonymization <cit.> can first be applied to the original graph G to identify the equivalence classes, i.e., which nodes are equivalent in terms of k-anonymization (for example, nodes with the same degree).
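A minimal sketch of the supernode construction might look as follows (the `cluster_of` mapping is assumed to come from a prior clustering or k-anonymization step, and the `type` edge attribute is an illustrative placeholder):

```python
import networkx as nx

def cluster_edge_anonymize(G, cluster_of, sensitive_type="s"):
    """Collapse each cluster into a supernode and keep only non-sensitive
    edges that cross clusters. `cluster_of` maps node -> cluster id and is
    assumed to come from a prior clustering / k-anonymization step; the
    'type' edge attribute is an illustrative placeholder."""
    G_prime = nx.Graph()
    G_prime.add_nodes_from(set(cluster_of.values()))
    for u, v, t in G.edges(data="type"):
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv and t != sensitive_type:  # inter-cluster, non-sensitive
            G_prime.add_edge(cu, cv)
    return G_prime
```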
Cluster-edge Anonymization with Constraints <cit.>. This method is an upgraded version of cluster-edge anonymization, proposed to strengthen the utility of the anonymized graph G' by adjusting the edges between clusters (i.e., equivalence classes). The core idea is to require that the equivalence-class nodes (i.e., cluster nodes or supernodes in G') satisfy the same constraints as any two nodes in the original graph G. For example, if there can be at most two edges of a certain type between nodes in G, then there can be at most two edges of that type between the cluster nodes in G'.
§.§.§ General Privacy Protection Mechanisms
Besides the protections designed deliberately against node identity disclosure and link re-identification risks, there are other protection mechanisms that are not tailored to a specific kind of attacker but address more general and comprehensive scenarios, such as randomized mechanisms with constraints and differential privacy schemas. We discuss these research works next.
Graph Summarization <cit.>. This method publishes a set of anonymized graphs G' given an original graph G through graph summarization. Specifically, it relies on a pre-defined partitioning method to split the original graph G into several clusters, each of which serves as a node in an anonymized graph G'. Different strategies for connecting nodes in G' yield different variants of G', so a sequence of anonymized graphs can be produced. The detailed connection strategies can be found in <cit.>.
Switching-based Graph Generation <cit.>. Here, the authors aim to publish an anonymized graph G' that also preserves the utility of the original graph G. They therefore propose a graph generation method based on switching operations that preserve graph features. The switching is realized in an iterative Monte Carlo manner: each time, two edges (a, b) and (c, d) are selected and switched into (a, d) and (b, c), or (a, c) and (b, d). The authors require that two selected edges are switchable only if the switching creates no duplicate edges or self-loops, so the overall degree distribution does not change. After sufficiently many Monte Carlo switching operations, the authors show that the original graph features (e.g., eigenvalues of the adjacency matrix, eigenvectors of the Laplacian matrix, harmonic mean of the geodesic path, and graph transitivity) are largely preserved in the anonymized graph G'.
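A minimal sketch of such a degree-preserving switching loop is shown below (NetworkX also ships a comparable built-in, `nx.double_edge_swap`; the rejection rules here follow the constraint described above):

```python
import random
import networkx as nx

def rand_switch(G, num_switches=10000, seed=0):
    """Degree-preserving Monte Carlo switching: repeatedly pick two edges
    (a, b), (c, d) and rewire them to (a, d), (c, b), rejecting any switch
    that would create a self-loop or a duplicate edge."""
    rng = random.Random(seed)
    G_prime = G.copy()
    done = 0
    while done < num_switches:
        (a, b), (c, d) = rng.sample(list(G_prime.edges()), 2)
        if len({a, b, c, d}) < 4:                             # would create a self-loop
            continue
        if G_prime.has_edge(a, d) or G_prime.has_edge(c, b):  # duplicate edge
            continue
        G_prime.remove_edge(a, b)
        G_prime.remove_edge(c, d)
        G_prime.add_edge(a, d)
        G_prime.add_edge(c, b)
        done += 1
    return G_prime
```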
Spectral Add/Del and Spectral Switch <cit.>. The idea of this method starts from Rand Add/Del and Rand Switch. Rand Add/Del randomly adds one edge after deleting another and repeats this multiple times, so that the total number of edges in the anonymized graph does not change. Rand Switch randomly switches a pair of existing edges (t,w) and (u, v) into (t,v) and (u,w) (provided (t,v) and (u,w) do not already exist in the graph), so that the overall degree distribution does not change. In <cit.>, the authors develop the spectrum-preserving randomization methods Spectral Add/Del and Spectral Switch, which preserve the largest eigenvalue λ_1 of the adjacency matrix 𝐀 and the second-smallest eigenvalue μ_2 of the Laplacian matrix 𝐋 = 𝐃 - 𝐀. Specifically, the authors first investigate which edge modifications increase or decrease λ_1 and μ_2 in the anonymized graph, and then select edges from the appropriate categories for Rand Add/Del and Rand Switch so that λ_1 and μ_2 do not change too much.
RandWalk-Mod <cit.>. This method injects connection uncertainty by iteratively copying each existing edge from the original graph G to an initially empty graph G' with a certain probability, while guaranteeing that the degree distribution of G' is unchanged compared with G. Starting from each node u in the original graph G, the method first obtains u's neighbor set in G, denoted 𝒩_u. Then, for each node in 𝒩_u, it runs multiple random walks and denotes the terminal node of each walk as z. Finally, RandWalk-Mod adds the edge (u, z) to G' with certain probabilities under different conditions (e.g., 0.5, a predefined probability α, or (0.5 d_u - α)/(d_u - 1), where d_u is the degree of node u in G).
Next, we introduce an important component of graph privacy-preserving techniques: differential privacy <cit.>. The general idea of differential privacy is that two adjacent graphs (e.g., graphs differing in one node/edge) should be indistinguishable through the outputs of a randomized permutation algorithm ℳ; such an algorithm ℳ is said to satisfy differential privacy. The underlying intuition is that the randomness of ℳ prevents a small divergence in the input from producing a considerably different output distribution, i.e., ℳ itself is not the cause of a privacy leak. If the indistinguishability is measured by ϵ, the algorithm is usually called an ϵ-differential-privacy algorithm. The basic idea can be expressed as follows.
Pr[ℳ(G) ∈ S]/Pr[ℳ(G') ∈ S]≤ e^ϵ
where G and G' are adjacent graphs, ℳ is the differential-privacy algorithm, S is any subset of possible outputs, and ϵ is the privacy budget. The inequality states that, for any output set, the output probabilities on two adjacent graphs are almost equivalent.
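As a concrete illustration, the Laplace mechanism below releases an edge-level ϵ-DP degree histogram (the choice of the degree histogram as the release target is illustrative; the L1 sensitivity of 4 follows because adding or removing one edge moves two nodes between degree bins):

```python
import numpy as np
import networkx as nx

def dp_degree_histogram(G, epsilon, seed=0):
    """Release an edge-level epsilon-DP degree histogram via the Laplace
    mechanism. One edge insertion/deletion shifts the degrees of two nodes,
    changing at most four histogram bins by one each, so the L1 sensitivity
    is 4. The release target (a degree histogram) is illustrative."""
    rng = np.random.default_rng(seed)
    hist = np.bincount([d for _, d in G.degree()]).astype(float)
    sensitivity = 4.0
    return hist + rng.laplace(0.0, sensitivity / epsilon, size=hist.shape)
```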
Within the context of graph privacy, the differential privacy algorithm can be roughly categorized as edge-level differential privacy and node-level differential privacy. Given the input original graph G, the output graph of the differential algorithm ℳ(G) can be used as the anonymized graph G' to publish.
Edge-level Differential Privacy Graph Generation. We first introduce edge-level differential privacy algorithms, in which the privacy algorithm makes edge-adjacent graphs (e.g., graphs with a one-edge difference) indistinguishable.
* DP-1K and DP-2K Graph Model <cit.>. This edge-level differential privacy algorithm is proposed with utility-preservation concerns for complex degree distributions. Here, the 1K-distribution, denoted by P_1(G), is the ordinary node degree distribution of graph G, e.g., if the number of nodes with degree 1 is 10, then P_1(1) = 10; if the number of nodes with degree 2 is 5, then P_1(2) = 5; etc. The 2K-distribution, denoted by P_2(G), is the joint degree distribution of graph G, e.g., the number of edges connecting an i-degree node and a j-degree node, iterating over i and j; P_2(2,3) = 6 means that the number of edges in G connecting a 2-degree node and a 3-degree node is 6. Hence, the DP-1K (or DP-2K) graph model first computes the 1K- (or 2K-) degree distribution P_1(G) (or P_2(G)) and then perturbs the degree distribution under edge-level DP to obtain P_1(G)' (or P_2(G)'). Finally, an off-the-shelf graph generator (e.g., <cit.>) is called to build the anonymized graph G' based on P_1(G)' (or P_2(G)').
* Local Differential Privacy Graph Generation (LDPGEN) <cit.> is motivated by perturbing the connection distribution, i.e., proportionally flipping existing edges to non-existing and vice versa. To make the generated graph preserve the original utility, LDPGEN <cit.> first partitions the original graph G into disjoint clusters and adds Laplacian noise to each node's degree vector in each cluster, which guarantees local edge-level differential privacy. After that, an estimator is used to estimate the connection probabilities of intra-cluster and inter-cluster edges from the noisy degree vectors, and the anonymized graph G' is generated accordingly.
* Differentially Private Graph Sparsification <cit.>. On the one hand, this method constrains the number of edges in the anonymized graph G' to be smaller than in the original graph G to a certain extent. On the other hand, it requires that the Laplacian of the anonymized graph G' approximate that of the original graph G (see Eq. 1 in <cit.>). The two objectives are unified into an edge-level differential-privacy framework, and the new graph G' is obtained by solving an SDP (semi-definite programming) problem.
* Temporal Edge-level Differential Privacy. In <cit.>, two temporal graphs are adjacent if they differ in only one update (i.e., the existence or non-existence of a temporal edge, or different weights of an existing temporal edge). Building on the Priv-Graph algorithm (i.e., adding noise to the graph Laplacian matrix), Sliding-Priv-Graph <cit.> is proposed to (1) take recent updates into account while ensuring temporal edge-level differential privacy and (2) meet the smooth Laplacian property (i.e., positive semi-definiteness of consecutive Laplacian matrices). Moreover, in <cit.>, the authors distinguish edge-adjacency from node-adjacency in temporal graphs: two temporal graphs are node-adjacent (or edge-adjacent) if they differ in only one node (or edge) insertion or deletion.
* Deep Graph Models with Differential Privacy. Following the synergy of deep learning and differential privacy <cit.>, another way to preserve privacy is to target the gradients of deep graph learning models. In <cit.>, a deep graph generative model called DPGG_AN is proposed under edge-level differential-privacy constraints, where the privacy protection mechanism is executed during the gradient-descent phase of the generative learning process by adding Gaussian noise to the gradients of the deep learning model.
Node-level Differential Privacy Graph Generation. Compared with edge-level differential privacy, node-level differential privacy is harder to formalize and solve. In <cit.>, the authors contribute several theoretical node-level differential-privacy solutions, such as flow-based and LP-based Lipschitz extensions, but these focus on releasing certain graph properties rather than the graph data itself, such as anonymized degree distributions and subgraph counts. A similar research flavor appears in related node-level differential-privacy works such as <cit.>. Since differential-privacy mechanisms on graphs are a large and comprehensive topic, a more detailed introduction and extensive literature review can be found in <cit.>.
§.§ Other Aspects of Graph Anonymization
Here, we also review several graph anonymization techniques that differ from the majority discussed above: instead of publishing an anonymized graph G', they anonymize non-trivial graph statistics of the original graph G and release those to the public <cit.>. The central motivation for protecting graph statistics is that some scalar graph parameters are essential for describing the graph topology (e.g., degree distributions) or even reconstructing it (e.g., the number of nodes and the edge-connection probability of an Erdos-Renyi graph). To this end, some methods focus on protecting important graph parameters and their statistics before release. For example, the spectrum of a graph (i.e., the eigen-decomposition of the graph Laplacian matrix) preserves many important graph properties such as topological connectivity and low-pass or high-pass graph signal filters. Therefore, in <cit.>, the authors propose to perturb the eigen-decomposition under differential privacy and then release the perturbed parameters: given the original eigenvalues and eigenvectors, calibrated random noises are sampled and added to them under the differential-privacy constraint. Under the same protection mechanism, i.e., differential privacy, the protection target is set to the number of occurrences of subgraphs in <cit.>, the degree-distribution sequences of directed and undirected graphs in <cit.>, and the edge-connection probability of random graphs in <cit.>.
§.§ Challenges and Future Opportunities
After introducing different graph anonymization techniques, we would like to share some open questions and corresponding challenges.
§.§.§ Preserving Privacy for Temporal Graphs
As discussed above, most privacy-preserving graph anonymization methods still treat the input graphs as static. However, in complex real-world scenarios, graphs usually evolve over time <cit.>, which poses critical challenges to current privacy-preserving static graph generation. In other words, the time domain enriches the node attribute dimension and may also dictate the attribute distribution, which increases exposure risk. For example, some graphs contain multiple dynamics, and accurately representing them can contribute to graph tasks like classification <cit.>; but the existence of various dynamics increases the probability of a node being unique and enlarges the leakage risk.
§.§.§ Preserving Privacy for Heterogeneous Graphs
During node identity disclosure and link re-identification, it can be observed that the majority of background knowledge comes solely from structural queries, which is already powerful enough. In heterogeneous graphs <cit.>, the abundant node and edge features further increase the risk of leaking sensitive information and challenge protection mechanisms, especially when the heterogeneous graphs start to evolve <cit.>.
To the best of our knowledge, how to generate privacy-preserving heterogeneous or temporal graphs remains open.
* What kind of feature information is sensitive in heterogeneous or time-evolving graphs and should be hidden in the generated graph?
* If the corresponding sensitive information is determined, what techniques are effective for protecting structures and features in the heterogeneous or time-evolving environment?
* Last but not least, if the corresponding protection mechanism is designed, how to maintain the generation utility simultaneously with privacy constraints?
§ GRAPH DATA PRIVACY-PRESERVING COMPUTATION
In recent years, graph machine learning has become increasingly popular due to the abundance of graph-structured data in various domains, such as social networks, recommendation systems, and bioinformatics. However, graph data are usually distributed across multiple data sources, and each data owner alone may not have enough data to train a satisfactory machine learning model, which requires a massive amount of graph data. For example, biochemical industries may wish to collaboratively train a graph neural network model to predict the properties of molecules. While we introduced one solution, privacy-preserving graph data generation, in the last section, another solution is to enable multi-party computation without exchanging raw data. In this section, we introduce federated learning (FL) <cit.>, a machine learning system where multiple clients (i.e., data owners) collaboratively train machine learning models without exchanging their raw data. In particular, we first introduce the framework of federated learning and its applications to graph data in Subsection <ref>. Then we introduce important FL algorithms under three representative graph federated learning scenarios: graph-level FL (Subsection <ref>), subgraph-level FL (Subsection <ref>), and node-level FL (Subsection <ref>). Finally, we summarize the challenges and future opportunities of graph FL in Subsection <ref>.
§.§ Framework and Applications of Federated Learning
Federated learning (FL) <cit.> is a distributed learning system where multiple clients (i.e., data sources) collaborate to train a machine learning model under the orchestration of the central server (i.e., the service provider), while keeping their data decentralized and private <cit.>. This subsection provides an exposition on the FL framework, followed by an overview of the application of federated learning on graph data.
§.§.§ Federated Learning Framework
A typical FL framework has one central server and N clients, each with its own dataset 𝒟_i. The main steps can be summarized as follows:
* Parameter broadcasting. The server broadcasts the current global model to (selected) clients.
* Local update. Each client locally trains its local model.
* Parameter uploading. Each client uploads its model update back to the server.
* Model aggregation. The server aggregates the model updates collected from clients and updates the global model.
* Repeat: Steps 1-4 are repeated for multiple communication rounds until the global model converges to satisfactory performance.
One of the most popular FL algorithms is FedAvg <cit.>. In each communication round, the server randomly selects a subset of clients and broadcasts the global model to them. Each client locally updates the model with multiple iterations of stochastic gradient descent and uploads its local model back to the server. Finally, the server computes a weighted average of the local model parameters and updates the global model. Algorithm <ref> gives the pseudo-code of FedAvg. Notice that in FedAvg, local data never leave the client side. Besides FedAvg, most FL algorithms strictly follow the aforementioned training protocol <cit.> or roughly follow it with a few modifications <cit.>.
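The following is a minimal PyTorch sketch of one FedAvg communication round (the client data interface and the cross-entropy classification task are illustrative assumptions, not part of the original algorithm specification):

```python
import copy
import random
import torch

def fedavg_round(global_model, clients, frac=0.1, local_epochs=1, lr=0.01):
    """One FedAvg communication round: broadcast, local SGD, weighted average.

    `clients` is assumed to be a list of (dataloader, num_samples) pairs;
    the cross-entropy loss and model interface are illustrative placeholders."""
    selected = random.sample(clients, max(1, int(frac * len(clients))))
    total = sum(n for _, n in selected)
    avg_state = None
    for loader, n in selected:
        local = copy.deepcopy(global_model)            # Step 1: broadcast
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):                  # Step 2: local update
            for x, y in loader:
                opt.zero_grad()
                torch.nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
        w = n / total                                  # Steps 3-4: upload + aggregate
        state = {k: w * v for k, v in local.state_dict().items()}
        avg_state = state if avg_state is None else {
            k: avg_state[k] + state[k] for k in state}
    global_model.load_state_dict(avg_state)
    return global_model
```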
FL protects client privacy in two main ways. Firstly, instead of transmitting the raw data, FL transmits only the model parameters, which are updated based on the local data of each client. By doing so, FL ensures that sensitive data remains on the client's device and is not transmitted to the central server and other clients. Secondly, the model parameters uploaded to the server only reveal the distribution of local data, rather than individual data points. This approach helps to maintain privacy by obscuring the specific data points used to train the model.
FL can be equipped with differential privacy mechanisms <cit.> to enhance privacy protection. As described in the last section, differential privacy is a technique that involves adding noise to data in order to obscure individual contributions while still maintaining overall data patterns. However, different from graph generation, where the noise is added to the data (e.g., node feature, edges, etc), in the context of FL, the noise is added to the uploaded and downloaded model parameters. This ensures that even if an attacker were to obtain the model parameters, they would not be able to accurately infer the raw data from the model parameter. By adding moderate noise to the parameters, the model's accuracy may be slightly reduced, but the overall performance remains comparable to non-private models. In summary, by using differential privacy mechanisms, FL can achieve even better privacy protection by making it harder for attackers to identify the sensitive data contributed by individual clients.
§.§.§ Application of Graph Federated Learning
In this part, we introduce important applications of federated learning on graph data. Roughly, we survey three representative application scenarios: graph-level FL, subgraph-level FL, and node-level FL.
* Graph-level FL: Each client has one or several graphs, while different graphs are isolated and independent. One typical application of graph-level FL is for drug discovery <cit.>, where biochemical industries collaborate to train a graph neural network model predicting the property of molecules. Each molecule is a graph with basic atoms as nodes and chemical bonds as edges.
* Subgraph-level FL: Each client has one graph, while each graph is a subgraph of an underlying global graph. One representative application of subgraph-level FL is for financial transaction data <cit.>. Each FL client is a bank that keeps a graph encoding the information of its customers, where nodes are individual customers and edges are financial transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing cross-client edges. Thus, each bank's own graph is a subgraph of an underlying global graph.
* Node-level FL: Each client is a node of a graph, and edges are the pairwise relationships between clients, e.g., their distribution similarity or data dependency. One example is the smart city, where clients are traffic sensors deployed on the road and linked to geographically adjacent sensors. While clients form a graph, each client can make an intelligent decision based on the collected road conditions and nearby devices.
Figure <ref> illustrates the three application scenarios above. Next, we investigate each application scenario in the following three subsections individually.
§.§ Graph-level FL
In this subsection, we investigate graph-level FL. Graph-level FL is a natural extension of traditional FL: while each client has one or several graphs, different graphs are isolated and independent. The goal of each client is to train a graph neural network (GNN) model for a variety of local tasks, e.g., node-level (e.g., node classification), link-level (e.g., edge prediction), or graph-level (e.g., graph classification).
One of the most representative applications of graph-level FL is drug discovery, where graphs are molecules with atoms as nodes and chemical bonds as edges. Each FL client can be a pharmaceutical corporation that owns molecule data. Multiple corporations collaborate to train better model for molecular property prediction.
The biggest challenge of graph-level FL is the non-identical distribution of different clients' data. Since each client in FL collects its local data individually, the local datasets usually have different distributions. For example, different pharmaceutical corporations may focus on different types of molecules. Such heterogeneity among clients' data distributions introduces optimization challenges to FL. Moreover, when clients' distributions are largely different, it might be harmful or even impossible to train one universal global model across all clients, and more sophisticated techniques are required to achieve beneficial collaboration.
Next, we will introduce algorithms for graph-level FL in two parts: global federated learning and personalized federated learning. Since graph-level FL is a natural extension of traditional FL, we will cover both general FL algorithms and graph FL algorithms.
§.§.§ Global Federated Learning
Global federated learning (GFL) aims to train a shared global model for all clients. FedAvg <cit.> provides an initial solution for training GNNs with isolated graphs from multiple clients. However, when clients have significantly different underlying distributions, FedAvg needs many more communication rounds to converge to a satisfactory model and may converge to a sub-optimal solution <cit.>. This phenomenon of worse convergence is usually explained by weight divergence <cit.>, i.e., even with the same parameter initialization, the model parameters of different clients become substantially different after the first local stochastic gradient descent (SGD) steps. With different model parameters, the mean of the client gradients can differ from the gradient in centralized SGD and introduce error into the model loss <cit.>.
Data-sharing.
To tackle the non-IID challenge to FL optimization, a simple but effective method is to share a small amount of data among clients. <cit.> first explore an association between the weight divergence and the non-IIDness of the data, and propose a method to share a small amount of data among the server and all clients. As a result, the accuracy can be increased by 30% for the CIFAR-10 dataset <cit.> with only 5% globally shared data. <cit.> further improves the privacy of this approach by sharing the average of local data points, instead of raw data. Specifically, each client uploads averaged data, receives averaged data from other clients, and performs Mixup <cit.> data augmentation locally to alleviate weight divergence. However, both methods require modification of the standard FL protocol and transmission of data. Another way to improve privacy is to share synthetic data generated by generative adversarial networks (GANs) <cit.>, instead of the raw data. The synthetic data can be a collection of each client's synthetic data generated with local GANs or generated with one global GAN trained in FL <cit.>. However, it is unclear whether GAN can provide enough privacy, since it may memorize the training data <cit.>.
Modifying local update.
Another line of research modifies the local update procedure to alleviate weight divergence without changing the communication protocol of FL. FedProx <cit.> adds a proximal term to the local objective to stabilize the training procedure. The proximal term is the squared L2 distance between the current global model and the local model, which prevents the local model from drifting too far from the global model. SCAFFOLD <cit.> estimates how local updates deviate from the global update and then corrects the local updates via variance reduction. Based on the intuition that the global model can learn better representations than local models, MOON <cit.> conducts contrastive learning at the model level, encouraging agreement between the representations learned by the local and global models.
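For instance, the FedProx local objective can be sketched as follows (the cross-entropy task loss is an illustrative assumption; only the proximal term is specific to FedProx):

```python
import torch

def fedprox_local_step(local_model, global_params, batch, optimizer, mu=0.01):
    """One FedProx local step: task loss plus the proximal term
    (mu/2) * ||w - w_global||^2, which keeps the local model close to the
    global model and mitigates client drift. `global_params` is a frozen
    snapshot of the global model's parameters; the task loss is illustrative."""
    x, y = batch
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(local_model(x), y)
    prox = sum(((w - w_g.detach()) ** 2).sum()
               for w, w_g in zip(local_model.parameters(), global_params))
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()
```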
§.§.§ Personalized Federated Learning
While the aforementioned algorithms can accelerate the model optimization for GFL, one model may not always be ideal for all participating clients <cit.>. Recently, personalized federated learning (PFL) has been proposed to tackle this challenge. PFL allows FL clients to collaboratively train machine learning models while each client can have different model parameters.
Clustered FL.
In clustered FL, clients are partitioned into non-overlapping groups. Clients in the same group share the same model, while clients from different groups can have different model parameters. In IFCA <cit.>, k models are initialized and transmitted to all clients in each communication round, and each client picks the model with the smallest loss value to optimize. FedCluster <cit.> iteratively bi-partitions the clients based on the cosine similarity of their gradients. GCFL <cit.> generalizes this idea to graph data, enabling collaborative training with graphs from different domains. Observing that the gradients of GNNs can fluctuate, GCFL+ <cit.> uses a gradient-sequence-based clustering mechanism to form more robust clusters.
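The cluster-selection step of IFCA can be sketched as follows (the cross-entropy loss is an illustrative placeholder for each client's local objective):

```python
import copy
import torch

@torch.no_grad()
def ifca_select(models, loader):
    """IFCA-style cluster selection: a client evaluates all k cluster models
    on its local data and adopts the one with the smallest loss; it then
    optimizes only that model in the current round. The loss is illustrative."""
    losses = []
    for m in models:
        total, n = 0.0, 0
        for x, y in loader:
            total += torch.nn.functional.cross_entropy(
                m(x), y, reduction="sum").item()
            n += y.numel()
        losses.append(total / n)
    best = min(range(len(models)), key=lambda i: losses[i])
    return best, copy.deepcopy(models[best])
```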
Personalized Modules.
Another prevalent way for PFL is personalized modules. In these works, the machine learning model is divided into two parts: a shared part and a personalized part. The key is to design a model structure suitable for personalization. For example, when a model is split into a feature extractor and a classifier, FedPer <cit.> shares the feature extractor and personalizes the classifier, while LG-FedAvg <cit.> personalizes the feature extractor and shares the classifier. Similar techniques are used in FMTGL <cit.> and NGL-FedRep <cit.>. Moreover, PartialFed <cit.> can automatically select which layers to personalize and which layers to share. On graph data, <cit.> observe that while the feature information can be very different, some structural properties are shared across various domains, revealing the great potential for sharing structural information in FL. Inspired by this, they propose FedStar, which trains a feature-structure decoupled GNN: the structural encoder is globally shared across all clients, while the feature-based knowledge is personalized.
Local Finetuning and Meta-Learning.
Finetuning is widely used for PFL. In these works, a global model is first trained with all clients. The global model encodes the information of the population but may not adapt to each client's own distribution. Therefore, each client locally finetunes the global model with a few steps of gradient descent. Besides vanilla finetuning, Per-FedAvg <cit.> combines FL with MAML <cit.>, an algorithm for meta-learning, to improve the performance of finetuning. Similarly, pFedMe <cit.> utilizes Moreau envelopes for personalization: it adds a proximal term to the local finetuning objective and aims to find a local model near the global model with just a few steps of gradient descent. GraphFL <cit.> applies a similar meta-learning framework to graph data, addressing the heterogeneity among graph data and handling new label domains with a few newly labeled nodes.
Multi-task Learning.
PFL is also studied within the framework of multi-task learning. MOCHA <cit.> uses a matrix to model the similarity between each pair of clients; clients with similar distributions are encouraged to have similar model parameters. FedGMTL <cit.> generalizes this idea to graph data. Similarly, SemiGraphFL <cit.> computes pairwise cosine similarities among clients' hidden representations, so clients with more similar data have greater mutual influence; however, it requires the transmission of hidden representations. In <cit.>, each client's distribution is assumed to be a mixture of unknown underlying distributions, and FedEM, an EM-like algorithm for multi-task FL, is proposed. Finally, FedFOMO <cit.> allows each client to use a different mixture weight over local models during the aggregation step, providing a flexible way for model aggregation.
Graph Structure Augmentation.
In the previous works, graph structures are considered as ground truth. However, graphs can be noisy or incomplete, which can hurt the performance of GNNs. To tackle incomplete graph structures, FedGSL <cit.> optimizes the local client's graph and GNN parameters simultaneously.
§.§ Subgraph-level FL
Similar to graph-level FL, each client in subgraph-level FL holds one graph. However, each client's graph is a subgraph of a latent, larger global graph. In other words, the entire graph contains cross-client edges whose two endpoints belong to different clients. The task is usually node-level, and the cross-client edges can contribute to it.
One application of subgraph-level FL is financial fraud detection. Each FL client is a bank aiming to detect potential fraud with transaction data. Each bank keeps a graph of the information of its customers, where nodes are individual customers and edges are transaction records. While each bank holds its own graph, customers in one bank may have connections to customers in another bank, introducing edges across clients. These cross-client edges help to train better ML models.
The biggest challenge for subgraph-level FL is to handle cross-client edges. In GNNs, each node iteratively aggregates information from its neighboring nodes, which may be from other clients. However, during local updates in traditional FL, clients cannot get access to the data from other clients. Directly exchanging raw data among clients is prohibited due to privacy concerns. It is challenging to enable cross-client information exchange while preserving privacy. Moreover, when nodes' identities are not shared across clients, the cross-client edges can be missing and stored in none of the clients. Even if we collect clients' local subgraphs, we cannot reconstruct the global graph.
In this subsection, we will mainly focus on two scenarios. In the first part, we introduce algorithms when the hidden entire graph is given but stored separately in different clients. In the second part, we consider a more challenging setting: the cross-client edges are missing, and we cannot simply concatenate local graphs to reconstruct the entire graph losslessly. We focus on how to generate these missing edges or missing neighbors for each node.
§.§.§ Cross-client Propagation
When the cross-client edges are available, the major challenge is to enable cross-client information propagation without leaking raw data. FedGraph <cit.> designs a novel cross-client convolution operation to avoid sharing raw data across clients, avoiding the exchange of representations in the first GCN layer. Similarly, FedPNS <cit.> controls the number of sampled neighbors to reduce communication costs.
FedCog <cit.> proposes a graph decoupling operation that splits each local graph into an internal graph and a border graph. The graph convolution is accordingly divided into two sequential steps, internal propagation and border propagation, during which each client sends the intermediate representations of internal nodes to other clients.
Directly exchanging feature representations between clients can still leak private information. For user-item graphs, FedPerGNN <cit.> designs a privacy-preserving user-item graph expansion protocol: clients upload encrypted item IDs to a trusted server, and the server matches the ciphertexts of item IDs to find clients with overlapping items. DP-FedRec <cit.> uses private set intersection to exchange edge information between clients and applies differential-privacy techniques to further protect privacy.
Different from the above methods, FedGCN <cit.> does not rely on communication between clients. Instead, it transmits all the information needed to train a GCN between the server and each client, only once before the training. Moreover, each node at a given client only needs to know the accumulated information about the node's neighbors, which reduces possible privacy leakage.
§.§.§ Missing Neighbors
For some applications, the cross-client edges can be missing or stored in none of the clients. Notice that although each client also holds a disjoint graph in graph-level FL, graph-level FL and subgraph-level FL with missing neighbors are substantially different. For graph-level FL, there are essentially no cross-client edges; for example, there are no chemical bonds between two molecules from different corporations' datasets. For subgraph-level FL, however, the cross-client edges exist but are missing in certain applications. We may obtain suboptimal GNN models if we ignore the existence of cross-client edges. Therefore, the major challenge is to reconstruct these missing edges, or to reconstruct each node's missing neighbors.
FedSAGE <cit.> first defines the missing-neighbor challenge and proposes a method to generate pseudo-neighbors for each node. It uses the existing subgraphs to train a neighbor generator that produces one-hop neighbors for each client to mend its graph. Since missing neighbors are generated locally, no feature exchange is required between clients once the local subgraphs are mended; however, training the neighbor generator requires cross-client exchanges of hidden representations. Similarly, FedNI <cit.> uses a graph GAN model to generate missing nodes and edges.
§.§ Node-level FL
The final application scenario of graph federated learning is node-level FL. Different from the two aforementioned scenarios, each client in node-level FL can hold any type of data, not necessarily graphs. Instead, the clients themselves are nodes in a graph, with edges capturing their pairwise communication links or distribution similarity.
One typical application of node-level FL is Internet of Things (IoT) devices in a smart building <cit.>. Due to bandwidth constraints, it can be costly for each IoT device to communicate with the central server, whereas IoT devices in the same local area network can communicate very efficiently. As a result, the IoT devices form a graph with pairwise communication availability as edges. Another application is the smart city <cit.>, where clients are traffic sensors deployed on roads and linked to geographically adjacent sensors. Each device collects data and makes intelligent real-time decisions based on the observed road conditions and nearby devices, without waiting for the response of cloud servers.
In this subsection, we first introduce algorithms where the graph models communication constraints among clients; in these works, there is no central server, and clients can only exchange information along edges. We then introduce algorithms where the graph models the relationships between clients' distributions; in these works, although a central server is available, the graph among clients models distributional similarity or dependency, which can potentially contribute to model performance.
§.§.§ Graph as Communication Network
Traditional FL relies on a central server to enable communication among clients. Each client trusts the central server and uploads their model update to the server. However, in many scenarios, a trusted central server may not exist. Even when a central server exists, it may be expensive for clients to communicate with the server. Therefore, serverless FL (a.k.a. peer-to-peer FL) has been studied to relieve communication constraints.
The standard solution for serverless FL is fully decentralized FL <cit.>, where each client averages its model parameters only with those of its neighbors. D-FedGNN <cit.> uses these techniques to train GNN models. SpreadGNN <cit.> generalizes this framework to personalized FL, where each client has non-IID data and a different label space.
§.§.§ Graph as Distribution Similarities
When a central server is available, a graph of clients may still be beneficial if it models distributional relationships among clients. When edges link clients with highly similar distributions, parameter sharing along edges can improve model performance for both endpoints. When edges link clients with data dependency, information exchange along edges can even provide additional features for inference.
FedGS <cit.> models the data correlations of clients with a data-distribution-dependency graph and improves the unbiasedness of the client sampling process. Meanwhile, SFL <cit.> assumes a pre-defined client relation graph stored on the server, and client-centric model aggregation is conducted along the relation graph's structure, as in the sketch below. GraphFL <cit.> considers client-side information to encourage similar clients to have similar models. BiG-Fed <cit.> applies graph convolution on the client graph, so each client's prediction can benefit from its neighbors with highly correlated data. Finally, <cit.> designs a client-sampling technique that considers both communication cost and distribution similarity.
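To make the client-centric aggregation idea concrete, this sketch performs an adjacency-weighted parameter average for each client (the dense similarity matrix and per-client state dictionaries are illustrative assumptions; the exact aggregation rules of SFL and related methods differ in detail):

```python
import torch

def graph_weighted_aggregate(client_states, adj):
    """Client-centric aggregation along a client relation graph: client i's
    new model is a normalized, adjacency-weighted average of its neighbors'
    (and its own) parameters. `adj` is a dense [N, N] similarity matrix with
    self-loops; this sketch abstracts over which method defines it."""
    weights = adj / adj.sum(dim=1, keepdim=True)  # row-normalize the graph
    new_states = []
    for i in range(len(client_states)):
        agg = {k: sum(weights[i, j] * client_states[j][k]
                      for j in range(len(client_states)))
               for k in client_states[i]}
        new_states.append(agg)
    return new_states
```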
Finally, we summarize the official implementation of FL algorithms and useful repositories in Table <ref>.
§.§ Challenges and Future Opportunities
In this part, we present several limitations in current works and provide open problems for future research.
§.§.§ Model Heterogeneity for Graph-Level FL
In previous works on graph-level FL, although each FL client usually has a different data distribution, it is usually assumed that the model architecture is shared across all clients. However, the optimal architecture for different clients can differ. For example, a well-known issue in GNNs is the over-smoothing problem: when the number of graph convolutional layers exceeds the diameter of the graph, GNN models may learn similar representations for all nodes in the graph, which harms performance. When FL clients hold graphs of substantially different sizes, the optimal depth of the GNN model is likely different for each of them.
§.§.§ Avoiding Cross-Client Transmission for Sub-graph-Level FL
Most previous subgraph-level FL algorithms rely heavily on direct information exchange along cross-client edges. While such operations are natural variants of graph convolution, they also raise privacy concerns. Moreover, unlike traditional FL, where each client downloads aggregated model parameters that reflect the population, feature exchange along edges can expose information about individuals. It would be beneficial if cross-client transmission could be avoided without greatly degrading the model.
§ ENVISIONING
In this section, we analyze the current developments and limitations of privacy-preserving graph machine learning, and explain the necessity of combining them. In addition, we identify a number of unsolved research directions that could be addressed to improve the privacy of graph machine learning systems.
§.§ Limitation of Current Techniques
In the previous two sections, we introduced privacy-preserving graph data generation and computation, respectively. However, both techniques have their own limitations.
* For privacy-preserving graph generation, while it can provide good privacy protection for graph data, it also has a significant drawback in model utility. The privacy-preserving techniques applied during data generation are not designed for specific machine learning tasks and may hurt the utility of the resulting model. For example, consider a graph with four nodes a, b, c, and d. The nodes a and b have a positive label, while c and d have a negative label. Switching the edges from (a, b), (c, d) to (a, c), (b, d) does not change the degree distribution of the graph, but it changes the graph from a homophilous graph to a heterophilous graph, i.e., one where edges are more likely to link two nodes with different labels. This change can harm the performance of many GNN models, which are designed to work well with homophilous graphs <cit.>. It is therefore important to consider the downstream machine learning tasks when designing privacy-preserving techniques for graph data.
* For privacy-preserving graph computation, while FL can avoid the transmission of raw data, it has been shown that transmitting raw model parameters or gradients may not provide enough privacy, as attackers can use the gradient or model update to reconstruct private data <cit.>. Moreover, many subgraph-level and node-level federated learning algorithms require the transmission of hidden representations, which can also leak private information. Therefore, protecting the raw data from being reconstructed is essential to federated learning systems.
§.§ Combination of Privacy-Preserving Graph Data Generation and Computation
To address the limitations of current privacy-preserving techniques, it is essential to combine privacy graph data generation with the graph federated learning frameworks, as shown in Figure <ref>. This approach can provide an effective solution to the privacy preservation issues of graph machine learning models.
Specifically, the generated synthetic data is used instead of the real data during the training process. This means that even if the transmitted information is decrypted, it is just from the generated synthetic data and not the real data. The synthetic data can be generated in such a way that it preserves the statistical properties of the original data while ensuring privacy preservation. This can be achieved using various techniques, including differential privacy, homomorphic encryption, and secure multi-party computation.
The combination of privacy graph data generation and graph federated learning frameworks has several benefits. First, it ensures privacy preservation during the training process by using synthetic data. Second, it enables the transfer of graph machine learning model parameters rather than embedding vectors or other information. This can improve the accuracy and efficiency of the model. Finally, it provides a robust defense against privacy attacks and reverse-engineering, as the transmitted information is just from the generated synthetic data and not the real data.
§.§ Future Directions
Combining privacy-preserving data generation and computation is a promising approach to protect individual privacy while maintaining model utility in machine learning. However, it also poses several challenges and possible future directions.
§.§.§ Distribution of Privacy Budget
When combining privacy-preserving data generation with computation, noises are added to both raw data and model parameters. However, it is still unclear how to distribute the privacy budget between data generation and computation in a way that optimizes the privacy-utility trade-off. In this approach, noises are added to the graph data during data generation and to the model parameters during data computation (i.e., federated learning), which results in an overall reduction in accuracy. However, while the privacy analysis for data generation is directly defined on the data space, the privacy analysis for federated learning requires transforming the change on parameter space back to data space. Such transformation requires estimating the sensitivity of a machine learning algorithm (i.e., how the change of a data point affects the learned parameters), which is only loosely bounded in current works <cit.>. A more precise analysis of privacy is required to better understand the impact of privacy budget allocation on the overall privacy-utility trade-off.
§.§.§ Parameter Information Disentanglement
Another future challenge when combining privacy-preserving data generation and computation is the disentanglement of task-relevant and task-irrelevant information. Currently, the noise added to the model parameters is isotropic, meaning that task-relevant and task-irrelevant information are equally protected. However, not all information is equally important for model utility. If we can identify which information has a significant influence on model performance, we can distribute more privacy budget to this information while allocating less privacy budget to task-irrelevant information. This can result in a better privacy-utility trade-off. Disentangling task-relevant and task-irrelevant information would require a more sophisticated analysis of model architecture and data characteristics to determine which features contribute most to model performance.
§ CONCLUSION
In this paper, we review research on privacy-preserving techniques for graph machine learning, from the data to the computation, considering situations where the data need to be shared or are banned from being transmitted. Specifically, for privacy-preserving graph data generation techniques, we first analyze powerful attackers and then introduce how corresponding protection methods are proposed to defend against them. For privacy-preserving graph data computation, we center on the federated learning setting and discuss how the general federated learning framework is applied to graph data, what potential challenges originate from non-IIDness, and how nascent research works address them. In the end, we analyze current limitations and propose several promising research directions.
§ ACKNOWLEDGEMENTS
This work is supported by the National Science Foundation (1947203, 2117902, 2137468, 1947135, 2134079, and 1939725), the U.S. Department of Homeland Security (2017-ST-061-QA0001, 17STQAC00001-06-00, and 17STQAC00001-03-03), DARPA (HR001121C0165), NIFA (2020-67021-32799), and ARO (W911NF2110088). The views and conclusions are those of the authors and should not be interpreted as representing the official policies of the funding agencies or the government.
Improving Factuality of Abstractive Summarization via Contrastive Reward Learning

I-Chun Chern, Zhiruo Wang, Sanjan Das, Bhavuk Sharma, Pengfei Liu, Graham Neubig
==================================================================================
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries. Code and human evaluation results will be publicly available at <https://github.com/EthanC111/factuality_summarization>.
§ INTRODUCTION
One major challenge in current abstractive summarization models is how to generate more factual summaries that are consistent with the source text <cit.>. Various approaches have been proposed to address this challenge, including augmenting the model input <cit.>, performing post-processing <cit.>, and modifying the learning algorithms <cit.>.
In particular, learning-based methods possess the advantage of not requiring modification to the existing model architecture or the addition of new modules.
In the meantime, with the growing interest in aligning learning objectives with evaluation criteria of interest, utilizing feedback of automatic evaluation metrics <cit.> or human preferences <cit.> as rewards for fine-tuning abstractive summarization models has gained substantial attention. These methods learn to optimize rewards using techniques such as reinforcement-learning (RL) <cit.>, minimum risk training (MRT) <cit.>, and contrastive reward learning (CRL) <cit.>.
Given the benefits of learning-based methods in improving the factuality of abstractive summarization, and recent advancements in factuality metrics for detecting factual inconsistencies in generated summaries, it is of interest to apply reward learning so that models learn from the feedback of factuality metrics and produce more factual summaries. We aim to investigate the following questions in this paper - Q1: Can contrastive reward learning effectively utilize existing factuality metrics to improve the factuality of abstractive summarization models? Q2: Can the improvement in factuality be reflected in human evaluation studies?
In this paper, we propose a contrastive reward learning framework that enables abstractive summarization models to directly learn from feedback of factuality metrics in a sample-efficient manner. In contrast to other contrastive learning frameworks <cit.>, our proposed framework does not rely on the complex construction of negative samples. Instead, similar to <cit.>, all candidate summaries used for contrastive learning are generated from pretrained sequence-to-sequence models <cit.> using diverse beam search <cit.>. Additionally, our framework also incorporates the use of quality metrics to provide more fine-grained information on the ranking (positive / negative) of candidate summaries. Specifically, we investigate learning from the rewards of two factuality metrics: BARTScore <cit.>
and DAE <cit.>. Through automatic and human evaluation studies, we demonstrate that our framework enables summarization models to generate significantly more factual summaries.
§ CONTRASTIVE LEARNING FROM FACTUALITY REWARDS
§.§ Contrastive Learning for Abstractive Summarization
Abstractive Summarization
Given a source document D, the summarization model learns a generative model g_θ that converts the source document D into a summary S:
S = g_θ(D)
MLE Loss
Given a training sample pair {D,S^r} consisting of a source document D and a reference summary S^r (note that S^r consists of L tokens, S^r = {s^r_1, ⋯, s^r_j, ⋯, s^r_L}), the MLE loss ℒ_mle maximizes the likelihood of the reference summary S^r given the source document D by minimizing the negative log-likelihood:
ℒ_mle = -log p_g_θ(S^r | D) = -∑_j = 1^Llog p_g_θ(s^r_j | D, s^r_<j)
where s^r_<j = {s^r_0, ⋯, s^r_j-1} and s^r_0 is a pre-defined start token.
Despite its effectiveness in enforcing generated summaries to align with the reference summaries, the MLE loss is not aware of the quality (evaluated by some quality metric M) of the generated summaries. To address this issue, we introduce a contrastive loss <cit.>.
Contrastive Loss
Given a training sample pair {D,S^r}, let S_i and S_j denote candidate summaries generated from a pre-trained model given D, ordered such that M(S_i) > M(S_j) ∀ i,j, i < j [
M could be reference-free (e.g., BARTScore, DAE) or reference-based (e.g., ROUGE) metric. If M is a reference-free metric, then M(S_i) = M(S_i, D) ; if M is a reference-based metric, then M(S_i) = M(S_i, S^r)], the contrastive loss is defined as:
ℒ_ctr = ∑_i ∑_j > imax (0, f(S_j) - f(S_i) + λ_ij)
Note that λ_ij = (j - i) ×λ is the rank difference between two candidates times a constant λ (usually set as 1) [The magnitude of contrastive loss can be directly regulated through the weight of contrastive loss γ, so we simply set λ equal to 1.] and that f(S) is the length-normalized estimated log-probability given by:
f(S) = ∑_t=1^llog p_g_θ(s_t|D, S_<t)/|S|^α
where α is a constant.
Intuitively, the contrastive loss penalizes any disagreement between the length-normalized estimated log-probability and the quality metric evaluation (i.e., cases where f(S_j) > f(S_i) but M(S_i) > M(S_j)). The quality metric M could be any evaluation criterion, including automatic evaluation metrics <cit.> or human preferences <cit.>.
Combined Loss
The combined loss used for fine-tuning is described by <ref>.
ℒ_com=ℒ_mle+γℒ_ctr
where ℒ_mle is the MLE loss given in <ref>, ℒ_ctr is the contrastive loss given in <ref>, and γ is the weight of contrastive loss. Summarization models fine-tuned with ℒ_com is referred as CRL-COM.
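A minimal PyTorch sketch of the contrastive loss ℒ_ctr above is given below (it assumes candidates are pre-sorted in decreasing order of the quality metric M, and that summed token log-probabilities and token counts are provided as float tensors):

```python
import torch

def contrastive_rank_loss(log_probs, lengths, alpha=1.0, lam=1.0):
    """Pairwise margin loss over candidate summaries, pre-sorted so that
    candidate i has a higher quality-metric score M than candidate j
    whenever i < j. `log_probs[i]` is the summed token log-probability of
    candidate i under the model; `lengths[i]` is its token count."""
    f = log_probs / lengths.pow(alpha)    # length-normalized scores f(S)
    loss = f.new_zeros(())
    n = f.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            margin = (j - i) * lam        # lambda_ij = (j - i) * lambda
            loss = loss + torch.clamp(f[j] - f[i] + margin, min=0)
    return loss
```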
§.§ Reward from Factuality Metrics
We use two factuality metrics as quality metrics M for use in the contrastive loss described in <ref>.
BARTScore <cit.>'s factuality score is calculated as the log-likelihood of the summary given the source calculated from a reference-free version of BARTScore.
DAE <cit.> is calculated as the softmax output of the least-factual dependency-arc inside the sentences in the summary.
These two metrics were chosen for relative computational efficiency, as they are evaluated many times in the training process.
[
In contrast, QA-based factuality metrics are computationally inefficient <cit.>. As a result, they are less feasible for use in reward-learning settings.
]
§ EXPERIMENTS
§.§ Experimental Setup
Driven by the two research questions presented in the introduction, we train two factuality-driven summarization models, namely CRL-COM (B) and CRL-COM (D), with contrastive reward learning using BARTScore and DAE as quality metrics, respectively. A baseline summarization model, CRL-COM (R), is trained with contrastive reward learning using ROUGE as the quality metric. Note that commonly used n-gram-based metrics, including ROUGE <cit.>, have been shown to correlate poorly with human evaluations, particularly on factuality <cit.>. Thus, we focus on evaluating the factuality of CRL-COM (B) and CRL-COM (D) relative to CRL-COM (R), with the hypothesis that CRL-COM (B) and CRL-COM (D) should be capable of generating more factual summaries than CRL-COM (R).
Datasets:
We use two abstractive summarization datasets – CNN/Daily Mail (CNNDM) dataset <cit.> and the XSUM dataset <cit.>. CNNDM summaries tend to be more extractive and are composed of multi-sentence summaries, while XSUM summaries are more abstractive and are composed of single-sentence summaries.
Models:
Following the setting outlined in <cit.>, we fine-tuned a pre-trained BART model <cit.> on the CNNDM dataset and a pre-trained PEGASUS <cit.> model on the XSUM dataset.
Implementation and Fine-tuning Details:
The combined loss (with the weight of the contrastive loss γ = 100) described in <ref> is used to fine-tune the pre-trained models. Following the few-shot fine-tuning paradigm of <cit.>, we sampled 1000 training samples from each dataset for few-shot fine-tuning. Constant learning rates of 10^-5 and 10^-4 were applied to the fine-tuning process for the CNNDM and XSUM datasets, respectively, to facilitate fast convergence. For each dataset, we fine-tuned three models using three different quality metrics: ROUGE (R), BARTScore (B), and DAE (D), designated as CRL-COM (R), CRL-COM (B), and CRL-COM (D), respectively.
During validation, we employed the same quality metric used for fine-tuning for early stopping.
Automatic Evaluation
Each model is evaluated on three metrics: ROUGE (with variants ROUGE-1, ROUGE-2, ROUGE-L), BARTScore, and DAE.
Human Evaluation
To objectively evaluate the factual consistency of the generated summaries from each model, we randomly sampled 100 samples from CNNDM and 200 samples from XSUM for human evaluation. We assess each summary from three different perspectives: Factuality (FAC), Coherence (COH), and Relevance (REL), with a particular emphasis on factuality. The assessment follows similar guidelines as in <cit.>. The evaluation guidelines provided to the annotators are listed in <ref>. An expert annotator is involved in the human evaluation studies.
§.§ Results and Analysis
Contrastive reward learning can enforce models to learn from feedback of factuality metrics
Driven by Q1, we observe that results from automatic evaluation presented in <ref> indicate that contrastive reward learning enables abstractive summarization models to develop in a direction that aligns with existing factuality metrics.
Learning from factuality metrics improves factuality of abstractive summarization.
Driven by Q2, we observe that results from human evaluation presented in <ref> indicate that on both datasets, CRL-COM (B) and CRL-COM (D) exhibit superior performance in terms of factuality compared to CRL-COM (R). This suggests that while learning from factuality metrics such as BARTScore and DAE may potentially result in sacrificing the performance of the models on ROUGE scores, the resulting models can generate more factually consistent summaries. In other words, summaries with higher BARTScore or DAE scores but lower ROUGE scores tend to be more factually consistent with the source article compared to those with lower BARTScore or DAE scores but higher ROUGE scores. This further supports the assertion that BARTScore and DAE are effective at capturing factual information.
Learning from factuality metrics did not sacrifice coherence and relevance.
According to human evaluations, the summaries generated by CRL-COM (B) and CRL-COM (D) showed comparable coherence and relevance to those generated by CRL-COM (R). This suggests that BARTScore and DAE have abilities comparable to ROUGE in terms of measuring coherence and relevance.
§ RELATED WORK
§.§ Factuality Metrics for Abstractive Summarization
Various factuality metrics assess the factual consistency between a summary and its corresponding source document. QA-based factuality metrics leverage question generation (QG) models to generate questions from the summary and question answering (QA) models to answer those questions, given both the source and summary <cit.>. Factuality is then evaluated based on the alignment between the answers from the source and summary. Another class of metrics, entailment-based factuality metrics <cit.>, evaluates whether all the information in the summary is entailed by the source document. Recent studies on leveraging pre-trained language model as evaluation <cit.> also achieve competitive performance on evaluating factuality.
§.§ Improving Factuality of Abstractive Summarization via Contrastive Learning
Several contrastive learning frameworks have been proposed to enable models to learn factuality from positive samples (such as reference summaries) and negative samples (such as edited reference summaries and system generated summaries). Examples include CLIFF <cit.> and CO2Sum <cit.>, which are similar in nature, although CO2Sum employs more sophisticated methods for negative sample construction.
§ CONCLUSION
In this work, we present a simple contrastive reward learning framework that enforces abstractive summarization models to learn from feedback of existing factuality metrics. Empirical studies demonstrate the effectiveness of this approach, showing that abstractive summarization models that learn from factuality metric feedback through contrastive reward learning can generate more factual summaries without sacrificing coherence or relevance. This suggests that further advancements in the reward learning paradigm and factuality metrics can facilitate the development of more factually consistent abstractive summarization models.
§ LIMITATIONS
While we have included two distinct datasets (CNNDM and XSUM) in our experiments, more non-news datasets could be included in future studies. Other possibilities for future work include comparing the capability of RL-based reward learning and contrastive reward learning in improving the factuality of abstractive summarization models.
§ ETHICS STATEMENT
Even though some of the investigated systems may achieve a high level of factuality on the CNNDM dataset, this does not guarantee that they can be used as off-the-shelf factual consistent summarization models. Thorough evaluation should be conducted before using these models in high-stakes settings to ensure their reliability.
§ ACKNOWLEDGEMENTS
We would like to thank Yixin Liu for helpful discussion on BRIO. We would also like to thank Tanya Goyal for helpful discussion on DAE.
|
http://arxiv.org/abs/2307.06114v1 | 20230712121353 | Infrared problem in quantum electrodynamics | [
"Paweł Duch",
"Wojciech Dybalski"
] | math-ph | [
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.04360v1 | 20230710061554 | Mean-field analysis of load balancing principles in large scale systems | [
"Illés Horváth",
"Márton Mészáros"
] | math.PR | [
"math.PR",
"60"
] |
Load balancing plays a crucial role in many large scale systems. Several different load balancing principles have been proposed in the literature, such as Join-Shortest-Queue (JSQ) and its variations, or Join-Below-Threshold. We provide a high level mathematical framework to examine heterogeneous server clusters in the mean-field limit as the system load and the number of servers scale proportionally. We aim to identify both the transient mean-field limit and the stationary mean-field limit for various choices of load balancing principles, compute relevant performance measures such as the distribution and mean of the system time of jobs, and conduct a comparison from a performance point of view.
§ INTRODUCTION
For large scale service systems, where service resources (e.g. computing capacity) are distributed to several service units, load balancing plays a crucial role in distributing the total load of the system to ensure better overall service for the incoming tasks (jobs).
There are many different types of load balancing principles. Static load balancing does not take into account the state of the system, instead aiming for a balanced distribution based purely on the incoming jobs. Static load balancing is in general easy to set up, requires minimal overhead communication and performs well when the incoming jobs have some regular patterns.
However, in most systems the incoming jobs have some level of random variability. This situation is generally better handled by load balancing policies which take into account the current state of the system. Scheduling decisions may be based on different types of information, depending on what is available. In general, one of the most important parameters is the current load of the servers, as it is generally desirable to maintain a balanced load among all servers. If available, further information taken into account may include any of the following:
* the servers may be heterogeneous, with faster and slower servers;
* job and server types may be important in case the servers are heterogeneous and certain servers can serve certain types of jobs more efficiently;
* job sizes may be used to compute current server load more precisely;
* in some cases, physical location may play a role;
* there may be bottlenecks other than computing capacity in the system (e.g. bandwidth).
In many real-life systems, such information may not be available, but even if it is, there is a tradeoff: a complicated load balancing policy that requires too much communication and computation may generate a significant overhead cost, slowing down the entire system. Hence it is in general desirable to stick to simple load balancing policies. In the present paper, we provide a mathematical framework that does not include communication overhead costs. Such aspects can be addressed in the modeling in several ways; however, these are highly scenario-dependent, and as such, we decided to keep the model high-level.
We will discuss load balancing policies based exclusively on the queue length of servers. Job types, physical location and other bottlenecks will not play a role. We allow a heterogeneous server cluster, where there are several different types of servers, and the model can also incorporate processor sharing, where a server can serve multiple jobs simultaneously.
The server cluster model of the present paper will be described by a density-dependent Markov population model. As the system size goes to infinity, the mean-field limit of density-dependent Markov population models has been examined in the literature for both the transient regime (up to a finite time horizon) and in the stationary regime.
The transient limit object is deterministic and can be described as the solution a system of ordinary differential equations (ODEs) in case the Markov transition rates are Lipschitz-continuous <cit.>, or as the solution of a differential inclusion in case the transition rates are discontinuous <cit.>. Overall, these results are relatively straightforward to apply for the model in the present paper.
For the stationary regime, for Lipschitz-continuous transition rates, it is known that in the mean-field limit, the stationary distribution of the finite system concentrates on the unique asymptotically stable solution (attractor) of the limit system of ODEs <cit.>. Similar results available for the discontinuous setting, but only in case the attractor lies inside a domain where the transition rates are continuous <cit.>. We are not aware of any general results in case the attractor is at a discontinuity point of the transition rates, which happens to be the case for several of the load balancing policies discussed in the present paper.
The contributions of the paper are the following:
* Providing a high-level mathematical framework for modelling load balancing systems that accommodates several different load balancing principles.
* Identification of the mean-field limit in both the transient and stationary regime.
* Computation of the mean service time and also the service time distribution in the stationary mean-field limit. Computation techniques need to be adapted for discontinuities; these modified formulas are, to the best of our knowledge, novel.
* Numerical comparison of the various load balancing principles via simulation and theoretical computations for the mean-field limit.
All of the above is carried out for a fairly general setting, where the server cluster can be heterogeneous, and we will also allow a varying service rate, depending on the number of jobs in a given server. We will focus mostly on first-in-first-out (FIFO) service principle, but note that all calculations are straightforward to derive for limited processor sharing (LPS), where a server can serve multiple jobs simultaneously.
Rigorous proofs are not the main focus of the paper. We do refer to relevant rigorous results from the literature in cases where they are available, but only provide heuristic arguments for the novel cases. That said, numerical analysis does support the heuristic computations of the paper.
The codes used for the simulations and analytic calculations throughout the paper are available at <cit.>.
The rest of the paper is structured as follows: the rest of this section is dedicated to an overview of load balancing in the literature (Section <ref>), and to the necessary mathematical background in queueing theory (Section <ref>) and population processes (Section <ref>). Section <ref> describes the general setup of the server cluster we are interested in. Section <ref> describes the various load balancing principles. Section <ref> contains numerical experiments and comparison of the various load balancing principles, and Section <ref> concludes the work. The Appendix addresses a few related questions not strictly part of the main body of work, and also some further details.
§.§ Load balancing principles
One of the classic dynamic load balancing policies is Join-Shortest-Queue (JSQ), where the incoming job is assigned to the server with the shortest queue (lowest number of jobs) <cit.>. The upside of this method is that it offers very even balancing for homogeneous server clusters. However, it requires up-to-date knowledge of all server states, which may require a significant communication overhead.
Due to this, several variants of JSQ have been in use: for JSQ(d), the incoming job is scheduled to the shortest queue from among d servers, selected at random. This offers less balanced load distribution, but also requires less communication. d=1 corresponds to random assignment with no load balancing, and d equal to the total number of servers corresponds to JSQ; as d is increased, it offers better balancing but also more overhead communication. Interestingly, already for d=2, the resulting load balancing policy has certain asymptotic optimality properties <cit.>, often referred to as the power-of-2 (or power-of-d) policies. As a consequence, d is often selected relatively low, such as d=2 or d=5.
For Join-Idle-Queue (JIQ), the incoming job is scheduled to an idle server at random; if there are no idle servers, the assignment is random among all servers. Once again, this offers less balanced load distribution and less communication overhead than JSQ, but, similar to JSQ(d), has some nice asymptotic optimality properties. Mean-field analysis has been carried out for JIQ in <cit.>.
Another related load balancing policy is Join-Below-Threshold (JBT), which associates a threshold with each server; servers below their threshold are considered available and servers at or above their threshold are full. Jobs will be dispatched to a server randomly from among all available servers. This policy again offers less balancing than JSQ, but still offers protection against overloaded servers, and requires communication only when a server switches between available and full. For a full mean-field analysis and cluster optimization of JBT, we refer to <cit.>.
§.§ Birth-death processes and queues
The jobs arriving to and leaving a server's queue can be modelled with a birth-death process (Markov-queue).
For technical simplicity, we resort to finite queues, with the maximal queue length denoted by B and state space of a single queue Ω = {0,1,2,…,B}.
We assume Markov arrivals, that is, jobs arrive according to a Poisson process, and Markov service, that is, the time it takes to serve a job (once service has started) is exponentially distributed.
There are multiple service principles. For First-In-First-Out (FIFO) service principle, the server always serves the first job of a queue, while the other jobs wait. Whenever the first job has finished service, the server immediately starts serving the next job in the queue. For Limited Processor Sharing (LPS), the server can work on multiple jobs simultaneously. The maximum number of jobs served simultaneously is called the multi-programming level (MPL); further jobs in the queue wait and enter service in a manner similar to FIFO. We allow the service rate to depend on the number of jobs in the queue (this is particularly relevant for LPS, where multiple jobs can be served jointly for more efficient service overall). The choice of service principle has no effect on the queue length changes (no matter which job is served, queue length decreases by 1), but it does affect the system time of individual jobs. We will mostly focus on FIFO.
§.§ Density-dependent population processes
In this section, we present mathematical background and framework for density-dependent Markov population processes.
A density-dependent Markov population process has N interacting components, each of which is in a state from a finite set of local states S. The global state of the system is defined as the total number of individuals in each state, that is,
a vector X^N∈{0,1,…,N}^|S| with X^N_1+…+X^N_|S|=N. The normalized global state of the system can be defined as x^N=X^N/N, so x^N∈ [0,1]^|S| with x^N_1+…+x^N_|S|=1.
Each component acts as a continuous time Markov chain. The rate of the transition from i ∈ S to j ∈ S is r_ij^N (for i ≠ j). The rates are assumed to be density-dependent, that is
r_ij^N = r_ij(x^N)
for some function r_ij:[0,1]^|S|→[0,∞]. In the classic setup defined by Kurtz <cit.>, the functions r_ij are usually assumed to be Lipschitz-continuous and independent of N. With this setup, x^N(t) is a continuous time Markov-chain. We define the mean-field equation of the system as the following:
d/dt v_i(t)=∑_j∈ S v_j(t)r_ji(v(t)), i∈ S,
where
r_ii:=-∑_j∈ S, j≠ ir_ij,
and
x_i^N(0)→ v_i(0) (for i=1,…, |S|), in probability as N→∞.
Lipschitz-continuity guarantees existence and uniqueness of the solution of (<ref>). The following result of Kurtz states mean-field convergence in the transient regime <cit.>:
Assume the r_ij (i,j∈ S) are Lipschitz-continuous and
x_i^N(0)→ v_i(0), i∈{1,…,|S|}, in probability.
Then for any T>0 and ϵ>0 we have
lim_N →∞P(sup_t ∈ [0,T]‖𝐱̅^N(t) - 𝐯(t)‖ > ϵ) = 0.
Kurtz also proved that the standard deviation of x^N is of order 1/√(N) <cit.>.
An important concept related to Theorem <ref> is asymptotic independence, also known as propagation of chaos, stating that as N→∞, the evolution of two distinct queues is asymptotically independent. This is due to the fact that the evolution of a queue depends only on the global state, which is asymptotically deterministic.
We also have stationary mean-field convergence.
Given the following assumptions:
* r_ij are Lipschitz-continuous,
* the Markov process x^N(t) has a unique stationary distribution π^N
for each N, and
* (<ref>) has a unique stable attractor (ν_1,…,ν_|S|),
we have that the probability measure π^N on S converges in probability to the Dirac measure concentrated on ν.
Theorems <ref> and <ref> have been generalized in several directions during recent years. Benaïm and Le Boudec elaborated a framework applicable for a wider range of stochastic processes, which also allows the r_ij functions to have a mild dependency on N <cit.>.
The condition on Lipschitz-continuity can also be weakened. For discontinuous r_ij's, (<ref>) turns into a differential inclusion. A formal setup for differential inclusions is quite technical, and is omitted from the present paper. For a fully detailed setup, we refer to <cit.>, specifically Theorems 4 and 5, and <cit.>, Theorem 3.5 and Corollary 3.9 for a corresponding version of Theorem <ref>.
For a corresponding version of Theorem <ref> for discontinuous transition rates, we refer to <cit.>, where the main additional condition is that the unique attractor lies inside a domain where the r_ij are continuous.
The applicability of Theorems <ref> and <ref> will be addressed more in Section <ref>.
From Theorem <ref> it also follows that
lim_N→∞E(π^N)= ν,
so ν can be used as an approximation for E(π^N) for large N. Here π^N is basically an |S|-dimensional random vector, which converges in distribution to a constant |S|-dimensional vector. The limit can be interpreted as a distribution on S, and is the stable attractor ν.
§ SERVER CLUSTERS
The server cluster model examined in the present paper consists of N servers, each with a finite buffer, and a single common dispatcher. Jobs arrive to the dispatcher according to a Poisson process with rate Nλ (that is, the average arrival rate is λ per server). Each arriving job is instantly dispatched to one of the N servers; that is, the dispatcher maintains no queue.
The cluster may have K different server types. We assume K is fixed, independent from N.
The servers within each type are identical. Buffer sizes are denoted by B^(k) for each type k∈{1,…,K}. We assume service times are exponentially distributed; for each server type, the service rate can be constant or it may depend on the current queue length of the server.
Service rates are denoted by μ_i^(k), where i∈{ 0,1,…,B^(k)} is the queue length, and k∈{ 1,2,…,K} denotes the type of the server. For a given k∈{1,…,K}, μ_0^(k),…,μ_B^(k)^(k) is also referred to as the service rate curve. (μ_0^(k)=0, but we still include it in the notation.)
For each service rate curve, it is natural to assume that the total rate increases with the queue length, but the per-job rate decreases with the queue length:
μ_1^(k)≤μ_2^(k)≤μ_3^(k)≤…, μ_1^(k)≥μ_2^(k)/2≥μ_3^(k)/3≥… k∈{ 1,2,…,K}
Due to the finite buffer sizes, data loss may occur whenever a job is dispatched to a full queue. The probability of a job loss will be typically very low (due to load balancing), but it is still something that we will address in due course.
The server cluster is a density-dependent population process, where the state of a server is simply the number of jobs in its queue. The global state will be denoted by
X_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K ),
where X_i^(k),N(t) is the number of servers with i jobs in its queue at time t. We will mostly use its normalized version
x^N(t)=x_i^(k),N(t), ( 0≤ i≤ B^(k), 1≤ k≤ K),
where
x_i^(k),N(t)=X_i^(k),N(t)/N.
The number of servers of type k is denoted by N_k and the ratio of each server type is denoted by
γ_k^N=N_k/N, k=1,…,K.
γ_k^N may depend on N, but we will assume they converge to some fixed values γ_k as N→∞. We also want the system to be stable, so
λ < ∑_k=1^K γ_k^N μ_B^(k).
(Actually, due to the finite buffer size assumption, the system is technically always stable, but we will nevertheless assume (<ref>).)
The evolution of x^N(t) can be formally defined using Poisson representation. Let
P_i→ (i+1),k(t), 0≤ i≤ B^(k)-1, k=1,…,K
P_i→ (i-1),k(t), 1≤ i≤ B^(k), k=1,…,K
denote independent Poisson processes with rate 1. P_i→ (i+1),k(t) corresponds to arrivals to queues of type k with length i, and P_i→ (i-1)(t) corresponds to jobs leaving queues of type k with length i.
The Poisson representation of x^N(t) is
x_i^(k),N(t)= 1/N P_(i-1)→ i,k(N ∫_0^t λ f^(k)_i-1(x^N(s))ds)
-1/N P_i→ (i+1),k(N ∫_0^t λ f^(k)_i(x^N(s))ds)
+1/N P_(i+1)→ i,k(N ∫_0^t μ^(k)_i+1 x_i+1^(k),N(s)ds)
-1/N P_i→ (i-1),k(N ∫_0^t μ^(k)_i x_i^(k),N(s)ds),
where f_i^(k)(x^N(t)) is the probability of a new arriving job to enter a queue with length i of type k.
The
{f_i^(k)(x^N(t)):0≤ i≤ B^(k), k=1,…,K}
functions are going to be collectively called the dispatch functions. The dispatch functions depend on the load-balancing principle, which will be addressed later. Formally, f_i^(k) are defined on the normalized state x^N(t), which are all contained in the domain
{x:x∈ℝ^∑_k=1^K (B^(k)+1), x_j^(k)≥ 0, ∑_k=1^K ∑_j=0^B^(k)x_j^(k)=1}.
The four possible changes in the number of queues of length i which appear in (<ref>) correspond to:
* a job arriving to a queue of length i-1;
* a job arriving to a queue of length i;
* a job leaving a queue of length i+1;
* a job leaving a queue of length i.
On the border of the domain (<ref>), certain changes cannot occur. There is no service in empty queues:
μ_0^(k)=0 (k=1,…,K),
and no arrival to full queues:
f^(k)_B^(k)(.)≡ 0 (k=1,…,K).
We are interested in server clusters of various N sizes and especially the limit object as N→∞, that is, the mean-field limit (in accordance with Section <ref>). We first define the general mean-field equations corresponding to (<ref>):
v^(k)_i(t)= v^(k)_i(0)+∫_0^t λ f^(k)_i-1(v(s))ds -∫_0^t λ f^(k)_i(v(s))ds
+∫_0^t μ^(k)_i+1 v_i+1^(k)(s)ds -∫_0^t μ^(k)_i v_i^(k)(s)ds
in integral form, or, equivalently,
d/dt v_i^(k)(t)=λ f^(k)_i-1(v(t))-λ f_i^(k)(v(t))
+μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k)_i(t)
in differential form. An empty initial cluster corresponds to the initial condition
v_i^(k)(0)=
{[ γ_k for i=0,; 0 otherwise. ].
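As an illustration, the transient system (<ref>) is straightforward to integrate numerically. The following is a minimal Python sketch (the language of our own codes) for a homogeneous cluster under Random assignment; all parameter values are placeholders chosen purely for illustration.

```python
# A minimal sketch of integrating the mean-field equations for a
# homogeneous cluster (K = 1); the dispatch function encodes the
# load balancing principle (Random assignment here).
import numpy as np
from scipy.integrate import solve_ivp

B = 10                              # buffer size
lam = 0.95                          # arrival rate per server
mu = np.array([0.0] + [1.0] * B)    # service rate curve mu_0, ..., mu_B

def f_random(v):
    """Dispatch probabilities for Random assignment: f_i(v) = v_i, f_B = 0."""
    f = v.copy()
    f[B] = 0.0                      # no arrivals to full queues
    return f

def rhs(t, v):
    f = f_random(v)
    dv = np.zeros(B + 1)
    for i in range(B + 1):
        dv[i] = (lam * (f[i - 1] if i > 0 else 0.0)          # arrival to length i-1
                 - lam * f[i]                                # arrival to length i
                 + (mu[i + 1] * v[i + 1] if i < B else 0.0)  # departure from length i+1
                 - mu[i] * v[i])                             # departure from length i
    return dv

v0 = np.zeros(B + 1); v0[0] = 1.0   # initially empty cluster
sol = solve_ivp(rhs, (0.0, 50.0), v0)
print(sol.y[:, -1])                 # approximate stationary distribution nu
```

Running the solver to a large time horizon also yields a numerical approximation of the stationary point ν; this is how ν can be obtained when no closed form is available (see the discussion below).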
Theorem <ref> applies to this system whenever the f_i^(k) functions are Lipschitz-continuous. It turns out that the conditions of the general version of Theorem <ref> are mild enough so that transient mean-field convergence holds for all the discontinuous choices of f_i^(k) in the present paper, but this is not checked rigorously.
For the stationary case, we denote the stationary distribution
ν=(ν_i^(k)),i=0,…,B^(k), k=1,…,K
(similar to the notation of Section <ref>). Theorem <ref> applies whenever f_i^(k) are Lipschitz-continuous. In the discontinuous setting, the most relevant question is whether the f_i^(k) functions are continuous at the unique fixed point ν or not. If ν lies inside a region where f^(k)_i are Lipschitz-continuous, then the conclusion of Theorem <ref> applies. However, when the f_i^(k) functions are discontinuous at ν, Theorem <ref> does not apply; in fact, little is known in this case rigorously. Based on this, it makes sense to distinguish the following two cases:
* the functions f_i^(k) are Lipschitz-continuous at ν, or
* the functions f_i^(k) are discontinuous at ν.
When the functions f_i^(k) are Lipschitz-continuous at ν, the equations for the mean-field stationary distribution can be obtained from (<ref>) by setting d/dt v^(k)_i(t)=0:
0=λ f^(k)_i-1(v(t))-λ f^(k)_i(v(t))
+μ^(k)_i+1 v^(k)_i+1(t) - μ^(k)_i v^(k)_i(t)
i∈{1,…,B^(k)-1} , k∈{ 1,…,K }
which are equivalent to the dynamic balance equations
μ^(k)_i ν^(k)_i =λ f^(k)_i-1(ν), i∈{1,…,B^(k)} , k∈{ 1,…,K } .
We also have equations for the ratio of each server type:
∑_i=0^B^(k)ν^(k)_i=γ_k, k∈{1,…,K }.
(<ref>) + (<ref>) provide algebraic equations for ν.
We also propose another approach to obtain ν numerically, by solving the transient equations (<ref>) and taking the solution at a large enough point in time. (This assumes convergence to a single asymptotically stable solution, which we do not aim to prove rigorously.)
When the f^(k)_i are discontinuous at ν, more considerations are needed to derive the dynamic balance equations. This will be addressed separately for each load balancing principle.
Further remarks.
The assumption that both arrival and service are Markovian means that the entire system is a Markov (population) process, which keeps the setup fairly simple. Interestingly, the same mean-field limit would be obtained for any arrival process as long as the arrivals average out in the mean-field limit; to be more precise, for any arrival process for which the Functional Strong Law of Large Numbers holds (see e.g. Theorem 3.2.1 in <cit.>).
In case the monotonicity condition (<ref>) does not hold, mean-field convergence may fail. <cit.> contains specific examples where (<ref>) has multiple fixed points; stable fixed points correspond to quasi-stationary distributions of the population process for any finite N. The solution of (<ref>) will converge to one of the stable fixed points (depending on the initial condition). However, for any finite N, the population process will spend very long periods of time near one of the quasi-stationary points, switching between these points infinitely often.
§.§ Mean system time
A wide variety of parameters can be considered to describe the efficiency of such a system. A natural choice is the mean system time: the average time a job spends in the system between its arrival and service. We aim to calculate the mean system time H in the stationary mean-field regime. We note that the mean system time is a somewhat artificial object here since technically there are no individual jobs in the mean-field limit. It may be helpful to think of the mean-field limit as the case when N is extremely large.
One way to compute H is via Little's Law
H=L/λ_e,
where L is the mean queue length in the system, and λ_e is the effective arrival rate (which excludes jobs not entering the system due to job loss). From the mean-field stationary distribution ν, L is easily computed, while λ_e depends on the load balancing policy, but is typically also straightforward to compute. Little's law can actually be applied to each server type separately for more detailed information; this is addressed in Appendix <ref>.
Here we propose a different method to compute H, which gives even more detailed information, and will be useful later on. Let H_i,j^(k) denote the mean remaining system time (time until service completion) for a job that is in position i of a queue of type k with j jobs total (so 1≤ i ≤ j ≤ B^(k), 1≤ k≤ K).
In the case of constant service rates, H^(k)_i,j= i/μ^(k) holds. For non-constant service rate curves however, the service rate may change due to later arrivals, so we need to keep track of both the length of the queue and the position of the job within it. We will derive a system of linear equations using total expectation and the Markov property. For simplicity, we assume FIFO service principle in the following calculations, but due to Little's law, this assumption does not affect the value of H.
The mindset is that we are following a tagged job at position i of a queue of type k with total queue length j, and the equations are based on possible changes in the queue, with the environment fixed due to the stationary mean-field regime.
H_i,j^(k) = 1/λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)+
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_i,j+1^(k)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1),
H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)),
H_1,j^(k) =1/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)+λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H_1,j+1^(k)
(1≤ j≤ B^(k)-1),
H_1,B^(k)^(k) =1/μ_B^(k)^(k).
(<ref>) makes use of the standard one step argument. We focus on a single queue of a given type k in the mean-field limit while assuming the environment to be stationary, and look for the next possible change in that queue. Jobs arrive to type k servers of queue length j with a rate of Nλ f_j^(k)(ν), and each job will be sent to one of Nν_j^(k) servers, so the arrival rate at a specific queue will be
Nλ f_j^(k)(ν)/Nν_j^(k)=λf_j^(k)(ν)/ν_j^(k),
while the service rate is μ_j^(k), so the rate of any change for a queue of length j is λf_j^(k)(ν)/ν_j^(k)+μ_j^(k). The change will either increase or decrease the length of the queue by 1, and we can apply total expectation.
For full queues (j=B^(k)), arrival is not possible, that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K.
In order to solve (<ref>), we first obtain the mean-field stationary distribution ν. ν can be calculated from either the balance equations (<ref>) when possible, or by numerically solving the transient mean-field equations (<ref>) and setting t large enough. Once ν is obtained, (<ref>) is just a system of linear equations for H_i,j^(k), which can actually be solved separately for each k for 1≤ k ≤ K. Once (<ref>) is solved, the mean system time H is just a linear combination of the values H_j,j^(k) according to the probabilities with which a job will be scheduled to a queue of length j-1 of a k-type server, that is,
H=1/∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H_j,j^(k).
The normalizing factor in (<ref>) addresses job loss, as we only want to consider the mean system time of jobs which actually enter the system. Job loss probability is equal to
1-∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν).
(<ref>) and (<ref>) are only valid if the dispatch functions f_i^(k) are continuous at ν. In other cases, we may need to tweak the formulas. We will provide the corresponding versions of (<ref>) and (<ref>) on a case-by-case basis whenever the functions f_i^(k) are discontinuous at ν. These versions will be heuristic in the sense that no formal rigorous proof will be provided, but the results nevertheless agree with the results from simulations.
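To make the procedure concrete, here is a hedged Python sketch of solving (<ref>) by backward substitution for a homogeneous cluster (K = 1) in the continuous case; the arrays nu and fval = f(ν) are assumed to come from the stationary computation above, and the code mirrors the structure of the recursion rather than any particular optimized implementation.

```python
# A sketch of solving the mean-system-time recursion by backward
# substitution; assumes nu and fval = f(nu) are already computed.
import numpy as np

def mean_system_time(lam, mu, nu, fval, B):
    # a[j] = lam * f_j(nu) / nu_j, the per-queue arrival rate at length j
    a = np.array([lam * fval[j] / nu[j] if nu[j] > 0 else 0.0
                  for j in range(B + 1)])
    H = np.zeros((B + 1, B + 1))          # H[i, j] for 1 <= i <= j <= B
    for i in range(1, B + 1):
        # full queues receive no arrivals
        H[i, B] = 1.0 / mu[B] + (H[i - 1, B - 1] if i >= 2 else 0.0)
        for j in range(B - 1, i - 1, -1):
            total = a[j] + mu[j]
            H[i, j] = (1.0 + a[j] * H[i, j + 1]
                       + (mu[j] * H[i - 1, j - 1] if i >= 2 else 0.0)) / total
    # average over the entry position of an accepted job
    w = np.array([fval[j - 1] for j in range(1, B + 1)])
    return np.dot(w, [H[j, j] for j in range(1, B + 1)]) / w.sum()
```

For heterogeneous clusters the same routine can be applied to each server type k separately, as noted above, and the results combined according to (<ref>).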
§.§ System time distribution
In this section, we calculate the system time distribution for a random job. Here, the service principle is actually important; we will present the calculation for FIFO service principle here. The calculations need to be modified for LPS service principle; the corresponding equations are provided in Appendix <ref>.
Let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. Its Laplace-transform is defined as
H̃_i,j^(k)(s)=∫_0^∞ h_i,j^(k)(t)e^-stdt.
The following system of equations is the corresponding version of (<ref>) for the Laplace-transforms instead of the means. Total expectation also applies to Laplace-transforms, and we use the fact that the Laplace-transform of the unit mass at 0 is 1 and the Laplace-transform of the density λ e^-λ t is λ/(s+λ) to obtain
H̃_i,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_i-1,j-1^(k)(s)) (2≤ i≤ j≤ B^(k)),
H̃_1,j^(k)(s) = λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)/s+λ f_j^(k)(ν)/ν_j^(k)+μ_j^(k)(
λf_j^(k)(ν)/ν_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)/λf_j^(k)(ν)/ν_j^(k)+μ_j^(k)) (1≤ j≤ B^(k)).
The corresponding version of (<ref>) is
H̃(s)= ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s).
Once again, (<ref>) and (<ref>) are valid when the functions f_i^(k) are continuous at ν. In other cases, we may need to tweak the formulas on a case-by-case basis.
The system time distribution can then be computed in the following manner:
* We first compute the mean-field stationary distribution ν. This can be done either by solving the balance equations (<ref>), or by numerically solving the mean-field transient equations (<ref>), and setting a large enough t.
* Once ν is available, (<ref>) is a system of linear equations for H̃_i,j^(k)(s) that is straightforward to solve.
* Then H̃(s) is computed from (<ref>).
* Finally, H̃(s) is transformed back to time domain.
Due to (<ref>), H̃(s) is a rational function, whose inverse Laplace transform can be computed numerically. For numerical inverse Laplace transformation methods, we refer to <cit.>.
We note that this approach to compute H̃(s), while explicit, has its limitations, as the formula for H̃(s) can get complicated for even moderately large K and B^(k) values. We address the feasibility further in Section <ref>.
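As a small self-contained illustration of the last step, the snippet below numerically inverts a Laplace transform with mpmath's invertlaplace routine (one of several numerical inversion options); the transform used is a stand-in (the pdf of an Exp(μ) service time), not a solved H̃(s).

```python
# Numerical inverse Laplace transform; H_tilde is a placeholder.
import mpmath

mu = 1.0
H_tilde = lambda s: mu / (s + mu)            # stand-in for the solved H~(s)
for t in (0.5, 1.0, 2.0):
    h = mpmath.invertlaplace(H_tilde, t, method='talbot')
    print(t, h, mu * mpmath.exp(-mu * t))    # compare to the exact pdf
```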
Job losses occur only upon arrival, that is, all jobs that actually enter the system will be served, so h_i,j^(k)(t) is a proper probability density function with
∫_0^∞ h_i,j^(k)(t) dt=1.
However, if
∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)<1,
then H̃(s) is the Laplace-transform of a nonnegative function whose integral is equal to ∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν), the deficit
1-∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)
being the job loss probability (recall that f^(k)_B^(k)≡ 0, so the mass directed at full queues does not appear among the f^(k)_j-1 terms), so in this sense, job losses are included in (<ref>). The corresponding normalized version of (<ref>) is
1/∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)∑_k=1^K∑_j=1^B^(k) f^(k)_j-1(ν)H̃_j,j^(k)(s),
which is the Laplace-transform of a proper pdf whose integral is 1.
Depending on the load balancing principle, job losses may or may not be possible in the mean-field limit. This will be addressed specifically for each load balancing principle (For a finite system, job losses are always possible due to the finite buffers and fluctuations in either the job arrival or service speed.)
§ LOAD BALANCING PRINCIPLES
The load balancing principle describes the method the dispatcher uses to distribute the arriving jobs between the servers. It is quite important in large scale systems where the resources such as computing capacity are distributed between a large number of individual servers, and can make a big difference in the efficiency of the system.
The general goal of load balancing is to avoid long queues, directing incoming jobs to shorter queues instead.
There are several load balancing principles in use. Static policies do not consider the state of the system, only focusing on the incoming jobs. One example would be the round-robin load balancing policy, where incoming jobs are directed to the next server cyclically. Static load balancing principles are generally easy to operate, as they require minimal communication with the servers. Out of the principles observed in this paper, Random assignment falls into this category.
Dynamic principles, which take into account the current state of the system, can be more efficient. In real clusters, there is a trade-off: complicated policies require more communication and computation, generating a higher overhead communication cost, but provide better balancing. That said, in the mathematical framework we present, the cost of communication overhead is not modeled. Including the cost of overhead communication to provide an analytical framework for more realistic models is subject to further research.
In some systems it may be possible to reassign jobs that have been already assigned to new servers. It might also be possible that several servers “team up” to serve a single job. In our setting, we do not explore these options, and stick to a scenario where all jobs are assigned to a single server immediately upon arrival. On the other hand, in addition to the usual FIFO service principle, the framework does allow for limited processor sharing (LPS), where a single server can serve multiple jobs simultaneously.
In this paper we will examine 5 load balancing principles:
* Random assignment, where jobs are distributed randomly. With this principle, there is no actual load balancing. This principle will serve mostly as a baseline for comparison.
* Join-Idle-Queue, where jobs are directed to idle queues if possible. A relatively recent idea <cit.>, further explored in <cit.>.
* Join-Shortest-Queue, where jobs are directed to the server with the fewest number of jobs waiting in queue. One of the earliest load balancing policies that has been widely used for decades <cit.>. It provides very even balancing, but at the cost of high overhead communication, as the dispatcher needs to keep track of the queue length in every single server at all times.
* Join-Shortest-Queue(d), where jobs are directed to the server with the fewest number of jobs waiting in queue from among d servers selected randomly. Also referred to as power-of-d, this is a version of JSQ that aims to reduce communication overhead at the cost of less strict balancing. It has been thoroughly explored, and has certain asymptotical optimality properties already for d=2 <cit.>.
* Join-Below-Threshold, where jobs are directed to servers with a queue length below a prescribed threshold <cit.>.
All of the above principles are based on natural intuitions that aim towards directing jobs to shorter queues, but they differ in the details and execution of doing so. In this section, we overview these load balancing principles from the literature. We present a high-level mathematical framework based on the Poisson representation of Section <ref> that is applicable to all of them, with the only difference being the f_i^(k)(.) functions.
For each load balancing policy, we identify f_i^(k)(.), then write the mean-field equations corresponding to (<ref>). We also identify the mean-field stationary distribution ν whenever available explicitly.
In case the f_i^(k)(.) functions are discontinuous at ν, we also rewrite the formulas (<ref>) and (<ref>) so that they can be used to compute the mean system time, and rewrite the formulas (<ref>) and (<ref>) for system time distribution.
§.§ Random assignment
This is the most simple principle that we observe, and it does not lead to any balancing. With this setup the queues basically operate, and thus can be analyzed independently of each other. For random assignment,
f_i^(k)(x)=x^(k)_i, k∈{ 1,…,K} ,
and accordingly, the mean-field equation is
v_i^(k)(t)= v_i^(k)(0)+∫_0^t λ v^(k)_i-1(s)ds -∫_0^t λ v^(k)_i(s)ds
+∫_0^t μ^(k)_i+1 v^(k)_i+1(s)ds -∫_0^t μ^(k)_i v^(k)_i(s)ds.
The mean-field balance equations, obtained from (<ref>), are
μ_i^(k)ν_i^(k)=λν_i-1^(k), k∈{ 1,…,K} , i∈{ 1,…,B^(k)} .
Solving (<ref>) gives the mean-field stationary distribution
ν^(k)_i=c_k∏_j=1^iλ/μ^(k)_j, i∈{ 0,…,B^(k)} ,
with the c_k's coming from (<ref>). This is in accordance with the queues being independent.
Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is possible for Random assignment, but is taken into account by the formulas (<ref>) and (<ref>).
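Since everything is explicit here, a short sketch suffices to evaluate the closed form (<ref>) numerically (parameters illustrative only):

```python
# Closed-form stationary distribution for Random assignment, K = 1:
# nu_i proportional to prod_{j=1}^{i} lam / mu_j; normalization gives c_k.
import numpy as np

def random_stationary(lam, mu, B):
    nu = np.ones(B + 1)
    for i in range(1, B + 1):
        nu[i] = nu[i - 1] * lam / mu[i]
    return nu / nu.sum()

print(random_stationary(0.95, [0.0] + [1.0] * 10, 10))
```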
§.§ Join-Idle-Queue
For Join-Idle-Queue (JIQ), incoming jobs are assigned to an idle server at random. If none of the servers are idle, a server is selected at random.
For JIQ, using the notation
y_0=∑_k=1^K x_0^(k),
we have
f^(k)_i(x)=
{[ x_i^(k)/y_0 if i=0, y_0 >0,; 0 if i>0, y_0 >0,; x_i^(k) if y_0=0. ].
This system has been addressed in <cit.> for constant service rate curve and a homogeneous cluster.
The structure of the mean-field stationary distribution ν depends on the relation between λ and ∑_k=1^K γ_k μ_1^(k). We address three cases separately.
§.§.§ JIQ, subcritical case
When
λ<∑_k=1^K γ_k μ_1^(k),
there will always be idle queues in the mean-field stationary limit, so all jobs will be directed to idle queues. ν is concentrated on queues of length 0 and 1. From (<ref>) we have
μ_1^(k)ν_1^(k)=λν_0^(k)/∑_k=1^K ν_0^(k).
We do not have an explicit solution to (<ref>), but it can be solved numerically, and numerical experiments suggest a single fixed point ν. In this region, the functions f_i are continuous, so (<ref>) and (<ref>) can be used to compute the mean system time H:
H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H_1,1^(k),
and (<ref>) and (<ref>) can be used to compute the entire Laplace-transform of the system time distribution.
For subcritical JIQ, in the mean-field limit, there will be no job loss.
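A minimal numerical sketch for the fixed point of (<ref>), assuming K = 2 illustrative server types with λ = 0.9 < ∑_k γ_k μ_1^(k) = 1.25, so that the subcritical condition holds:

```python
# Solving the subcritical JIQ balance equations numerically;
# all parameter values are illustrative assumptions.
import numpy as np
from scipy.optimize import fsolve

lam = 0.9
gamma = np.array([0.5, 0.5])
mu1 = np.array([1.0, 1.5])            # mu_1^{(k)} for each type

def equations(nu0):
    nu1 = gamma - nu0                 # nu_0^{(k)} + nu_1^{(k)} = gamma_k
    y0 = nu0.sum()
    return mu1 * nu1 - lam * nu0 / y0 # balance equations

nu0 = fsolve(equations, gamma / 2)
print("nu_0:", nu0, "nu_1:", gamma - nu0)
```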
§.§.§ JIQ, critical case
For
λ=∑_k=1^K γ_k μ_1^(k),
the mean-field stationary distribution is concentrated on queues of length 1, so we simply have
ν_1^(k)= γ_k, k∈ (1,…,K).
The functions f_i^(k) are discontinuous at ν, so (<ref>) and (<ref>) does not apply. Instead, in the dynamic balance, whenever a queue of length 1 finishes service, a new job will enter immediately. With this, we can write the equivalent of (<ref>) for JIQ:
H_i,j^(k) = 1/μ_j^(k)+H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)),
H_1,j^(k) =1/μ_j^(k) (1≤ j≤ B^(k)-1),
As we can see, it is basically equivalent to (<ref>) in this case, because the discontinuity would only affect the arrival rate, which is multiplied by 0 for every relevant term.
In the mean-field limit, all jobs go to queues of length 0 (which will then stay at length 1 for a positive amount of time), and there are no queues with 2 or more jobs. Accordingly, instead of (<ref>), we have
H=∑_k=1^K μ_1^(k)ν_1^(k)/λ H_1,1^(k).
For the Laplace transforms, we have
H̃_i,j^(k)(s) = μ_j^(k)/s+μ_j^(k)H̃_i-1,j-1^(k)(s), (2≤ i≤ j≤ B^(k)),
H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (1≤ j≤ B^(k)-1),
and
H̃(s)=∑_k=1^K μ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s).
For critical JIQ, in the mean-field limit, there will be no job loss.
§.§.§ JIQ, supercritical case
In case λ>∑_k=1^K γ_k μ_1^(k), there will be no idle queues, so ν_0^(k)=0 for k∈ (1,…,K). We note that f_i^(k) are discontinuous at any point with ∑_k=1^Kν_0^(k)=0 and ∑_k=1^Kν_1^(k)>0; an intuitive explanation of this discontinuity is the following. Whenever a server with a single job finishes service, it will become idle. In the mean-field limit, a job will enter the idle queue instantly, so once again, we do not observe idle queues for any positive amount of time. However, similar to the λ=∑_k=1^K γ_k μ_1^(k) case, a positive percentage of all incoming jobs will go to an idle queue. To compute this percentage, we once again observe that in the mean-field stationary distribution, service from queues of length 1 has to be balanced out completely by arrivals to idle queues.
The total service rate in queues of type k of length 1 is μ_1^(k)ν_1^(k), which is thus completely balanced out by an equal amount of arrivals. The remaining arrival rate (λ-∑_k=1^K μ_1^(k)ν_1^(k)) is distributed randomly. For longer queues, there are no discontinuities. Accordingly, the dynamic balance equations are
(λ-∑_k=1^K μ_1^(k)ν_1^(k))ν_i^(k) = μ_i+1^(k)ν_i+1^(k), i∈(1,…,B^(k)-1).
The system (<ref>) is nonlinear, but can be solved numerically. Then we can write a modified version of (<ref>) for the calculation of H^(k)_i,j. For this, we introduce
z_0=∑_k=1^K μ_1^(k)ν_1^(k),
dubbed the upkeep, which is the rate of service in servers with queue length 1, balanced out instantly by new arrivals. Essentially, the difference between (<ref>) and the original balance equations (<ref>) is the presence of this upkeep term in the case when the dispatch functions are discontinuous at the mean-field stationary distribution ν.
According to JIQ policy, the remaining arrival rate λ-z_0 is distributed randomly for the rest of the system. Accordingly, (<ref>) becomes
H_i,j^(k) = 1/(λ-z_0)+μ_j^(k)+
(λ-z_0)/(λ-z_0)+μ_j^(k)H_i,j+1^(k)+
μ_j^(k)/(λ-z_0)+μ_j^(k)H_i-1,j-1^(k) (2≤ i≤ j≤ B^(k)-1),
H_i,B^(k)^(k) =1/μ_B^(k)^(k)+H_i-1,B^(k)-1^(k) (2≤ i≤ B^(k)),
H_1,j^(k) =1/(λ-z_0)+μ_j^(k)+(λ-z_0)/(λ-z_0)+μ_j^(k)H_1,j+1^(k) (1≤ j≤ B^(k)-1),
H_1,B^(k)^(k) =1/μ_B^(k)^(k).
To obtain the mean system time H, instead of (<ref>), we now have
H=∑_k=1^Kμ_1^(k)ν_1^(k)/λ H_1,1^(k)+ (1-∑_k=1^Kμ_1^(k)ν_1^(k)/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H_j,j^(k)
since ∑_k=1^K μ_1^(k)ν_1^(k)/λ is the portion of the arrival rate that is used to balance out the service in queues of length 1 and the remaining portion of the incoming rate is distributed randomly.
The corresponding equations for the Laplace transforms are
H̃_i,j^(k)(s) = (λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)(
(λ-z_0)/(λ-z_0)+μ_j^(k)H̃_i,j+1^(k)(s)+
μ_j^(k)/(λ-z_0)+μ_j^(k)H̃_i-1,j-1^(k)(s))
(2≤ i≤ j≤ B^(k)-1),
H̃_i,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k)H̃_i-1,B^(k)-1^(k)(s) (2≤ i≤ B^(k)),
H̃_1,j^(k)(s) =(λ-z_0)+μ_j^(k)/s+(λ-z_0)+μ_j^(k)((λ-z_0)/(λ-z_0)+μ_j^(k)H̃_1,j+1^(k)(s)+
μ_j^(k)/(λ-z_0)+μ_j^(k)) (1≤ j≤ B^(k)-1),
H̃_1,B^(k)^(k)(s) =μ_B^(k)^(k)/s+μ_B^(k)^(k),
and
H̃(s)=∑_k=1^Kμ_1^(k)ν_1^(k)/λH̃_1,1^(k)(s)+ (1-z_0/λ)∑_k=1^K∑_j=2^B^(k)ν_j-1^(k)H̃_j,j^(k)(s).
In general, for the supercritical JIQ case, job loss is possible, and is taken into account by the formula (<ref>).
§.§ Join-Shortest-Queue
For Join-Shortest-Queue (JSQ), incoming jobs are assigned to the shortest queue from among all queues; in case of multiple shortest queues of the same length, one is selected randomly.
For JSQ,
f^(k)_i(x)=
{[ 0 if ∃ i'<i ∃ k': x_i'^(k')>0,; 0 if ∑_k=1^K x_i^(k)=0,; x_i^(k)/∑_k=1^K x_i^(k) otherwise. ].
For the stationary mean-field analysis, let i_0 denote the smallest i for which
∑_k=1^Kγ_kμ^(k)_i≥λ.
Such an i exists if the stability condition (<ref>) holds. Then the mean-field stationary distribution ν will be concentrated on queues of length i_0 and i_0-1: starting from an arbitrary point, queues shorter than i_0-1 will receive the entire load of arrivals, which is larger than they can process, so these queues will “fill up” to level i_0-1, while queues longer than i_0 do not receive any load at all, so these queues will go down, until they reach level i_0.
The upkeep term is very similar to the JIQ case. The total service rate in queues of length (i_0-1) is
z_0=∑_k=1^Kμ^(k)_i_0-1ν^(k)_i_0-1,
which is completely balanced out by an equal amount of arrivals. In case i_0=1, z_0=0, so there is no upkeep, and all queues are of length 0 or 1; in this case, JSQ is equivalent to either subcritical or critical JIQ. When i_0>1, there is an actual upkeep. We assume i_0>1 for the rest of this section.
The remaining arrival rate (λ-z_0) goes to queues of length i_0-1, with the queue type k chosen at random with probabilities proportional to ν^(k)_i_0-1. For each server type k, these arrivals are balanced out by the service in queues of type k and length i_0, leading to the balance equations
μ^(k)_i_0ν^(k)_i_0 =(λ-z_0) ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1 k∈ (1,…,K),
which, along with (<ref>), give a (nonlinear) system of equations for ν, which can be solved numerically.
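In the homogeneous case (K = 1) the system collapses to a scalar equation: ν_i_0-1 + ν_i_0 = 1 together with μ_i_0ν_i_0 = λ - μ_i_0-1ν_i_0-1 gives ν_i_0-1 = (μ_i_0 - λ)/(μ_i_0 - μ_i_0-1). A quick sanity check with the parameter values used in the experiments of Section <ref>:

```python
# Homogeneous JSQ fixed point: closed form for nu_{i0-1}.
lam, mu_prev, mu_next = 1.25, 1.2, 1.3     # lam, mu_{i0-1}, mu_{i0}
nu_prev = (mu_next - lam) / (mu_next - mu_prev)
print(nu_prev, 1.0 - nu_prev)              # 0.5 0.5, matching nu_3 = nu_4 = 0.5
```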
Whenever a server with queue length i_0-1 finishes service, it will become the single shortest queue and receives a new arrival instantly. Rate (λ-z_0) remains for the rest of the system, which will be directed entirely to queues of length i_0-1. To ease notation, we also introduce
y_0=∑_k=1^Kν^(k)_i_0-1.
Then
H_i,j^(k) = H_i,j+1^(k) (1≤ i≤ j<i_0-1),
H_1,i_0-1^(k) = 1/((λ-z_0)/y_0) + μ_i_0-1^(k) +
(λ-z_0)/y_0/((λ-z_0)/y_0) + μ_i_0-1^(k) H_1,i_0,
H_i,i_0-1^(k) = 1/((λ-z_0)/y_0) + μ_i_0-1^(k) +
(λ-z_0)/y_0/((λ-z_0)/y_0) + μ_i_0-1^(k) H_i,i_0 +
μ_i_0-1^(k)/((λ-z_0)/y_0) + μ_i_0-1^(k) H_i-1,i_0-2 (2≤ i ≤ i_0-1),
H_1,j^(k) =1/μ_j^(k) (i_0-1< j≤ B^(k)),
H_i,j^(k) =1/μ_j^(k)+H_i-1,j-1 (i_0-1< j≤ B^(k), 1≤ i ≤ j).
The first equation in (<ref>) addresses the fact that if a server has fewer than i_0-1 jobs in it, it will immediately fill up to i_0-1 jobs. We also adjust the effective arrival rate to λ -z_0, similarly to JIQ. If i_0=1, the f_i^(k) are continuous at ν, so we can use (<ref>) instead of (<ref>). If i_0=2, there will of course not be any equation with the condition (2≤ i ≤ i_0-1).
If the functions f_i^(k) are continuous at ν, we can use (<ref>) to calculate the mean system time. In case i_0=1, ν is in the inside of a continuous domain of the functions f^(k)_i, so this is the case, and (<ref>) simplifies to
H=∑_k=1^K ν_0^(k)/∑_k=1^K ν_0^(k) H^(k)_1,1.
On the other hand, if i_0>1, the functions f_i are not continuous at ν, and (<ref>) is not applicable; instead, we have
H = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λ H^(k)_i_0-1,i_0-1 +
(1-z_0/λ)
∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H^(k)_i_0,i_0 .
The corresponding equations for the Laplace transforms are
H̃_i,j^(k)(s) = H̃_i,j+1^(k)(s)
(1≤ i≤ j<i_0-1),
H̃_1,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0+ μ_i_0-1^(k) *
(
μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)+
(λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_1,i_0(s))
H̃_i,i_0-1^(k)(s) = (λ-z_0)/y_0 + μ_i_0-1^(k)/s+(λ-z_0)/y_0 + μ_i_0-1^(k) *
((λ-z_0)/y_0/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i,i_0(s) +
μ_i_0-1^(k)/(λ-z_0)/y_0 + μ_i_0-1^(k)H̃_i-1,i_0-2(s)) (2≤ i ≤ i_0-1)
H̃_1,j^(k)(s) =μ_j^(k)/s+μ_j^(k) (i_0-1< j≤ B^(k)),
H̃_i,j^(k)(s) =μ_j^(k)/s+μ_j^(k)×H̃_i-1,j-1(s) (i_0-1< j≤ B^(k), 1≤ i ≤ j),
and
H̃(s) = ∑_k=1^K μ^(k)_i_0-1ν^(k)_i_0-1/λH̃^(k)_i_0-1,i_0-1(s) +
(1-z_0/λ)
∑_k=1^K ν^(k)_i_0-1/∑_k=1^K ν^(k)_i_0-1H̃^(k)_i_0,i_0(s).
Since y_0 and z_0 are straightforward to compute from ν, (<ref>) is still a linear system of equations for H̃_i,j^(k)(s), which is not any more difficult to solve than (<ref>).
For JSQ, there is no job loss in the mean-field limit. (We emphasize that this is due to the stability condition (<ref>), which we assume in all cases.)
§.§ Join-Shortest-Queue(d)
JSQ(d) is a version of JSQ where the dispatcher first selects d servers randomly, and dispatches the incoming job to the shortest from among the d queues.
If we set d=1, we get Random assignment, and if we set d=N, we get JSQ. The f_i^(k) functions are continuous for any finite d. Appendix <ref> addresses the case d→∞.
For JSQ(d), we introduce the auxiliary variables
y_i^(k),N=∑_j=i^B^(k)x_j^(k),N, z_i^N=∑_k=1^K y_i^(k),N,
and then inclusion-exclusion shows
f^(k),N_i(x^N)=
x_i^(k),N/∑_k=1^K x_i^(k),N×
[z_i^N(z_i^N-1/N)…(z_i^N-(d-1)/N)
-z_i+1^N(z_i+1^N-1/N)…(z_i+1^N-(d-1)/N)].
The above version of f^N_i(.) is N-dependent, but converges to
f_i^(k)(x)=x_i^(k)/∑_k=1^K x_i^(k)((z_i)^d-(z_i+1)^d).
Due to the dependency on N, we refer to <cit.>, where this type of dependence on N is allowed. Also, both f_i^(k),N and f_i^(k) are continuous. Overall, the conclusions of Theorems <ref> and <ref> apply.
The mean-field balance equations are
λν_i-1^(k)/∑_k=1^K ν_i-1^(k)((∑_k=1^K∑_j=i-1^B^(k)ν_j^(k))^d-(∑_k=1^K∑_j=i^B^(k)ν_j^(k))^d)
=μ_i^(k)ν_i^(k), i∈{1,…,B^(k)},
in accordance with (<ref>), since μ_i^(k)ν_i^(k) must balance λ f_i-1^(k)(ν).
Since the rates f_i^(k) are continuous, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is possible for JSQ(d), but will be typically small enough to be negligible in practice.
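For a homogeneous cluster with constant unit service rate, summing the balance equations over queue lengths ≥ i reduces them to the classic tail recursion y_i = λ y_i-1^d for the tail probabilities y_i = ∑_j≥ iν_j (an infinite-buffer fixed point, truncated at B here for display), exhibiting the well-known doubly exponential decay:

```python
# Classic power-of-d tail recursion; homogeneous, unit service rate.
lam, d, B = 0.95, 2, 10
y = [1.0]                         # y_0 = 1: every queue has length >= 0
for i in range(1, B + 1):
    y.append(lam * y[-1] ** d)    # tail recursion y_i = lam * y_{i-1}^d
nu = [y[i] - y[i + 1] for i in range(B)] + [y[B]]
print(nu)
```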
§.§ Join-Below-Threshold
Join-Below-Threshold (JBT) sets a threshold M_k which may depend on the server type k; servers of type k with queue length <M_k are considered available and servers of type k with queue length ≥ M_k are full. Tasks will be dispatched to a random available server. If there are no available servers, jobs will be dispatched at random among all servers.
JBT is commonly used in accordance with limited processor sharing (LPS) for servers which can serve multiple jobs simultaneously in an efficient manner. This is reflected in an increasing service rate curve μ_i^(k). Should μ^(k)_i start to decrease for large i, this is countered by setting the threshold M_k at the maximum point. M_k is referred to as the multi-programming level (MPL), and is the number of jobs served simultaneously in a single server, while further jobs wait in queue. Overall, this setup ensures the service rate curve μ^(k)_i is increasing up to M_k and constant for M_k≤ i≤ B^(k).
If we set the threshold to 1, we get the JIQ principle, and if we set it to B^(k), we get Random assignment.
We introduce the auxiliary variable
y= ∑_k=1^K∑_j=0^M_k-1x^(k)_j,
which is the ratio of available servers.
For JBT,
f_i^(k)(x)=
{[ 0 if y>0, i≥ M_k,; x^(k)_i/y if y>0, i<M_k,; x^(k)_i if y=0. ].
The mean-field balance equations are
μ^(k)_i ν^(k)_i =λν_i-1^(k)/y, i∈{1,…,M_k-1} , k∈{ 1,…,K },
with ν_i^(k)=0 for i>M_k.
For a full, detailed mean-field analysis of JBT, we refer to <cit.>. Apart from the stability condition (<ref>) and monotonicity condition (<ref>), it is usually also assumed that
λ<∑_k=1^K γ_kμ_M_k,
which is a stability condition stronger than (<ref>), ensuring that the evolution of the transient mean-field limit eventually enters and then never leaves the region where no queues are longer than the threshold. On this domain, the functions f_i^(k) are continuous, and the mean-field stationary solution ν is unique and also inside this domain. An efficient numerical method to compute ν is provided in <cit.>.
As a side note, <cit.> also shows examples where (<ref>) does not hold, and there are multiple attractors in the mean-field system corresponding to quasi-stationary states of a system with a finite N, and mean-field convergence fails completely.
If (<ref>) and (<ref>) hold, (<ref>) and (<ref>) can be used to compute the mean system time H, and (<ref>) and (<ref>) can be used to compute the Laplace-transform of the pdf of the system time distribution.
Job loss is not possible for JBT.
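A hedged sketch of one way to compute the JBT fixed point numerically for K = 1 (illustrative parameters; we assume the balance μ_i ν_i = λν_i-1/y extends through i = M, with ν_i = 0 above the threshold): for a trial ratio y of available servers, (<ref>) determines ν up to normalization, and y is then fixed by consistency.

```python
# JBT fixed point, K = 1: scalar root-find for the available ratio y.
import numpy as np
from scipy.optimize import brentq

lam, M = 1.8, 4
mu = np.array([0.0, 1.0, 1.7, 2.1, 2.2])       # increasing up to the MPL M

def nu_given_y(y):
    nu = np.ones(M + 1)
    for i in range(1, M + 1):
        nu[i] = nu[i - 1] * lam / (y * mu[i])  # mu_i nu_i = lam nu_{i-1} / y
    return nu / nu.sum()

consistency = lambda y: nu_given_y(y)[:M].sum() - y   # y = sum_{i<M} nu_i
y = brentq(consistency, 1e-6, 1.0)
print(y, nu_given_y(y))
```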
§ NUMERICAL EXPERIMENTS
We conducted several numerical experiments. These are by no means exhaustive, but should nevertheless display some interesting properties and allow for some numerical comparison of the various load balancing methods.
For several parameter setups, we examined simulations for various choices of N, and also computed the mean-field limit (N=∞). Simulations were done in Python and symbolic computations were done in Wolfram Mathematica. The codes for both are available at <cit.>. For the symbolic calculations, numerical inverse Laplace transform was used, for which packages are available at <cit.>.
Section <ref> displays transient mean-field convergence as N is increased. Also, as t is increased, each system will converge to its stationary state.
Section <ref> compares the mean service times for both simulations and the mean-field settings.
Section <ref> addresses service time distributions.
§.§ Homogeneous transient mean-field diagrams
In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=1000 and N=10000 servers, resulting from simulations.
We will focus on homogeneous clusters with K=1 (also dropping (k) from the notation). The maximal queue length B will be set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>) (in fact, the system load can be computed as λ/μ_B in a homogeneous cluster).
Figures <ref>–<ref> display simulation results for the transient evolution of the homogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the number of servers is N=1000 for the plot on the left and N=10000 for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves.
§.§.§ Random
Figure <ref> displays the transient evolution with Random load balancing policy. A significant share of the queues remains long throughout; overall, the Random load balancing principle is rather inefficient and serves mostly as a baseline. Later we will see the effect of more efficient load balancing principles on the same systems.
The fluctuations of the simulations decrease as N is increased. Actually, as mentioned after Theorem <ref>, the fluctuations are guaranteed to be of order 1/√(N) for x^N (or, equivalently, order √(N) for X^N). However, the constant factor can be different for the various load balancing principles. For Random assignment, the fluctuations are relatively mild.
Convergence to stationarity can also be observed: as time increases, the smooth graphs converge to the mean-field stationary distribution. That said, for any fixed finite N, the order of the fluctuations will not go to 0 as time is increased.
§.§.§ JIQ
Figure <ref> displays the transient evolution with JIQ load balancing policy for λ=0.95 and λ=1.25.
Figures <ref> and <ref> have λ=0.95 (with other parameters according to Table <ref>), which is subcritical due to λ=0.95<μ_1=1 (see Section <ref>), so the system stabilizes on queues of length 0 and 1.
Figures <ref> and <ref> have λ=1.25>μ_1=1, which is supercritical, so the system starts out by filling up all empty queues in a sharp manner. After this initial period, no empty queues are present anymore, and the dynamic dispatch is distributed among queues of length 1 through 10 randomly. Similar to Random policy, once again longer queues are present in the system.
§.§.§ JSQ(2) and JSQ(5)
Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Already for d=2, the result is markedly different from Random assignment. This is a known phenomenon, referred to as power-of-2 <cit.>. The ratio of longer queues diminishes more rapidly with the queue length than for either Random or JIQ policy.
Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. Here, most of the queues will be of length 3 and 4, with the ratio of either shorter or longer queues much smaller. We also note that the dispatch function is continuous, so the transient mean-field limit functions are smooth, although they change rather sharply.
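For JSQ(d), the dispatch function has a well-known closed form: with tail fractions s_i = ∑_{j≥i} x_j, the shortest of d uniformly sampled queues has length i with probability s_i^d − s_{i+1}^d. A sketch for the homogeneous case (helper name ours; f_B is zeroed since arrivals that sample only full queues are lost, matching f_B ≡ 0):

import numpy as np

def jsq_d_dispatch(x, d):
    # s[i] = fraction of queues with length >= i; trailing 0 stands for s[B+1]
    s = np.append(np.cumsum(x[::-1])[::-1], 0.0)
    f = s[:-1] ** d - s[1:] ** d     # min of d samples has length i
    f[-1] = 0.0                      # jobs sampling only full queues are lost
    return f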
§.§.§ JSQ
Figure <ref> displays the transient evolution with JSQ load balancing policy. Here, all of the queues will be of length 3 and 4 after the system fills up. At any point in time, there are only 2 different queue lengths present, starting from lengths 0 and 1, switching to 1 and 2, then 2 and 3, then 3 and 4 as the system fills up. We also note that the dispatch function is discontinuous, so the transient mean-field limit functions have breaking points at the switches to new queue length pairs.
The stationary mean-field limit is ν_3=ν_4=0.5 due to
λ = 1.25 = (μ_3+μ_4)/2 = (1.2+1.3)/2.
For any finite N, when a job in a queue of minimal length finishes service, a shorter queue will appear for a brief but positive time. In the mean-field limit, such queues are filled back instantly.
We also note that the fluctuations are considerably larger than for either Random or JIQ. An intuitive explanation is that the higher level of control provided by JSQ will generally focus any fluctuations in either the arrival or service on a single queue length: if the arrivals outweigh the service for a short period of time, the surplus arrivals will all go to servers of minimal queue length. Overall, the strict control introduces a positive correlation between the length of the queues, resulting in larger fluctuations (which are, once again, of order 1/√(N), but with a higher constant factor). Principles with less strict control generally distribute this fluctuation among several different queue lengths, resulting in smaller fluctuations.
§.§.§ JBT
Figure <ref> displays the transient evolution with JBT load balancing policy. The MPL parameter is set to 5. In this setup, the system reaches stability before hitting the MPL threshold (and accordingly, the mean-field system reaches its attractor before the discontinuity point, so the functions remain continuous). This is the intended usage of JBT.
§.§ Heterogeneous transient mean-field diagrams
In this section, we plot the solutions of the mean-field equations as well as the corresponding x_i^(k),N curves for systems with N=10000 servers, resulting from simulations.
We will focus on heterogeneous clusters with K=2. The maximal queue length B=B^(k) is set to 10. The rest of the parameter setup is shown in Table <ref>. The parameter setup adheres to the monotonicity assumption (<ref>) and also the stability condition (<ref>).
The parameter choices in Table <ref> are motivated by an actual real-life scenario: in many shopping centers, there are two types of checkouts: checkouts served by an employee (service rate 1 in Table <ref>), with a separate queue for each such checkout, and self-service checkouts. A single self-service checkout is typically slightly slower (service rate 0.8 in Table <ref>) than a checkout served by an employee, but this is countered by the fact that there is a batch of self-service checkouts for each queue (the batch size is 5 for Table <ref>).
Of course, in actual shopping centers, the number of queues may or may not be high enough to warrant a mean-field approach; that said, as we will see later, some derived performance measures are well-approximated by the mean-field limit already for smaller system sizes.
Figures <ref>–<ref> display simulation results for the transient evolution of the heterogeneous system using various load balancing policies. For each load balancing policy, two plots are included: the ratio of type 1 servers with various queue lengths for the plot on the left and the ratio of type 2 servers with various queue lengths for the plot on the right. Other system parameters are according to Table <ref>. All systems are initially empty. The x axis is time, and the jagged line graphs show the ratio of servers with queue length 0 to 10 respectively. These have some natural fluctuations. Also included are the transient mean-field limits, which are smooth curves.
§.§.§ Random
Figure <ref> displays the transient evolution with Random load balancing policy. A significant share of the queues remains long throughout; in fact, servers of type 1 are overloaded, as can be seen from the fact that the majority of type-1 queues has length 10 (equal to the buffer size) or close to it. In a heterogeneous system with poor load balancing, it is possible that some server types are overloaded even though the system as a whole is subcritical.
§.§.§ JIQ
Figure <ref> displays the transient evolution with JIQ load balancing policy.
JIQ does not offer a considerable improvement over Random, as once again longer queues are present in the system. This also means that servers of type 1 are overloaded, which results in significant job loss. Servers of type 2, on the other hand, are subcritical.
§.§.§ JSQ(2) and JSQ(5)
Figure <ref> displays the transient evolution with JSQ(2) load balancing policy. Servers of type 1 are still overloaded, in which case JSQ(2) does not offer a considerable improvement over either Random or JIQ. The system (particularly servers of type 1) goes through an initial build-up period, starting from empty and converging to stationarity with the majority of queues full (length equal to buffer size 10) or close.
Figure <ref> displays the transient evolution with JSQ(5) load balancing policy. In this case, the better load balancing results in both server types being subcritical; for server type 1, the typical queue lengths are 5 and 6, while for server type 2, the typical queue lengths are 4 and 5. Job loss is practically negligible in this case.
§.§.§ JSQ
Figure <ref> displays the transient evolution with JSQ load balancing policy. The build-up period is much sharper (in fact, the mean-field limit curves are nondifferentiable at the changes in minimal queue length), with both server types eventually reaching a state where all queue lengths are either 4 or 5. Fluctuations around the mean-field limit are relatively mild for N=10000 servers.
§.§.§ JBT
Figure <ref> displays the transient evolution with JBT load balancing policy. MPL parameters are 1 for server type 1 and 5 for server type 2. JBT suits the type of heterogeneous system described by Table <ref> particularly well: the MPL settings allow the service capacity of each server type to be utilized fully without allowing queues longer than necessary. In fact, JBT can outperform JSQ for heterogeneous systems, as we will see in the next section.
§.§ Mean system times
The main performance measure we are going to examine is the mean system time, that is, the average time a job spends between arrival and finishing service.
First, we examine the homogeneous system described by the parameter settings in Table <ref>, simulated for various system sizes ranging from N=10 to N=10000 as well as the mean-field limit, with the various load balancing principles from Section <ref>. Table <ref> lists the mean system times, both from simulations and calculated from the mean-field limit using equations (<ref>) and (<ref>) (or, in the discontinuous cases, their corresponding versions listed in Section <ref>). We note that despite long running times, the simulation results may still exhibit a small inherent random variation.
JSQ is the most effective principle, which is unsurprising (although we do emphasize that in practice, JSQ comes with a heavy overhead communication burden which was not modelled here).
JSQ(d) is more effective with a higher d, but already for d=2, it is significantly better than Random, which is once again known as the power-of-2 (or power-of-d) <cit.>.
We note that jobs lost are not included in the averages in Table <ref>; in order to give a more complete picture, we mention that the theoretical job loss probability for Random policy (with the same parameters as per Table <ref>) is 0.0438, and for JIQ it is 0.0136 (for JSQ(2), JSQ(5), JSQ and JBT, job loss is negligible). Job loss probabilities for the simulations are not included in the paper, we just mention that they closely match the theoretical values.
Overall, based on Table <ref>, the mean-field approximation for the mean system times is exceedingly accurate already for small values of N.
Next we address the heterogeneous system described by the parameter settings in Table <ref>.
As long as N is finite, there are fluctuations which do not vanish even as time increases and the systems converge to their stationary limit. As expected, fluctuations are bigger for smaller values of N. For smaller values of N, the mean system time is generally above the mean-field mean system time; an intuitive explanation for this is that the limited number of servers offers less `room' to balance out short periods of overflow (coming from the natural fluctuations of arrivals and service), causing the system to operate with longer queues for said short periods.
Once again, in order to compare the mean system time for the various load balancing principles, it is important to take into account that some of these principles operate with significant job loss: for Random, the theoretical job loss probability is 0.285, for JIQ, it is 0.251, and for JSQ(2), it is 0.104; for JSQ(5), JSQ and JBT, job loss is negligible.
Table <ref> shows that, similar to the homogeneous case (Table <ref>), the mean-field approximation for the mean system times is very accurate for both smaller and larger choices of N. The only exception is JBT for N=12; for very small system sizes and system load close to critical (1.6/1.75 according to the parameters in Table <ref>), even a small burst in the arrivals can push the entire system over the threshold, at which point it switches to Random, and stays there for significant periods of time.
§.§ System time distributions
In this section we examine the theoretical probability density function of the system time in the mean-field limit for some setups and compare it with empirical distributions (histograms) from simulations for finite N.
The theoretical distributions are calculated using equations (<ref>) and (<ref>) (or in discontinuous cases their counterparts described in Section <ref>), and inverse Laplace transformation (ILT). The system (<ref>) can be solved explicitly, and the solution is a rational function (in the Laplace transform domain).
However, depending on the value of K and B^(1),…,B^(K), the solution for H̃(s) from (<ref>) can be infeasible already for moderately large values of K and B. In general, the formula for H̃(s) is relatively simple if only few of the H̃_i,j^(k)'s are nonzero, which is typically the case for JSQ. For other load balancing principles, where all H̃_i,j^(k)'s are nonzero, the explicit formula for H̃(s) from (<ref>) is infeasible already for K=2 and B^(1)=B^(2)=10.
Due to this, the parameters for this setup were those of the homogeneous system from Table <ref> with λ=1.25. We also set B=5 to make the ILT less complicated. Just as an example, for JSQ, with the above parameters, we have
H̃(s) = (24 s+65)^4 / ( 5 (2 s+5)^3 (10 s+13)^4 ).
H̃(s) can be computed for the other load balancing principles as well, but the explicit formulas are far more complicated, and are omitted from the paper.
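For reference, the numerical ILT step itself is short. The sketch below evaluates the pdf corresponding to the H̃(s) above with mpmath's inverse Laplace transform (one of several available ILT packages; the paper's own package choice is in the cited repositories):

import mpmath as mp

H = lambda s: (24*s + 65)**4 / (5 * (2*s + 5)**3 * (10*s + 13)**4)

ts = [0.5 * n for n in range(1, 41)]                         # t must be positive
pdf = [mp.invertlaplace(H, t, method='talbot') for t in ts]  # h(t) on a grid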
Figure <ref> displays the theoretical pdf of the system time in the mean-field limit with a red curve, while the blue histograms are from simulations with N=1000 servers. Each system was run long enough to reach the stationary regime, and only jobs arriving during this period were considered. The theoretical pdf's are normalized as per (<ref>).
In general, all histograms match the theoretical pdf's well. For Random assignment and JIQ (which is supercritical with the given parameters), the system time is less concentrated (e.g. it has a higher variance). JSQ is the only one where the system time density is 0 at time t=0; for all other load balancing principles, it is possible that a job starts service immediately, which corresponds to a positive density at t=0. For JSQ(2) and JSQ(5), the match between the theoretical and numerical distributions is slightly less close than for the others (although still very good); the exact reason for this is subject to further research.
§ CONCLUSION AND OUTLOOK
In this paper we examined the mean-field transient and stationary convergence of systems with several different load-balancing principles based on queue length.
While no rigorous proof was presented, the simulations suggest that mean-field convergence holds even for discontinuous f_i^(k) dispatch functions. We have provided formulas to compute the stationary mean-field limit, and also the mean system time in the mean-field stationary regime. In addition, the entire system time distribution can be calculated with the help of the Laplace transform, adapting (<ref>) and (<ref>) for the Laplace transforms of the system times. We have also examined the mean system time numerically for several parameter setups.
There is a lot of possibility for further work in this topic. One direction would be to provide mathematically rigorous proofs for versions of Theorems <ref> and <ref> for some of the discussed systems with discontinuous dispatch functions.
Another direction is scenarios where further information is available (e.g. job size); in such cases, that information can be used to estimate the load of each queue more precisely and design other load balancing principles.
Yet another direction is to add a geometrical dimension to the server cluster, with the load balancing principle taking into account the distance of the arriving job to the queues (e.g. as in a shopping center, where customers are more likely to choose a queue physically closer to their arrival point).
We could also make the model more realistic, even if more complicated, by considering the dispatcher's communication overhead cost. However, we expect the communication overhead cost to be highly dependent on actual system settings, and as such, it seems difficult to incorporate it in a high level model in a general manner.
Another direction is to allow different job types, where certain job types can be served more efficiently by certain server types.
All in all, this is a vast topic that has a lot of potential for further development.
§ LITTLE'S LAW
In a heterogeneous system, Little's law applies to the entire system in the mean-field stationary regime, and also applies to each server type separately. It is valid regardless if the dispatch functions are continuous or not, but requires some consideration for discontinuous dispatch functions. In this section, we provide the proper formulas for each load balancing principle.
Let λ^(k) denote the effective arrival rate to servers of type k, and L^(k) denote the average queue length in servers of type k (k=1,…, K). Using these, we can compute the mean system time for a job in a server of type k via Little's law as
H^(k)=L^(k)/λ^(k).
For any load balancing principle,
L^(k)=∑_i=0^B^(k) iν_i^(k)/∑_i=0^B^(k)ν_i^(k).
The formula for λ^(k) is different for continuous and discontinuous dispatch functions. For dispatch functions continuous at ν (this case includes Random, JSQ(d), JBT and also subcritical JIQ and JSQ with i_0=1), the formula for λ^(k) is
λ^(k)=λ∑_i=0^B^(k)-1f_i^(k)(ν)/∑_i=0^B^(k)ν_i^(k).
For supercritical JIQ, we have
λ^(k) = ( μ_1^(k) ν_1^(k) + (λ-z_0) ∑_{i=1}^{B^(k)-1} ν_i^(k) ) / ∑_{i=0}^{B^(k)} ν_i^(k),
and for JSQ with i_0>1, we have
λ^(k) = ( μ_{i_0-1}^(k) ν_{i_0-1}^(k) + (λ-z_0) ν_{i_0-1}^(k) / ∑_{j=1}^{K} ν_{i_0-1}^(j) ) / ( ν_{i_0-1}^(k) + ν_{i_0}^(k) ).
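Evaluating these formulas is straightforward once ν is known. The following sketch covers the continuous case (<ref>)–(<ref>), with our own naming and with ν stored as one array per server type (normalized so that the fractions over all types and queue lengths sum to 1):

import numpy as np

def little_law(nu, f, lam):
    """Per-type mean system time via Little's law.
    nu : list of K arrays, nu[k][i] = stationary fraction of type-k servers
         with queue length i;  f : dispatch values f_i^(k)(nu)."""
    H = []
    for k in range(len(nu)):
        gamma_k = nu[k].sum()                    # fraction of type-k servers
        L_k = (np.arange(len(nu[k])) * nu[k]).sum() / gamma_k
        lam_k = lam * f[k][:-1].sum() / gamma_k  # effective arrival rate, i = 0..B-1
        H.append(L_k / lam_k)
    return H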
§ SYSTEM TIME DISTRIBUTION FOR LPS SERVICE PRINCIPLE
This section is a counterpart of Section <ref>; we provide formulas to compute the system time distribution for limited processor sharing (LPS) service principle.
For LPS, each server type has a parameter called the multi-programming level (MPL); the server can serve a number of jobs up to the MPL simultaneously, dividing its service capacity evenly, while further jobs wait in a FIFO queue.
Once again, let h_i,j^(k)(t) denote the probability density function of the remaining system time of a job at position i in a queue of length j and queue type k. M^(k) denotes the multi-programming level of queues of type k. The order of jobs is irrelevant among jobs already in service; that is, for fixed k and j, h_i,j^(k)(t) is constant for i≤min(j,M^(k)). Accordingly, in the formulas we will write h_1,j^(k)(t) instead of h_i,j^(k)(t) for i≤min(j,M^(k)). For jobs that are not yet in service (i> M^(k)), their position within the queue is still relevant.
For LPS, when the tagged job is in service, three types of changes can occur to its queue: an arrival, the tagged job finishing service, or another job finishing service. In the last case, it does not matter whether the finished job is ahead of or behind the tagged job. When the tagged job is not yet in service, only two types of changes can occur: an arrival, or another job finishing service. We also use once again that arrival is not possible when the queue is full (j=B^(k)), that is, f_B^(k)^(k)(.)≡ 0 for k=1,…,K.
The corresponding version of (<ref>) is as follows:
For readability, abbreviate the arrival rate seen by the tagged queue as α_j^(k) = λ f_j^(k)(ν)/ν_j^(k). Then
H̃_1,j^(k)(s) = (α_j^(k)+μ_j^(k))/(s+α_j^(k)+μ_j^(k)) · [ α_j^(k)/(α_j^(k)+μ_j^(k)) H̃_1,j+1^(k)(s) + (μ_j^(k)(M^(k)-1)/M^(k))/(α_j^(k)+μ_j^(k)) H̃_1,j-1^(k)(s) + (μ_j^(k)/M^(k))/(α_j^(k)+μ_j^(k)) ]    (1≤ i≤ M^(k)≤ j≤ B^(k)),
H̃_1,j^(k)(s) = (α_j^(k)+μ_j^(k))/(s+α_j^(k)+μ_j^(k)) · [ α_j^(k)/(α_j^(k)+μ_j^(k)) H̃_1,j+1^(k)(s) + (μ_j^(k)(j-1)/j)/(α_j^(k)+μ_j^(k)) H̃_1,j-1^(k)(s) + (μ_j^(k)/j)/(α_j^(k)+μ_j^(k)) ]    (1≤ i≤ j< M^(k)),
H̃_M^(k)+1,j^(k)(s) = (α_j^(k)+μ_j^(k))/(s+α_j^(k)+μ_j^(k)) · [ α_j^(k)/(α_j^(k)+μ_j^(k)) H̃_M^(k)+1,j+1^(k)(s) + μ_j^(k)/(α_j^(k)+μ_j^(k)) H̃_1,j-1^(k)(s) ]    (j≤ B^(k)),
H̃_i,j^(k)(s) = (α_j^(k)+μ_j^(k))/(s+α_j^(k)+μ_j^(k)) · [ α_j^(k)/(α_j^(k)+μ_j^(k)) H̃_i,j+1^(k)(s) + μ_j^(k)/(α_j^(k)+μ_j^(k)) H̃_i-1,j-1^(k)(s) ]    (M^(k)+1< i≤ j≤ B^(k)).
Once again, (<ref>) and (<ref>) are applicable to compute H̃(s) when the dispatch functions f_i^(k) are continuous at ν. In other cases, the formulas may need to be modified.
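For any fixed s, the equations above form a finite linear system in the unknowns H̃_{i,j}^(k)(s), so they can be assembled and solved directly. A sketch for a single server type (K=1), with the in-service states collapsed to i=1 and our own state indexing; f must satisfy f[B]=0:

import numpy as np

def lps_H_tilde(s, nu, f, mu, lam, M):
    # Assemble and solve the LPS recursion for K = 1
    B = len(nu) - 1
    a = np.where(nu > 0, lam * f / np.maximum(nu, 1e-300), 0.0)  # a_j
    states = [(1, j) for j in range(1, B + 1)]                   # tagged in service
    states += [(i, j) for j in range(M + 1, B + 1)
               for i in range(M + 1, j + 1)]                     # tagged waiting
    idx = {st: n for n, st in enumerate(states)}
    A, b = np.zeros((len(states),) * 2), np.zeros(len(states))
    for (i, j), n in idx.items():
        A[n, n] = s + a[j] + mu[j]
        if j < B:
            A[n, idx[(i, j + 1)]] -= a[j]                        # arrival
        m = min(j, M)
        if i == 1:
            b[n] = mu[j] / m                                     # tagged job finishes
            if j > 1:
                A[n, idx[(1, j - 1)]] -= mu[j] * (m - 1) / m     # other job finishes
        else:
            tgt = (1, j - 1) if i == M + 1 else (i - 1, j - 1)   # tagged moves forward
            A[n, idx[tgt]] -= mu[j]
    return idx, np.linalg.solve(A, b)

For s > 0 the matrix is strictly diagonally dominant, so the solve is well-posed.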
§ PARTIAL CONTROL
We highlight a situation dubbed partial control. In such a system, some of the jobs are not subject to the load balancing policy, and will simply be dispatched randomly. A real life example for partial control would be directing traffic via cooperating navigation apps in cars: each car with a cooperating navigation app is subject to load balancing, but drivers without the app select routes not subject to the same load balancing.
Assume we have a system with a load balancing policy corresponding to some dispatch functions f_i^(k)(x). Load balancing only has partial control: for each job, with some fixed probability 0<p≤ 1, the job will be dispatched according to the load balancing policy, but with probability (1-p), it will be dispatched randomly. In this case, the corresponding dispatch functions are simply
f̂_i^(k)(x) = p f_i^(k)(x) + (1-p)x_i^(k).
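In code, partial control is a one-line wrapper around any existing dispatch function (names ours; x is the vector of queue-length fractions):

def partial_control(f, p):
    """Mix a dispatch function f with Random: p * f(x) + (1 - p) * x."""
    return lambda x: p * f(x) + (1 - p) * x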
Figure <ref> shows transient plots with JSQ load balancing principle with low (p=0.3) and high (p=0.8) levels of control. System parameters are according to Table <ref> with λ = 1.25 and N=10000. With a low level of control, the transient behaviour is closer to the case of random assignment, with longer queues also present. For low control, the minimal stationary queue length is 2, lower than the minimal stationary queue length 3 in case of full control JSQ, as the system needs to balance fewer controlled jobs (e.g. the upkeep is lower). For high control (p=0.8), the minimal stationary queue length remains 3, but once again, longer queues are also present.
§ CONVERGENCE OF JSQ(D) TO JSQ AS D→∞
This section shows an interesting visualisation of JSQ(d)'s “convergence” to JSQ as d→∞. Figure <ref> displays the solutions of the transient mean-field equations for various choices of d. In practice, JSQ(d) is quite close to JSQ already for moderately large values of d.
We note that the mean-field transient solutions are smooth for JSQ(d) for any choice of d, but not for JSQ.
|
http://arxiv.org/abs/2307.06106v1 | 20230711151158 | The Impact of Process Complexity on Process Performance: A Study using Event Log Data | ["Maxim Vidgof", "Bastian Wurm", "Jan Mendling"] | cs.OH | ["cs.OH"] |
Wirtschaftsuniversität Wien, Welthandelsplatz 1, 1020 Vienna, Austria
[email protected]
LMU Munich School of Management, Ludwigstrasse 28, 80539 Munich, Germany
[email protected]
Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
[email protected]
Weizenbaum Institute, Hardenbergstraße 32, 10623 Berlin, Germany
The Impact of Process Complexity on Process Performance: A Study using Event Log Data
Maxim Vidgof^1 [0000-0003-2394-2247], Bastian Wurm^2 [0000-0002-1002-5397], Jan Mendling^{1,3,4} [0000-0002-7260-524X] (The research by Jan Mendling was supported by the Einstein Foundation Berlin under grant EPP-2019-524 and by the German Federal Ministry of Education and Research under grant 16DII133.)
===================================================================================================================================================================================================================================================================================================
Complexity is an important characteristic of any business process. The key assumption of much research in Business Process Management is that process complexity has a negative impact on process performance. So far, behavioral studies have measured complexity based on the perception of process stakeholders. The aim of this study is to investigate if such a connection can be supported based on the analysis of event log data. To do so, we employ a set of 38 metrics that capture different dimensions of process complexity. We use these metrics to build various regression models that explain process performance in terms of throughput time. We find that process complexity as captured in event logs explains the throughput time of process executions to a considerable extent, with the respective R-squared reaching up to 0.96. Our study offers implications for empirical research on process performance and can serve as a toolbox for practitioners.
§ INTRODUCTION
Business process management (BPM) provides various analysis techniques for improving the performance of business processes (see, for example, <cit.>). Several of these techniques support the identification of root causes behind performance issues of a process. Some studies have pointed to process complexity as a root cause of poor process performance. More specifically, it has been established that standardized business processes are connected with better process performance <cit.> and outsourcing success <cit.>. For this reason, high business process complexity is often a motivation for business process redesign initiatives <cit.>, but also a challenge for standardization efforts <cit.>.
However, these studies largely build on perceptual measures, which entails at least three key issues. First, such measures require specific attention in order to meet potential validity concerns <cit.>. Second, perceptual differences exist along the organizational hierarchy. The so-called hierarchical erosion effect states that perceptions become less favourable towards the lower levels of the hierarchy <cit.>. Third, a study based on perceptual measures is often restricted to an observation at only a single point in time. All of this raises the question of the extent to which a more precise investigation of the connection between process complexity and performance is possible.
In this paper, we address this research problem. To this end, we utilize available event log datasets to a) calculate complexity measures and b) compute throughput time as a performance measure over different time windows. The connection between complexity and performance is then investigated by means of statistical regression.
Our results suggest that process complexity is closely connected to throughput time, but also dependent on idiosyncratic factors. We discuss implications of this finding for research and practice.
The remainder of the paper is structured as follows. Section <ref> introduces complexity metrics and the related concepts. Section <ref> presents our approach for the calculation of process complexity, throughput time, and the creation of our statistical models. Section <ref> presents our results and showcases the best statistical models. Section <ref> provides the discussion of the results and points to avenues for future research. Finally, Section <ref> concludes with a summary.
§ BACKGROUND
In this section, we discuss the background against which we position our work. First, we summarize related work on process complexity. Second, we outline research on process performance and the role that complexity plays for it.
§.§ Process Complexity
In BPM research, process complexity has often been approached from a process model perspective. Most notable is the work by Mendling on the relationship between process model complexity and error probability <cit.>. Recently, various metrics for complexity based on event logs have been defined, partially inspired by work in neighboring disciplines, such as organization science. These measures can be used to quantify different aspects of business process complexity that are visible from event log data. The various measures can be organized in five categories as presented in Table <ref>.
The first category encompasses measures pertaining to the size of a given event log. These measures count properties of an event log, such as the number of events, sequences, and minimum, average, and maximum sequence length <cit.>.
The second category contains measures capturing the variation of process behavior as documented in the event log. Many of the measures in this category build on a transition matrix that is derived based on the directly-followed relations as captured in the event log <cit.>. Pentland et al. <cit.> operationalize process complexity as the number of acyclic paths provided by the transition matrix. Closely related to this is the measure by Hærem et al. <cit.>, who measure complexity as the number of ties, i.e. directly-follows relations, over all distinct sequences. Further measures that depict variation are Pentland's <cit.> approach to compress an event log based on the Lempel-Ziv algorithm as well as the absolute and relative number of unique sequences <cit.> contained in an event log.
The third category includes measures that are based on different notions of distance <cit.>. Günther <cit.> suggests a measure of affinity of two event sequences, capturing the extent to which directly-follow relations of the sequences overlap. His average affinity measure calculates the mean of the pair-wise affinity over all sequences in the event log <cit.>. This measure is similar to Pentland's <cit.> deviation from random of the transition matrix. Pentland <cit.> further proposes average edit distance between event sequences based on optimal matching <cit.>.
The fourth category of measures builds on graph entropy and has been recently proposed by Augusto et al. <cit.>. They distinguish between measures for sequence and variant entropy of an event log. Additionally, they suggest that each of the measures can be normalized to take a value between 0 and 1.
We refer to these measures as simple entropy measures.
Fifth, the measures by <cit.> have been extended beyond the control flow to incorporate data variety <cit.>. In contrast to the simple entropy measures, we refer to this class of measures as enriched entropy measures.
Several of these measures have been applied to study an increasing breadth of research problems. The above named study by Augusto et al. <cit.>, for example, investigated the influence of process complexity on the quality of process models derived from event log data. They find that process complexity is negatively correlated with the quality of discovered process models. Thus, the more complex the event log, the poorer will be the model discovered by process mining algorithms. Importantly, different discovery algorithms are more sensitive to certain complexity measures than others <cit.>.
There are also some behavioral studies that examine how process complexity changes over time. Pentland et al. <cit.> simulate how process complexity changes over time. They find that organizational processes undergo different phases of process complexity. At the initiation of their simulation, processes exhibit low levels of complexity. After several iterations of the simulation, process complexity suddenly sharply increases, leading to bursts of complexity. Afterwards, complexity again decreases resulting in limited but ongoing variation in the process. Further, Wurm et al. <cit.> investigate process complexity in the Purchase-to-Pay and Order-to-Cash processes of a multinational enterprise. While they find that process complexity changes continuously, they do not find any indication for sudden bursts of complexity in the examined processes.
Importantly, both studies <cit.> rest on measures that are not precise <cit.>. As shown in <cit.>, corner cases can be identified that illustrate that the measures used tend to overestimate the actual complexity of a process.
§.§ Process Performance
The literature suggests a clear link between process complexity and process performance. Empirical studies indicate that the standardization of business processes ultimately leads to better process performance <cit.> and outsourcing success <cit.>. In particular, Münstermann et al. <cit.> have found that process standardization is positively associated with different process performance dimensions, such as process time, cost, and quality. By means of standardization, organizations aim to reduce process complexity, i.e. the number of ways that a process can be performed <cit.>.
At a second look, however, the relationship between process complexity and process performance is not that clear-cut. Detailed findings by Münstermann et al. <cit.> show that the effect of standardization is conditional to the industry and type of process in question. Specifically, they find that process standardization only significantly influences process performance in the service industry and for companies that can be classified as analyzers <cit.>.
Furthermore, studies that measure the complexity of processes and their corresponding performance rely on perceptual measures that can cause several important validity issues. First, there are important validity concerns that need to be taken into account when developing perceptual measures for organizational and process performance <cit.>. Second, there are perceptual differences that need to be considered when interpreting results from perceptual measures. For example, the hierarchical erosion effect <cit.> describes that perceptions at lower levels of an organization's hierarchy tend to be less favourable. Similarly, Pentland <cit.> shows that process stakeholders' perception and actual enactment of process variation diverge substantially. Third, the use of perceptual measures to determine changes in properties of business processes is inefficient. In order to assess the effect of an improvement initiative on process performance, one would have to survey process stakeholders again and again. For example, to assess the success of a standardization initiative, a company would have to survey process stakeholders at least twice: prior to the initiative and after it. Thus, studies based on perceptual measures are often restricted to data collected at a single point in time and only provide a static perspective of processes and their performance.
In light of these limitations, several authors have proposed to use process mining to move from opinion-based to evidence-based measures for business processes <cit.>. In this regard, the studies by <cit.>,<cit.> are the first to define measurability of process performance indicators and evaluate process redesign best practices based on event logs, respectively.
In the following, we develop an evidence-based and time-sensitive measure for process complexity based on the recent work by <cit.>. As this measure is based on graph entropy it is precise and allows researchers and process managers to quantify process complexity in a comprehensive way at any given point in time. In addition, it allows to continuously monitor how process complexity changes over time.
We further evaluate the measure by applying it to a set of event logs from the Business Process Intelligence (BPI) Challenge allowing us to closely examine the relationship between process complexity and process performance.
§ APPROACH
In this section, we describe our approach. First, we introduce the notion of forgetting, which allows us to weight events in the event log differently based on their time of occurrence. We then prepare the dataset by splitting it into time periods, performing complexity and performance measurements, and filtering out outliers. Finally, we automatically build regression models and systematically reduce the number of variables in them.
All computations were performed on a laptop with an Intel® Core™ i7-8565U CPU @ 4.60 GHz x 4 and 16 GB of DDR4 RAM, Linux kernel 4.15.0-88-generic 64-bit version, Python version 3.8.10 and R version 4.0.3. The code for complexity and performance measurement as well as the data are available on GitHub[<https://github.com/MaxVidgof/process-complexity>][<https://github.com/MaxVidgof/complexity-data>].
§.§ Forgetting
An important concern for prediction models is the potential evolution of the data-generation mechanism over time <cit.>. The available measures described in Section <ref>, such as sequence entropy and normalized sequence entropy, rely on counting the events distributed over different partitions of an automaton. However, they weight all events equally.
This can lead to undesirably high influence of older complex execution paths on current complexity measurements.
Here, we consider the idea of forgetting, meaning that events that happened earlier should add less to the sequence entropy than more recent ones. To this end,
we assign a weight to each event based on its timestamp, or, to be more precise, based on the time difference between each event and the most recent event in the log. Thus, the older the event, the more it will be discounted.
There are two ways of doing so. The first, naïve way, is calculating the weight linearly as in Formula <ref>. Thus, this method is called linear forgetting. Sequence entropy with linear forgetting can be then computed similarly to the original sequence entropy by summing up the weights of the events instead of counting them.
w_l(e) = 1 - ts_max - ts(e)/ts_max-ts_min
While linear forgetting provides a first glimpse of how forgetting can be incorporated into sequence entropy, it has a number of problems, all of which are connected to the weight assignment. First, the weight of the earliest observed event is 0, meaning the contribution of this event to process complexity is disregarded. This is an inadequate solution. Second, it implies a linear nature of forgetting itself, which does not reflect reality closely enough.
Thus, we introduce a more advanced method – exponential forgetting. It is similar to the first method, the only difference being a slightly more complex weighting Formula <ref>.
w_e(e) = exp(-kts_max-ts(e)/ts_max-ts_min)
With such weighting, the weight of the most recent event is 1 and earlier events have decreasing weights that never reach 0. In addition, the forgetting coefficient k>0 is introduced. It enables further control over the contribution of the older events. The larger the coefficient, the less the weight of the event. The coefficient is considered to be 1 by default and in this paper we proceed with this default value. If it is desired to decrease the weight of older events even more, a larger coefficient k>1 can be set. In the opposite case, one should use 0<k<1.
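A sketch of the weighting step in Python (function name ours; timestamps are assumed numeric, e.g. seconds since epoch, and ts_max/ts_min refer to the log partition under consideration):

import numpy as np

def exponential_weights(timestamps, k=1.0):
    """Weight w_e(e) = exp(-k * (ts_max - ts(e)) / (ts_max - ts_min));
    the most recent event gets weight 1."""
    ts = np.asarray(timestamps, dtype=float)
    span = ts.max() - ts.min()
    if span == 0:                     # all events share one timestamp
        return np.ones_like(ts)
    return np.exp(-k * (ts.max() - ts) / span)

Sequence entropy with exponential forgetting is then obtained by summing these weights over the automaton partitions instead of counting events.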
§.§ Data preparation
Our dataset comprises 14 publicly available real-life event logs from Business Process Intelligence Challenge (BPIC) <cit.>. In order to use them for statistical analysis, we apply the following procedure. First, we split each event log into time periods. Then, we measure process performance and complexity for each period. Afterwards, we create a merged dataset with all event logs and add an industry label to specify which industry the process belongs to. Finally, we remove the outliers. In this section, we describe these steps in more detail.
Time periods. We start by splitting the event logs into time periods. First, we extract the minimum and maximum timestamp of the log events. We then set the month of the earliest event to be the starting period and the month of the last event to be the end period. Afterwards, we split the event log into months using an intersecting filter, as outlined in <cit.>. The filter assigns to a period all traces that started before or during that period and ended during or after it. In other words, it selects all traces active in a given period. The choice of this filter further entails that a trace can be assigned to multiple periods.
Theoretically, it is possible to choose any granularity level at this point. I.e., we could choose shorter time periods like weeks or even days, but also longer ones like years. In any case, the choice of the time interval mostly depends on case duration: e.g., setting time periods as granular as weeks when a process instance takes half a year on average will only increase the number of data points without providing any additional value.
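As an illustration, the intersecting filter amounts to a few lines of pandas; the sketch below assumes an event log as a DataFrame with the XES-style columns case:concept:name and time:timestamp used by PM4Py, and the helper name is ours:

import pandas as pd

def split_into_months(df):
    # One sub-log per month; a trace lands in every month it was active in
    spans = df.groupby('case:concept:name')['time:timestamp'].agg(['min', 'max'])
    months = pd.period_range(spans['min'].min().to_period('M'),
                             spans['max'].max().to_period('M'), freq='M')
    sublogs = {}
    for m in months:
        active = spans[(spans['min'].dt.to_period('M') <= m) &
                       (spans['max'].dt.to_period('M') >= m)].index
        sublogs[m] = df[df['case:concept:name'].isin(active)]
    return sublogs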
Complexity measurement. For each time period, we measure the complexity of the process, treating traces in every period as separate event logs. We use all metrics defined in Section <ref> and presented in Table <ref>. Furthermore, we add forgetting to both simple and enriched sequence entropy, as outlined in Section <ref>. Note that when calculating entropy metrics with forgetting, only a log partition (one month in our case) is considered, the minimal and maximal timestamp refer to the first and last event in this partition, not in the entire event log. It is also worth noting that from this point on we treat entropy metrics (both simple and enriched) as one group, which will be important in future steps.
In addition, we also measure a set of what we called generic metrics. These are the metrics that can be measured out of the box by PM4Py[<https://pm4py.fit.fraunhofer.de/>] and include number of cases, number of activity repetitions, among others. We calculate a total of 38 complexity measurements for each time period.
Performance measurement. As already discussed, there are various ways to assess process performance, including time and cost dimensions. However, most publicly available event logs lack cost data, as is the case for the event logs we chose for analysis. We thus focus on measuring process time as an indicator for process performance. More specifically, we examine the throughput time of the respective processes. While using cycle time could potentially be more insightful, many event logs do not contain information about starting timestamps of activities, thus making it impossible to calculate cycle time. Throughput time, in contrast, can be easily calculated for any event log.
For each time period, we calculate the median throughput time of all traces in that period. While average throughput time might seem a more intuitive measure, median throughput time is more robust to outliers.
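On such a monthly sub-log, the performance measurement is then a short aggregation (same assumed column names as above):

def median_throughput(sublog):
    # Median trace duration (throughput time) in one period, in hours
    dur = sublog.groupby('case:concept:name')['time:timestamp'].agg(
        lambda ts: (ts.max() - ts.min()).total_seconds() / 3600)
    return dur.median()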
Combined dataset. After calculating the measurements for all logs, we combine them in a single dataset. In this dataset, each row consists of the originating event log, time period and corresponding measurements.
We also want to control for industry-specific process characteristics that may influence performance but will not be captured by the complexity metrics.
Thus, we also introduce the variable industry that specifies which industry the event log belongs to according to the SIC division [<https://en.wikipedia.org/wiki/Standard_Industrial_Classification>].
Following the classification, we assigned BPIC 2011 <cit.> to healthcare, BPIC 2015 <cit.> and 2018 <cit.> to public administration, BPIC 2017 <cit.> to finance, BPIC 2019 <cit.> to manufacturing, and BPIC 2020 <cit.> to education.
Outlier removal. The last step of our data preparation procedure is the removal of outlier periods. At the beginning and at the end of each event log, there are periods that contain considerably fewer traces than the rest of the log. Our assumption is that this is a by-product of data extraction. Consider the following case: if it is decided to extract all traces from January until December of year Y, then all traces that were ongoing in this time period will be extracted. However, some of them might have started earlier than January and some also ended later than December. It is indeed better to keep those traces in full rather than trim them (which might result in removing the start or end events and harm process discovery) or remove them entirely (in which case we would not have full information about resource usage). Still, in this case the extracted log will contain events occurring (at least) in years Y-1 and Y+1.
For our approach, however, this is critical as this produces periods having not all traces, which, in turn, reduces the overall data quality. Thus, we filter out these periods either based on the event log description or based on the number of cases.
Note that we only remove outliers on the level of time periods, not individual traces.
The resulting dataset and some descriptive statistics are presented in Table <ref>.
§.§ Regression analysis
After data preparation, we can now continue with building statistical models to explain throughput time based on process complexity. We start with two sets of independent variables: one with and one without industry as a dummy variable.
For each set, we have the following procedure. First, we build the models automatically. Then, we reduce model size in terms of number of variables in two steps such that we are left with simple yet powerful models.
Note that we only consider linear combination of independent variables in this work.
The remainder of the section describes the procedure in more detail.
Independent variables. We use two sets of independent variables: complexity metrics with and without industry. The reason for including industry is to account for effects on throughput time that are in the nature of the specific process and do not depend on complexity of its execution sequences. On the other side, however, it is interesting whether process performance can be explained purely in terms of theory-backed complexity measures.
Automated model selection. We use automated model selection procedures to select the best regression models based on Akaike Information Criterion (AIC). We use three directions: forward, backward and both. With forward selection, we start with a small set of variables (only industry if it is used or empty set of variables otherwise) and then add new variables one by one. At each step, the model with the lowest AIC is selected for the further step. The procedure stops if adding more variables does not decrease AIC or if all variables are already included. In the backward direction, we start from the model having all variables and remove them, also in a stepwise manner. Using both directions, we start from a simple model and then at each step we can either add or remove a variable, depending on what yields the best AIC.
As a result, we get 3 models for each of the 2 independent variable setups.
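The selection itself was run in R (stepwise selection by AIC); for illustration, a minimal Python equivalent of the forward direction with statsmodels — function name and details ours — looks as follows:

import numpy as np
import statsmodels.api as sm

def forward_aic(y, X):
    """Greedy forward selection minimizing AIC over the columns of X."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones((len(y), 1))).fit().aic   # intercept-only start
    while remaining:
        trials = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic
                  for c in remaining}
        cand, aic = min(trials.items(), key=lambda kv: kv[1])
        if aic >= best_aic:                                # no improvement: stop
            break
        selected.append(cand); remaining.remove(cand); best_aic = aic
    return selected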
Significant variables. While the models produced in the previous step tend to have high explanatory power, they include a large number of independent variables, making them difficult to interpret and to use in practice. However, these models can often be further reduced in terms of the used variables. As a first step in this reduction, we remove all non-significant variables, i.e. variables with p-value larger than 0.001, from the models. We remove all variables that are not highly significant in one step. Interestingly, after this, some other variables in the model become less significant as well, thus we repeat the procedure until all the variables in the model are highly significant. In some cases, this procedure allows us to considerably reduce the size of the models, while keeping the explanatory power mostly unchanged. There are, however, cases, where the reduction leads to considerably lower explanatory power.
Minimal models. Finally, we create minimal models. I.e., we further reduce the size of the models, such that at most one independent variable from each of the five categories (size, variation, distance, entropy, generic) is left. When selecting among variables in one category, the one with the lowest p-value is taken. In case two variables have the same p-value, the one that yields higher R-squared if left in the model is selected.
§ RESULTS
In this section, we present our results. We start with automatically generated models that include industry as dummy variable. We present summaries of full models, significant models as well as minimized models. Then, we show the best of the minimized models in terms of R-squared. Afterwards, we present models that use only theoretical variables following the same structure.
§.§ Full metrics
We started with a model where throughput time depends on industry and automatically added complexity metrics. The best model was achieved with backward selection. It included 23 variables and had an R-squared of 0.9566887. The other models were very similar: forward selection also produced a model with 23 variables and a practically identical R-squared of 0.9566887; selection in both directions produced a slightly smaller model – 18 variables – with a still similarly high R-squared of 0.9556369.
Restricting the models to only significant variables allowed us to massively reduce their complexity: to 11 variables for forward and backward selection and to 10 for selection in both directions. The explanatory power, however, decreased only marginally, to roughly 0.94.
While being rather small, these models still contained some redundancy. They contained multiple variables belonging to the same categories of complexity defined in Section <ref>. In most cases, variables belonging to the same category were highly correlated, which is not surprising given they measure the same aspects of complexity. When removing such redundancy by leaving only one variable per category, we arrived at two models: the one resulting from minimizing the forward selection model had 6 variables and R-squared of almost 0.92, and the one resulting from minimizing the backward selection model had only 5 variables because the generic variables were not among the significant variables, and had R-squared of 0.87. Minimizing the model resulting from selecting in both directions resulted in the same model as from forward selection and thus was dropped out.
The summary of the models can be viewed in Table <ref>.
The best of the minimal models was the model resulting from forward selection. It is presented in more detail in Table <ref>. Note that while several possible values for the industry variable are present (finance, healthcare, manufacturing and public), only one of them is present in the formula for each observation. The default value is the remaining industry, education; thus, in the case of a process in education, no industry variable should be considered for the estimation.
§.§ Theoretical metrics
While the models presented above explain most of the variance in throughput time, they heavily rely on the industry variable that accounts for industry-specific process characteristics. However, we see that relying only on theoretical variables, i.e. only complexity variables, without any adjustments, still gives valuable results.
First, we can see that some of the automatically generated models are in fact even better than the ones shown in the previous section. Namely, the models achieved with forward selection and selection in both directions have slightly higher R-squared (0.969 vs 0.955) and at the same time have less variables (19 and 15 vs 23 and 18, respectively). The model generated with backward selection is slightly worse, still on par with its counterpart.
Reducing the models to significant variables only gave differing results, some of them very encouraging. Indeed, the model with forward selection could be reduced to only 9 variables while still having an R-squared of 0.94. For the other two models, the results are also very good, yet not that impressive. For instance, the model with backward selection could be reduced from 26 to 18 variables while maintaining its R-squared of over 0.94, but it is still very large. The model with selection in both directions could be reduced to only 7 variables, however, at the price of a considerably lower R-squared of 0.82.
The models could be reduced even further, with the smallest minimal model containing only 2 variables. However, the explanatory power of such models barely reaches 0.8 in the best case. The models are summarized in Table <ref>. The best minimal model containing only theoretical variables is presented in more detail in Table <ref>. Interestingly, the significant model for forward selection only contained complexity metrics from 2 categories, thus the minimal model has only 2 variables.
§ DISCUSSION
§.§ Implications
During this work, we have made some interesting observations. First of all, we see that industry alone explains 80% of the variance in the dependent variable, throughput time. This means that processes in different industries and in different companies are so different in their nature that knowing where a process is executed allows us to infer a lot about its throughput time without considering its complexity at all.
However, adding the complexity dimension on top of the industry allows us to gain even more insightful information, explaining up to 95.6% of variation in throughput time. One can look at it from different perspectives. On the one hand, when having 80% of the output explained by one categorical variable that does not even need any further computation, one can say that all possible additions to it are only marginal and are not worth considering. On the other hand, being able to explain more than 95% of variance in throughput time is a valuable capability that is worth the effort. In addition, there is a compromise solution with the minimal models. One can still achieve remarkable explanatory power using only a handful of metrics: 5 complexity metrics on top of industry can explain roughly 92% of variance.
Despite having such high explanatory power, models containing industry as an independent variable have received criticism, as the observations used to build them only consider one or two processes per industry and are thus not necessarily representative. In the light of this criticism, we also developed models explaining throughput time solely by the complexity of the corresponding processes. The good news is that these theoretical variables successfully compensate for the information gained by industry. After all, the industry a process belongs to is not something that directly influences process performance by itself but rather a factor that contributes to how the process is set up; thus, it is not surprising that these differences are (at least to some extent) visible in the complexity of the process.
These models with theoretical variables achieve similar results in terms of R-squared while having slightly smaller variable counts. Interestingly, the full models that were generated in the first step have even slightly higher R-squared. This resulted from different starting points: for the models with industry, the lower boundary for model selection was a model already containing the industry because it was considered a baseline. The theoretical models, instead, had a constant as their lower boundary. The interpretation of this is that if we set no boundaries and allow (but do not force) selecting the industry variable, it might be the case that the best models will still not contain it, which even better supports our idea of being able to explain process performance using its complexity only.
It is also interesting to look at these models in more details because they are structurally different from the ones including industry. In the first step, these two kinds of models are similar in terms of both the number of variables and R-squared.
In the second step, where we restrict the models to only containing significant variables, some differences become visible. Theoretical models achieved with forward selection and selection in both directions have slightly smaller R-squared than their counterparts including industry. However, this can be attributed to just having also slightly smaller number of variables. The model achieved with backward selection, however, does not fit the pattern. Its full version had a lot of significant variables, thus it could not be reduced much, which also allowed it to keep most of its R-squared.
The most interesting differences become visible in the last reduction step, where we only choose one variable per category. Theoretical models contain a more homogeneous set of variables, i.e. more variables from one category, while some categories can be missing entirely. Thus, reducing them to minimal models yields much smaller (2-4 variables) but also much less powerful (R-squared 0.63-0.79) models. Note that models containing industry cannot have such low values at all, as industry alone already has an R-squared of 0.8.
Up to this point, our goal was to develop the simplest yet most powerful explanatory models. Having achieved this, we asked ourselves whether we can tweak the models a bit further to gain more explanatory power while not increasing their complexity by much. The first approach we tried was to add interaction terms between the variables in our models. This, however, was not very fruitful. Adding pairwise interaction terms between all variables in the models did not improve R-squared. Adding all possible interaction terms between two but also more variables did increase the R-squared (for instance, we achieved an R-squared of 0.96 with only 6 variables for the model including industry); however, such terms are very difficult to interpret.
Another approach that we took was clustering the event logs based on their median throughput times and developing separate models for each cluster. With this approach, we could achieve slightly better (or smaller) models in the first step. However, R-squared falls drastically when we try to minimize the models.
The last observation we made, also going in the direction of clustering, is that in the end processes are different, and while we can explain a lot of their variance in terms of complexity, there is no one-size-fits-all solution.
Our results and models should thus be considered a toolbox, and practitioners should analyze which exact variables make sense for their processes.
§.§ Future Work
We see several promising avenues for future research. First, our quantitative analysis of throughput time can be extended in several directions. On the one hand, future work can use principal component analysis to cluster the independent variables. This would not only reduce model size, but might also lead to interesting insights into how different theoretical variables can be empirically grouped together. On the other hand, the analysis presented in this paper can be complemented by the prediction of throughput time. We deem our models a suitable starting point for such an endeavour.
Second, behavioral studies can investigate how process complexity develops over time, for example, how process complexity is reduced or increased in the course of business process standardization initiatives. Such a study could focus on the specific actions taken by management and unpack how they influence process performance and overall complexity. We deem such studies particularly fruitful if they can complement insights from event logs with detailed interviews with key stakeholders, such as managers and process experts.
Third, more generally, there are plenty of opportunities for behavioral business process research due to the increasing availability of digital trace data <cit.>. Future research can make use of digital trace data from event logs to investigate how business processes change over time, contributing to theory on business process change and routine dynamics <cit.>.
§ CONCLUSION
In this paper, we reported on a study in which we empirically examined the link between process complexity and throughput time. Based on 14 event logs and 38 different process complexity metrics, we created various statistical models that explain the throughput time of business processes. Our models are able to explain a large share of the variance in the throughput time, reaching R-squared values of up to 0.96. Our results provide important implications for research on process complexity and process standardization. Practitioners can use our implementation of the different complexity measures to monitor their processes.
|
http://arxiv.org/abs/2307.04723v2 | 20230710173354 | Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer | [
"Minxuan He",
"Daohan Wang"
] | hep-ph | [
"hep-ph"
] |
e1 e-mail: [email protected]
e2 e-mail: [email protected]
Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, PR China
University of Chinese Academy of Sciences, Beijing 100049, PR China
Department of Physics, Konkuk University, Seoul 05029, Republic of Korea
Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer
Minxuan He^e1 (addr1, addr2)
Daohan Wang^e2 (addr3)
==========================================================================
Jet tagging is a crucial classification task in high energy physics. Recently the performance of jet tagging has been significantly improved by the application of deep learning techniques. In this work, we propose the Particle Dual Attention Transformer (P-DAT) for jet tagging, a new transformer architecture which captures both global and local information simultaneously. Based on the point cloud representation, we introduce the Channel Attention module to the point cloud transformer and incorporate both the pairwise particle interactions and the pairwise jet feature interactions in the attention mechanism. We demonstrate the effectiveness of the P-DAT architecture in the classic top tagging and quark-gluon discrimination tasks, achieving competitive performance compared to other benchmark strategies.
§ INTRODUCTION
In high-energy physics experiments, tagging jets, which are collimated sprays of particles produced in high-energy collisions, is a crucial task for discovering new physics beyond the Standard Model. Jet tagging involves distinguishing boosted heavy particle jets from QCD-initiated quark/gluon jets. Since jets initiated by different particles exhibit different characteristics, two key issues arise: how to represent a jet and how to analyze its representation. Conventionally, jet tagging has been performed using hand-crafted jet substructure variables based on physics motivation. Nevertheless, these methods can often fall short in capturing intricate patterns and correlations present in the raw data.
Over the past decade, deep learning approaches have been extensively adopted to enhance jet tagging performance<cit.>. Various jet representations have been proposed, including image-based representations using Convolutional Neural Networks (CNN)<cit.>, sequence-based representations with Recurrent Neural Networks<cit.>, tree-based representations with Recursive Neural Networks<cit.> and graph-based representations with Graph Neural Networks (GNN)<cit.>. More recently, one representation approach that has gained significant attention is to view the set of constituent particles inside a jet as points in a point cloud. Point clouds are used to represent a set of objects in an unordered manner, described in a defined space, and are commonly utilized in various fields such as self-driving vehicles, robotics, and augmented reality. By adopting this approach, each jet can be interpreted as a particle cloud, which treats a jet as a permutation-invariant set of particles, allowing us to extract meaningful information with deep learning methods. Based on the particle cloud representation, several deep learning architectures have been proposed, including the Deep Set Framework<cit.>, ABCNet<cit.>, LorentzNet<cit.> and ParticleNet<cit.>. The Deep Set Framework provides a comprehensive explanation of how to parametrize permutation invariant functions for inputs with variable lengths, taking into consideration both infrared and collinear safety. Furthermore, it offers valuable insights into the nature of the features learned by neural networks. ParticleNet adapts the Dynamic Graph CNN architecture<cit.>, while ABCNet takes advantage of attention mechanisms to enhance local feature extraction. LorentzNet focuses more on incorporating inductive biases derived from physics principles into the architecture design, utilizing an efficient Minkowski dot product attention mechanism. All of these architectures realize substantial performance improvements on top tagging and quark/gluon discrimination benchmarks.
Over the past few years, attention mechanisms have become a powerful tool for capturing intricate patterns in sequential and spatial data. The Transformer architecture<cit.>, which leverages attention mechanisms, has been highly successful in natural language processing and in computer vision tasks such as image recognition. Notably, the Vision Transformer (ViT)<cit.>, initially designed for computer vision tasks, has demonstrated state-of-the-art performance on various image classification benchmarks. However, when dealing with point cloud representations, which inherently lack a specific order, modifications to the original Transformer structure are required to establish a self-attention operation that is invariant to input permutations. To address these issues, a recent approach called the Point Cloud Transformer (PCT)<cit.> was proposed, which entails passing input points through a feature extractor to create a high-dimensional representation of particle features. The transformed data is then passed through a self-attention module that introduces attention coefficients for each pair of particles. To evaluate PCT's effectiveness in the context of a high-energy physics task, specifically jet tagging, PCT was compared with other benchmark implementations using three different public datasets. PCT shares a similar concept with ABCNet's attention mechanism, employing a self-attention layer to capture the importance of relationships between all particles in the dataset. Another notable approach is the Particle Transformer<cit.>, which incorporates pairwise particle interactions within the attention mechanism and achieves higher tagging performance than a plain Transformer, surpassing the previous state-of-the-art, ParticleNet, by a large margin.
In recent studies, the Dual Attention Vision Transformer (DaViT)<cit.> has exhibited promising results for image classification. DaViT introduces a dual attention mechanism, comprising spatial window attention and channel group attention, enabling the effective capture of both global and local features in images; these two self-attentions are demonstrated to complement each other. In this paper, we introduce the Channel Attention module to the Point Cloud Transformer and incorporate the pairwise particle interactions and the pairwise jet feature interactions to build a new network structure, called P-DAT. On the one hand, the Channel Attention module can grasp comprehensive spatial interactions and representations by taking into account all spatial locations while computing attention scores between channels. In this way, the P-DAT can combine both the local and global information of the jet representation for jet tagging. On the other hand, the pairwise interaction features designed from physics principles can modify the dot-product attention weights, thus increasing the expressiveness of the attention mechanism. We evaluate the performance of P-DAT on top tagging and quark-gluon discrimination tasks and compare it against other baseline models. Our analysis demonstrates the effectiveness of P-DAT in jet tagging and highlights its potential for future applications in high-energy physics experiments.
This article is organized as follows. In Section <ref>, we introduce the Particle Dual Attention Transformer for jet tagging and describe the key features of the model architecture. We also provide details of the training and validation process. In Section <ref>, we present and discuss the numerical results obtained for the top tagging and quark/gluon discrimination tasks. Finally, our conclusions are presented in Section <ref>.
§ MODEL ARCHITECTURE
The focus of this paper is to introduce the Particle Dual Attention Transformer (P-DAT), which serves as a new benchmark approach for jet tagging. Based on the point cloud representation, we regard each constituent particle as a point in the η-ϕ space and the whole jet as a point cloud.
The whole model architecture is presented in Figure <ref>.
The P-DAT architecture is composed of 5 main building blocks, namely the feature extractor, the particle self attention layers, the channel self attention layers, the class attention layers and the MLP. In order to process a jet of P particles, the P-DAT requires three inputs: the jet dataset, the particle interaction matrix and the jet feature interaction matrix derived from the kinematic information of each particle inside the jet. First of all, the feature extractor is employed to transform the input jet dataset from P× 10 to a higher dimensional representation P× N. As illustrated in Fig.<ref>(left), the feature extractor block contains two parts. The first part incorporates an EdgeConv operation<cit.> followed by 3 two-dimensional convolutional (Conv2D) layers and an average pooling operation across all neighbors of each particle. The EdgeConv operation adopts a k-nearest neighbors approach with k=20, defining a vicinity for each particle inside the jet based on Δ R = √(Δη^2 + Δϕ^2) in the η-ϕ space, to extract local information for each particle. To ensure permutation invariance among particles, all convolutional layers are implemented with stride and kernel size of 1 and are followed by a batch normalization operation and a GELU activation function. The second part of the feature extractor consists of a 3-layer MLP with (128,128,128) nodes per layer and GELU nonlinearity to handle negative inputs. BN and LN operations are used for normalization between layers. Finally, the outputs of these two parts are concatenated to obtain the final output. This approach enables the extraction of input particle embeddings through both linear projection and local neighborhood mapping. Furthermore, we introduce a particle interaction matrix and a channel interaction matrix, both of which are designed based on physics principles and incorporated into the self attention modules. For the particle interaction matrix, we use a 3-layer 2D convolution with (32,16,8) channels with stride and kernel size of 1 to map the particle interaction matrix to a new embedding P× P × N_h, where N_h is the number of heads in the particle self attention module, explained below. As for the channel interaction matrix, an upsampling operation and a 3-layer 2D convolution are applied to map the channel interaction matrix to a higher dimensional representation N× N, with N the input particle embedding dimension.
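As a concrete illustration, the following is a condensed PyTorch sketch of this two-branch extractor. The Conv2D channel widths of the EdgeConv branch and the final fusion layer are our assumptions, since the text only fixes k=20 for the neighborhood and the (128,128,128) nodes of the MLP branch; masking of zero-padded particles is omitted, and the production implementation may differ.

import torch
import torch.nn as nn

def knn(eta_phi, k):
    """Indices (B, P, k) of the k nearest neighbours in Delta R (self excluded)."""
    d2 = torch.cdist(eta_phi, eta_phi) ** 2                 # (B, P, P)
    return d2.topk(k + 1, dim=-1, largest=False).indices[..., 1:]

class FeatureExtractor(nn.Module):
    def __init__(self, in_dim=10, k=20, out_dim=64):
        super().__init__()
        self.k = k
        self.edge = nn.Sequential(   # three Conv2D layers, stride and kernel size 1
            nn.Conv2d(2 * in_dim, 64, 1), nn.BatchNorm2d(64), nn.GELU(),
            nn.Conv2d(64, 128, 1), nn.BatchNorm2d(128), nn.GELU(),
            nn.Conv2d(128, 128, 1), nn.BatchNorm2d(128), nn.GELU())
        self.mlp = nn.Sequential(    # (128, 128, 128) nodes with GELU nonlinearity
            nn.LayerNorm(in_dim), nn.Linear(in_dim, 128), nn.GELU(),
            nn.LayerNorm(128), nn.Linear(128, 128), nn.GELU(),
            nn.LayerNorm(128), nn.Linear(128, 128))
        self.proj = nn.Linear(128 + 128, out_dim)  # assumed fusion of the branches

    def forward(self, x, eta_phi):
        B, P, C = x.shape
        idx = knn(eta_phi, self.k)                                     # (B, P, k)
        nbr = torch.gather(x.unsqueeze(1).expand(B, P, P, C), 2,
                           idx.unsqueeze(-1).expand(B, P, self.k, C))  # (B, P, k, C)
        edge = torch.cat([x.unsqueeze(2).expand_as(nbr), nbr - x.unsqueeze(2)], dim=-1)
        local = self.edge(edge.permute(0, 3, 1, 2)).mean(dim=-1)       # avg over k
        return self.proj(torch.cat([local.permute(0, 2, 1), self.mlp(x)], dim=-1))

fe = FeatureExtractor()
x = torch.randn(2, 100, 10)        # (batch, particles, features)
print(fe(x, x[..., 7:9]).shape)    # -> torch.Size([2, 100, 64])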
The second primary building block is the particle self-attention block, which aims to establish the relationship between all particles within the jet using an attention mechanism. As presented in Fig.<ref>, three matrices, which are called query (Q), key (K), and value (V), are built from linear transformations of the original inputs. Attention weights are computed by matrix multiplication between Q and K, representing the matching between them. Similar to the Particle Transformer work<cit.>, we incorporate the particle interaction matrix U_1 as a bias term to enhance the scaled dot-product attention. This incorporation of particle interaction features, designed from physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same U_1 is shared across the two particle attention blocks. After normalization, these attention weights reflect the weighted importance between each pair of particles. The self-attention is then obtained by the weighted elements of V, which result from multiplying the attention weights and the value matrix. It is important to note that P represents the number of particles, and N denotes the total number of features.
The attention weights are computed as:
𝒜(𝐐, 𝐊, 𝐕) = Concat(head_1,…,head_N_h)
where head_i = Attention(𝐐_i, 𝐊_i, 𝐕_i)
= softmax[𝐐_i(𝐊_i)^T/√(C_h)+𝐔_1]𝐕_i
where 𝐐_i=𝐗_i𝐖_i^Q, 𝐊_i=𝐗_i𝐖_i^K, and 𝐕_i=𝐗_i𝐖_i^V are ℝ^P × C_h dimensional per-head features with N_h heads, 𝐗_i denotes the i-th head of the input feature and 𝐖_i denotes the projection weights of the i-th head for 𝐐, 𝐊, 𝐕, and N = C_h · N_h.
The particle attention block incorporates a LayerNorm (LN) layer both before and after the multi-head attention module. A two-layer MLP, with LN preceding each linear layer and GELU nonlinearity in between, follows the multi-head attention module. Residual connections are applied after the multi-head attention module and the two-layer MLP. In our study, we set N_h=8 and N=64.
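A minimal PyTorch sketch of this biased multi-head self-attention is given below; the tensor shapes follow the text, while the packed QKV projection and the layout of the U_1 bias as (batch, heads, P, P) are our assumptions.

import torch
import torch.nn as nn

class BiasedParticleAttention(nn.Module):
    """Multi-head attention over particles with the interaction matrix U_1
    added as a bias to the scaled dot-product logits, as in the equation above."""
    def __init__(self, dim=64, n_heads=8):
        super().__init__()
        self.h, self.c = n_heads, dim // n_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, u1):
        # x: (B, P, dim); u1: (B, n_heads, P, P) embedded pairwise features
        B, P, _ = x.shape
        q, k, v = self.qkv(x).reshape(B, P, 3, self.h, self.c).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1) / self.c ** 0.5 + u1).softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, P, self.h * self.c)
        return self.out(y)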
The third main building block is the channel self-attention block, as shown in Fig.<ref>. Unlike the particle self-attention block, this block applies attention mechanisms to the jet features, enabling interactions among the channels. To capture global information in the particle dimension, we set the number of heads to 1, where each transposed token represents global information. Consequently, the channel tokens interact with global information across all channels. This global channel attention mechanism is defined as follows:
𝒜(𝐐_i, 𝐊_i, 𝐕_i) = softmax[𝐐_i^T𝐊_i/√(C)+𝐔_2]𝐕_i^T
where 𝐐_i, 𝐊_i, 𝐕_i ∈ℝ^C × P are channel-wise jet-level queries, keys, and values.
Note that although we transpose the tokens in the channel attention block, the projection layers 𝐖 and the scaling factor 1/√(C) are computed along the channel dimension, rather than the particle dimension. Similar as the particle self-attention block, we incorporate the channel interaction matrix U_2 as a bias term to enhance the scaled dot-product attention. This incorporation of jet channel interaction features, designed based on physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same U_2 matrix is shared across the two channel attention blocks. After normalization, the attention weights indicate the weighted importance of each pair of jet features. The self-attention mechanism produces the weighted elements of V, obtained by multiplying the attention weights and the value matrix. Additionally, the channel attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the channel attention module and the two-layer MLP.
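The corresponding single-head channel attention can be sketched in the same style, treating the transposed tokens exactly as in the equation above, with the (N × N) matrix U_2 added to the channel-channel logits; the projection layout is again our assumption.

import torch
import torch.nn as nn

class BiasedChannelAttention(nn.Module):
    """Single-head channel attention: the C channels attend to each other with
    a global receptive field over particles, biased by U_2 of shape (C, C)."""
    def __init__(self, dim=64):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x, u2):
        B, P, C = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)                            # (B, P, C)
        attn = (q.transpose(-2, -1) @ k / C ** 0.5 + u2).softmax(dim=-1)  # (B, C, C)
        y = (attn @ v.transpose(-2, -1)).transpose(-2, -1)                # (B, P, C)
        return self.out(y)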
The fourth main building block is the class attention block, which differs from the particle self-attention block by computing attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism.
This class attention mechanism is defined as follows:
Q = W_q x_class + b_q,
K = W_k z + b_k,
V = W_v z + b_v,
z = [x_class, x^L]
where z = [x_class, x^L] represents the concatenation of the class token and the particle embedding after the last particle attention block, denoted as x^L. In the first class attention block, the class token is obtained by performing max pooling on the output of the second channel attention block across all particles; in the second class attention block, it is obtained by performing average pooling on the same output. Furthermore, the class attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the class attention module and the two-layer MLP.
The last main building block is a 3-layer MLP with (448, 64, 2) nodes, as shown in Fig.<ref>(right). First, the outputs of the particle attention blocks and channel attention blocks are concatenated, followed by an average pooling operation across all particles. Subsequently, the outputs of the class attention blocks are concatenated. Finally, these two sets of outputs are concatenated and fed into the MLP. In addition, a batch normalization operation, the GELU activation function, and a dropout rate of 0.5 are applied to the second layer. The last layer employs a softmax operation to produce the final classification scores.
In summary, the P-DAT is composed of one feature extractor, two particle attention blocks, two channel attention blocks, two class attention blocks and one MLP. The feature extractor's output serves as the input for the first particle attention block. Subsequently, we alternate between the particle attention block and the channel attention block to capture both local fine-grained and global features. A dropout rate of 0.1 is applied to all particle attention blocks and channel attention blocks. As demonstrated in Ref.<cit.>, these two blocks complement each other: the channel attention provides a global receptive field in the particle dimension, enabling the extraction of high-level global jet representations by dynamically fusing features across global channel tokens, while the particle attention refines local representations by facilitating fine-grained interactions among all particles, thereby aiding the modeling of global information in the channel attention. After the second channel attention block, two class attention blocks, which use the max-pooled and average-pooled outputs of the second channel attention block as class tokens, compute the attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism. Finally, the two sets of outputs are concatenated and fed into the MLP, and the resulting representation is normalized using a softmax operation.
The model architecture is implemented in the PYTORCH deep learning framework with the CUDA platform. The training and evaluation steps are accelerated using an NVIDIA GeForce RTX 3070 GPU. We adopt the binary cross-entropy as the loss function. To optimize the model parameters, we employ the AdamW optimizer<cit.> with an initial learning rate of 0.0004, with gradients calculated on mini-batches of 64 training examples. To address the memory issue caused by the large input data, we implemented a strategy of continuously importing and deleting data during the training process. The network is trained for up to 100 epochs, with the learning rate decreasing by a factor of 2 every 10 epochs to a minimum of 10^-6. In addition, we employ the early-stopping technique to prevent over-fitting.
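A minimal sketch of this optimization setup is shown below; the stand-in model, dummy mini-batch and scheduler implementation are illustrative assumptions, and only the learning-rate schedule itself (4e-4, halved every 10 epochs, floored at 1e-6) follows the text.

import torch
import torch.nn as nn

model = nn.Linear(10, 2)          # stand-in for the full P-DAT network
opt = torch.optim.AdamW(model.parameters(), lr=4e-4)
# halve the learning rate every 10 epochs, with a floor of 1e-6
sched = torch.optim.lr_scheduler.LambdaLR(
    opt, lambda epoch: max(0.5 ** (epoch // 10), 1e-6 / 4e-4))
loss_fn = nn.CrossEntropyLoss()   # cross-entropy over the two classes

for epoch in range(100):
    x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))  # dummy mini-batch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    sched.step()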
§ JET CLASSIFICATION
The P-DAT architecture is designed to process input data consisting of the particles inside each jet. To ensure consistency and facilitate meaningful comparisons, we first sorted the particles inside each jet by transverse momentum and employed a maximum of 100 particles per jet. An input jet is truncated if it contains more than 100 particles and zero-padded up to 100 if fewer are present. This selection of 100 particles is sufficient to cover the vast majority of jets contained in all datasets. Each jet is characterized by the 4-momenta of its constituent particles. Based on this information, we reconstructed 10 features for each particle. Additionally, for the quark-gluon dataset, we included the Particle Identification (PID) information as the 11th feature. These features are as follows:
{log E , log |p_x| , log |p_y| , log |p_z| , log p_T , p_T/p_TJ , E/E_J , Δη , Δϕ , Δ R , PID}.
For the pairwise particle interaction matrix, based on Refs.<cit.>, we calculated the following 5 features for any pair of particles a and b, with four-momenta p_a and p_b given by the sums of the four-momenta of all the particles contained in a and b, respectively:
Δ R = √((y_a - y_b)^2 + (ϕ_a - ϕ_b)^2),
k_T = min(p_T,a, p_T,b) Δ R,
z = min(p_T,a, p_T,b) / (p_T,a + p_T,b),
m^2 = (E_a+E_b)^2 - ‖𝐩_a+𝐩_b‖^2,
Δ p_T = p_T,a-p_T,b
where y_i represents the rapidity, ϕ_i denotes the azimuthal angle, p_T,i = (p_x,i^2+p_y,i^2)^1/2 denotes the transverse momentum, and 𝐩_i=(p_x,i, p_y,i, p_z,i) represents the momentum 3-vector with ‖·‖ its norm, for i=a, b. As mentioned in Ref.<cit.>, we take the logarithm and use (lnΔ R, ln k_T, ln z, ln m^2, lnΔ p_T) as the interaction features for each particle pair to avoid the long tail problem.
Apart from the 5 interaction features, we add one more feature for the Quark-Gluon benchmark dataset, defined as δ_i,j, where i and j are the PIDs of particles a and b.
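The five pairwise features can be computed in a vectorized way; below is a NumPy sketch under the assumption that |Δ p_T| is used inside the logarithm and that diagonal (and any zero-padded) entries are masked before taking the log.

import numpy as np

def pairwise_features(p4):
    """(P, 4) array of (E, px, py, pz) -> (P, P, 5) array of
    (ln DeltaR, ln kT, ln z, ln m^2, ln |Delta pT|)."""
    E, px, py, pz = p4.T
    pt = np.hypot(px, py)
    y = 0.5 * np.log((E + pz) / (E - pz))      # rapidity
    phi = np.arctan2(py, px)
    dphi = np.angle(np.exp(1j * (phi[:, None] - phi[None, :])))  # wrap to (-pi, pi]
    dR = np.sqrt((y[:, None] - y[None, :]) ** 2 + dphi ** 2)
    ptmin = np.minimum(pt[:, None], pt[None, :])
    kT = ptmin * dR
    z = ptmin / (pt[:, None] + pt[None, :])
    m2 = ((E[:, None] + E[None, :]) ** 2
          - (px[:, None] + px[None, :]) ** 2
          - (py[:, None] + py[None, :]) ** 2
          - (pz[:, None] + pz[None, :]) ** 2)
    dpt = np.abs(pt[:, None] - pt[None, :])
    # diagonal entries are log(0) = -inf and must be masked in practice
    return np.log(np.stack([dR, kT, z, m2, dpt], axis=-1))

p4 = np.array([[100., 30., 20., 90.], [80., 25., -10., 70.]])  # two toy particles
print(pairwise_features(p4)[0, 1])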
For the pairwise jet feature interaction matrix, we selected 10 typical jet variables. In addition, for the quark-gluon dataset, we incorporated an 11th feature based on the Particle Identification (PID) information. The list of all jet variables used in this study is presented below, and the interaction matrix is constructed based on a straightforward yet effective ratio relationship, as illustrated in Table <ref>.
{ E , p_x , p_y , p_z , p_T , ∑ p_Tf , ∑ E_f , Δη , Δϕ , Δ R , PID}.
To explain the jet feature pairwise interaction matrix more clearly, we now give a detailed description. The first 4 variables represent the four-momentum of the input jet. Specifically, p_T denotes the transverse momentum of the input jet, while ∑ p_Tf and ∑ E_f represent the sums of the transverse momentum fractions and the energy fractions of all the constituent particles inside the input jet, respectively. Additionally, Δη, Δϕ and Δ R correspond to the transverse-momentum-weighted sums of the Δη, Δϕ and Δ R of all the constituent particles inside the input jet, where Δη, Δϕ and Δ R refer to the angular distances between each constituent particle and the input jet. Furthermore, PID denotes the particle species whose summed transverse momentum accounts for the largest proportion of the entire jet transverse momentum. The entire jet feature pairwise interaction matrix is defined as a symmetric block matrix with diagonal ones. For convenience, we name {E , p_x , p_y , p_z , p_T , ∑ p_Tf , ∑ E_f} variable set 1 and {Δη, Δϕ , Δ R} variable set 2.
We build the pairwise interactions among variable set 1 and variable set 2, respectively. Firstly, we employ a ratio relationship to define the interaction between E and { p_x , p_y , p_z , p_T} and the interaction between p_T and { p_x , p_y}, with no interaction between orthogonal components. Additionally, we establish that the interaction between ∑ E_f and E is 1, while no interactions exist between ∑ E_f and any other variables, except for E and PID. Similarly, we define the interaction between ∑ p_Tf and p_T as 1, with no interactions between ∑ p_Tf and any other variables, except for p_T and PID.
Secondly, we apply a ratio relationship to define the interaction between Δ R and {Δη, Δϕ}, while no interaction is specified between {Δη and Δϕ}. Finally, we determine the interactions between PID and all other variables as the ratio of the sum of the corresponding variables of the particles associated with the PID to the variable of the jet.
§.§ Quark/Gluon Discrimination
The Quark-Gluon benchmark dataset<cit.> was generated with Pythia8 without detector simulation. It comprises quark-initiated samples qq→Z→νν+(u,d,s) as signal and gluon-initiated data qq→Z→νν+g as background.
Jet clustering was performed using the anti-kT algorithm with R = 0.4. Only jets with transverse momentum p_T ∈ [500, 550] GeV and rapidity |y| < 1.7 were selected for further analysis. Each particle within the dataset carries not only the four-momentum but also particle identification information, which classifies the particle type as electron, muon, charged hadron, neutral hadron, or photon. The official dataset comprises 1.6M training events, 200k validation events and 200k test events. In this paper, we focused on the leading 100 constituents within each jet, utilizing their four-momenta and particle identification information for training purposes. For jets with fewer than 100 constituents, zero-padding was applied. For each particle, a set of 11 input features was used, based solely on the four-momenta and identification information of the particles clustered within the jet. The accuracy, area under the curve (AUC), and background rejection results are presented in Table <ref>.
§.§ Top Tagging
The benchmark dataset<cit.> used for top tagging comprises hadronic tops as the signal and QCD di-jets as the background. Pythia8<cit.> was employed for event generation, while Delphes<cit.> was utilized for detector simulation. All the particle-flow constituents were clustered into jets using the anti-kT algorithm<cit.> with a radius parameter of R = 0.8. Only jets with transverse momentum p_T ∈ [550, 650] GeV and rapidity |y| < 2 were included in the analysis. The official dataset contains 1.2M training events, 400k validation events and 400k test events. Only the energy-momentum 4-vectors of the particles inside the jets are provided. In this paper, the leading 100 constituent four-momenta of each jet were utilized for training purposes. For jets with fewer than 100 constituents, zero-padding was applied.
For each particle, a set of 10 input features based solely on the four-momenta of the particles clustered inside the jet was utilized. The accuracy, area under the curve (AUC), and background rejection results can be found in Table <ref>.
§ CONCLUSION
This study applies the Particle Dual Attention Transformer as an innovative approach for jet tagging. Specifically, the P-DAT architecture incorporates the Channel Attention module into the Point Cloud Transformer, allowing it to capture jet-level global information and particle-level local information simultaneously.
In addition, we introduce the particle pairwise interactions and the jet feature pairwise interactions. This technique not only enables the extraction of semantic affinities among the particles through a self-attention mechanism and among the jet features through a channel-attention mechanism, but also augments both attention types by combining the physics-motivated pairwise interactions with the machine-learned dot-product attention. We evaluate the P-DAT architecture on the classic top tagging task and the quark-gluon discrimination task and achieve competitive results compared to other benchmark strategies. Moreover, we solved the memory usage problem by importing and deleting data during training. However, the computational cost of using the full pairwise interaction matrix remains unresolved, which could be an interesting direction for future research.
This work is funded by the National Research Foundation of Korea, Grant No. NRF-2022R1A2C1007583.
|
http://arxiv.org/abs/2307.06117v1 | 20230708094335 | A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning | [
"Sandip Maiti",
"Debasish Banerjee",
"Shailesh Chandrasekharan",
"Marina K. Marinkovic"
] | hep-lat | [
"hep-lat",
"cond-mat.str-el",
"hep-th",
"quant-ph"
] |
[email protected]
Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India
Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India
[email protected]
Saha Institute of Nuclear Physics, HBNI, 1/AF Bidhannagar, Kolkata 700064, India
Homi Bhabha National Institute, Training School Complex, Anushaktinagar, Mumbai 400094, India
[email protected]
Department of Physics, Box 90305, Duke University, Durham, North Carolina 27708, USA
[email protected]
Institut für Theoretische Physik, Wolfgang-Pauli-Straße 27, ETH Zürich, 8093 Zürich, Switzerland
We propose a two-dimensional hard core loop-gas model as a way to regularize the asymptotically free
massive continuum quantum field theory that emerges at the BKT transition. Without fine-tuning, our
model can reproduce the universal step-scaling function of the classical lattice XY model in the massive
phase as we approach the phase transition. This is achieved by lowering the fugacity of Fock-vacuum
sites in the loop-gas configuration space to zero in the thermodynamic limit. Some of the universal
quantities at the BKT transition show smaller finite size effects in our model as compared to the
traditional XY model. Our model is a prime example of qubit regularization of an asymptotically free
massive quantum field theory in Euclidean space-time and helps understand how asymptotic freedom can
arise as a relevant perturbation at a decoupled fixed point without fine-tuning.
A qubit regularization of asymptotic freedom at the BKT transition without fine-tuning
Marina K. Marinkovic 0000-0002-9883-7866
August 12, 2023
======================================================================================
The success of the Standard Model of particle physics shows that at a fundamental level, nature is well described by a continuum QFT. Understanding QFT non-perturbatively continues to be an exciting area of research, since defining them in a mathematically unambiguous way can be challenging. Most definitions require some form of short-distance (UV) regularization, which ultimately needs to be removed. Wilson has argued that continuum QFT arise near fixed points of renormalization group flows <cit.>. This has led to the concept of universality, which says that different regularization schemes can lead to the same QFT. Following Wilson, traditional continuum quantum field theories are usually regulated non-perturbatively on a space-time lattice by replacing the continuum quantum fields by lattice quantum fields and constructing a lattice Hamiltonian with a quantum critical point where the long distance lattice physics can be argued to be the desired continuum QFT. However, universality suggests that there is a lot of freedom in choosing the microscopic lattice model to study a particular QFT of interest.
Motivated by this freedom and to study continuum quantum field theories in real time using a quantum computer, the idea of qubit regularization has gained popularity recently <cit.>. Unlike traditional lattice regularization, qubit regularization explores lattice models with a strictly finite local Hilbert space to reproduce the continuum QFT of interest. Euclidean qubit regularization can be viewed as constructing a Euclidean lattice field theory with a discrete and finite local configuration space, that reproduces the continuum Euclidean QFT of interest at a critical point. If the target continuum theory is relativistic, it would be natural to explore Euclidean qubit regularized models that are also symmetric under space-time rotations. However, this is not necessary, since such symmetries can emerge at the appropriate critical point. Lattice models with a finite dimensional Hilbert space that can reproduce continuum QFT of interest were introduced several years ago through the D-theory formalism <cit.> and has been proposed for quantum simulations <cit.>. In contrast to qubit regularization, the D-theory approach allows the local Hilbert space to grow through an additional dimension when necessary. In this sense, qubit regularization can be viewed as the D-theory approach for those QFT where a strictly finite Hilbert space is sufficient to reproduce the desired QFT.
Examples of using qubit regularization to reproduce continuum QFT in the IR are well known. Quantum spin models with a finite local Hilbert space are known to reproduce the physics of classical spin models with an infinite local Hilbert space near Wilson-Fisher fixed points <cit.>. They can also reproduce QFT with topological terms like the Wess-Zumino-Witten theories <cit.>. Gauge fields have been proposed to emerge dynamically at some quantum critical points of simple quantum spin systems <cit.>. From the perspective of Euclidean qubit regularization, recently it was shown that Wilson-Fisher fixed points with O(N) symmetries can be recovered using simple qubit regularized space-time loop models with N+1 degrees of freedom per lattice site <cit.>. Similar loop models have also been shown to produce other interesting critical behavior <cit.>. Loop models are extensions of dimer models, which are also known to describe interesting critical phenomena in the IR <cit.>. All this evidence shows that Euclidean qubit regularization is a natural way to recover continuum QFT that emerge via IR fixed points of lattice models.
A non-trivial question is whether we can also recover the physics of ultraviolet fixed points (UV-FPs) using qubit regularization. In particular, can we recover massive continuum QFT which are free in the UV but contain a marginally relevant coupling? Examples of such AF theories include two-dimensional spin models and four dimensional non-Abelian gauge theories. In the D-theory approach, there is strong evidence that the physics at the UV scale can indeed be recovered exponentially quickly as one increases the extent of the additional dimension <cit.>. Can the Gaussian nature of the UV theory emerge from just a few discrete and finite local lattice degrees of freedom, while the same theory then goes on to reproduce the massive physics in the IR? For this we will need a special type of quantum criticality where three length scales, as sketched in <ref>, emerge. There is a short lattice length scale a, where the non-universal physics depends on the details of the qubit regularization, followed by an intermediate length scale ℓ_UV ≫ a, where the continuum UV physics sets in and the required Gaussian theory emerges. Finally, at long length scales ℓ_IR ≫ ℓ_UV, the non-perturbative massive continuum quantum field theory emerges due to the presence of a marginally relevant coupling in the UV theory. The qubit regularized theory thus reproduces the universal continuum QFT in the whole region ℓ_UV to ℓ_IR. The special quantum critical point must be such that ℓ_UV/a →∞.
Recently, a quantum critical point with these features was discovered in an attempt to find a qubit regularization of the asymptotically free massive non-linear O(3) sigma model in two space-time dimensions in the Hamiltonian formulation <cit.>. Using finite size scaling techniques, it was shown that the qubit regularized model recovers all the three scales. In this paper, we report the discovery of yet another example of a quantum critical point with similar features. In the current case, it is a Euclidean qubit regularization of the asymptotically free massive continuum quantum field theory that arises as one approaches the BKT transition from the massive phase <cit.>. In both these examples, the qubit regularized model is constructed using two decoupled theories and the AF-QFT emerges as a relevant perturbation at a decoupled quantum critical point. The coupling between the theories plays the role of the perturbation that creates the three scales, as illustrated in the RG flow shown in <ref>. An interesting feature of this discovery is that there is no need for fine-tuning to observe some of the universal features of the BKT transition that have been unattainable in practice with other traditional regularizations <cit.>.
The BKT transition is one of the most widely studied classical phase transitions, since it plays an important role in understanding the finite temperature superfluid phase transition of two-dimensional systems <cit.>. One simple lattice model that captures the universal behavior of the physics close to the phase transition is the
classical two-dimensional XY model on a square lattice given by the classical action,
S = -β∑_⟨ ij⟩cos(θ_i-θ_j),
where the lattice field 0≤θ_i < 2π is an angle associated to every space-time lattice site i and ⟨ ij⟩ refers to the nearest neighbor bonds with sites i and j. The lattice field naturally lives in an infinite dimensional Hilbert space of the corresponding one dimensional quantum model. Using high precision Monte Carlo calculations, the BKT transition has been determined to occur at the fine-tuned coupling of β_c ≈ 1.1199(1) <cit.>. The Villain model is another lattice model which is friendlier for analytic calculations and has been used to uncover the role of topological defects in driving the phase transition <cit.>. More recently, topological lattice actions which seem to suppress vortices and anti-vortices but still drive the BKT transition have also been explored <cit.>.
As one approaches the BKT transition from the massive phase, the long distance physics of <ref> is known to be captured by the sine-Gordon model, whose Euclidean action is given by <cit.>,
S = ∫ dx dt [ 1/2t (∂_μθ_1)^2 + t/8π^2 (∂_μθ_2)^2 -
A t/4π^2cosθ_2 ]
where t ≥π/2. The field θ_1(x,t) captures the spin-wave physics while the vortex dynamics is captured by the field θ_2(x,t). The BKT transition in this field theory language occurs at t = π/2 where the cosθ_2 term becomes marginal as one approaches the critical point and the physics is governed by a free Gaussian theory. In this sense, the long distance physics of the lattice XY model, as β is tuned to β_c from smaller values, is an asymptotically free massive Euclidean continuum QFT.
Qubit regularizations of the classical XY-model have been explored recently using various quantum spin formulations <cit.>. Lattice models based on the spin-1 Hilbert space are known to contain rich phase diagrams <cit.>, and quantum field theories that arise at some of the critical points can be different from those that arise at the BKT transition. Also, the presence of a marginally relevant operator at the BKT transition can make the analysis difficult, especially if the location of the critical point is not known. In these cases, it becomes a fitting parameter in the analysis, increasing the difficulty. Since in our model the location of the critical point is known, our model can be analyzed more easily.
The model we consider in this work is a variant of the qubit regularized XY model introduced in Euclidean space recently <cit.>. The model can be viewed as a certain limiting case of the classical lattice XY-model <ref> written in the world-line representation <cit.>, where the bosons are assumed to be hard-core. The partition function of our model is a sum of weights associated with configurations of oriented self-avoiding loops on a square lattice with Fock-vacuum sites. An illustration of the loop configuration is shown as the left figure in <ref>. The main difference between our model in this work and the one introduced previously is that closed loops on a single bond are now allowed. Such loops seemed unnatural in the Hamiltonian framework that motivated the previous study, but seem to have profoundly different features in two dimensions <cit.>. It is also possible to view the loop configurations of our model as a configuration of closed packed oriented dimers on two layers of square lattices. The dimer configuration corresponding to the loop configuration is shown on the right in <ref>. The dimer picture of the partition function arises as a limiting case of a model involving two flavors of staggered fermions, introduced to study the physics of symmetric mass generation <cit.>. In this view point the inter-layer dimers (or Fock vacuum sites) resemble t'Hooft vertices (or instantons) in the fermionic theory. Using this connection, the partition function of our model can be compactly written as the Grassmann integral
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i)
× exp( ∑_⟨ ij⟩( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j))
where on each site i of the square lattice we define four Grassmann variables ψ̅_i, ψ_i, χ̅_i and χ_i. We consider periodic lattices with L sites in each direction. Using the fermion bag approach <cit.>, we can integrate the Grassmann variables and write the partition function as a sum over dimer configurations whose weight is given by λ^N_I where N_I is the number of instantons (or Fock-vacuum sites). Thus, λ plays the role of the fugacity of Fock-vacuum sites. It is easy to verify that the action of our model is invariant under ψ̅_j ψ_j → e^iσ_jθ ψ̅_j ψ_j and χ̅_j χ_j → e^-iσ_jθ χ̅_j χ_j where σ_j = ±1 tracks the parity of the site j. This U(1) symmetry is connected to the BKT transition and in order to track it, the dimers are given an orientation as explained in <ref>.
Using worm algorithms (see <cit.>) we study our model for various values of L and λ.
At λ = 0, one gets two decoupled layers of closed packed dimer models, which is known to be critical <cit.>. The effect of λ≠ 0 was studied several years ago, and it was recognized that there is a massive phase for sufficiently large values of λ <cit.>. However, the scaling of quantities as λ→ 0 was not carefully explored. Recently, the subject was reconsidered, and a crossover phenomenon was observed for small λ as a function of L. An understanding of this crossover was largely left unresolved as a puzzle <cit.>. In this paper, we demonstrate that the observed crossover phenomenon captures the asymptotic freedom of <ref>. We do this by comparing the universal behavior of <ref> with the traditional XY model <ref> near the massive phase of the BKT transition <cit.>.
To compare universal behaviors of <ref> and <ref> we compute the second moment finite size correlation length ξ(L) defined as ξ(L) = √((χ/F)-1)/(2sin(π/L)) (see <cit.>), where χ = G(0) and F = G(2π/L) are defined through the two point correlation function
G(p) = ∑_j e^i p x⟨ O^+_(x,t) O^-_(0,0)⟩.
In the above relation j is the space-time lattice site with coordinates (x,t) and O^+_j, O^-_j are appropriate lattice fields in the two models. In the XY model O^+_j = e^iθ_j, O^-_j = e^-iθ_j, while in the dimer model O^+_j = O^-_j = ψ̅_j ψ_j. We demonstrate that the step-scaling function (SSF) (i.e., the dependence of ξ(2L)/ξ(L) on ξ(L)/L) of the two lattice models shows excellent agreement in the scaling regime ℓ_UV≫ a, in <ref>.
Another interesting universal result at the BKT transition is the value of the helicity modulus, which can be defined using the relation, Υ = ⟨ Q_w^2⟩ where Q_w is the spatial winding number of bosonic worldlines. In the XY model <ref>, it is usually defined using a susceptibility of a twist parameter in the boundary conditions <cit.>. In our model, we can easily compute the winding charge Q_w in each loop configuration illustrated in <ref>. The universal result in the massive phase as we approach the BKT transition is that Υ≈ 2/π in the UV up to exponentially small corrections <cit.>, although in the IR Υ = 0. While it is difficult to obtain the UV value in lattice calculations using the traditional model <ref>, in our model, we can see it emerge nicely at λ=0.01. We demonstrate this in <ref>. Again, as expected, the value of Υ when λ=0 is very different, since it is a theory of free bosons but at a different coupling. Using the different value of the coupling gives Υ ≈ 0.606 <cit.>. Our results provide strong evidence that the AF-QFT at the BKT transition emerges from our dimer model when we take the limit L→∞ followed by λ→ 0. The opposite limit leads to the critical theory of the decoupled dimer model.
Acknowledgments: We are grateful to J. Pinto Barros, S. Bhattacharjee, T. Bhattacharya, H. Liu, A. Sen, H. Singh and U.-J. Wiese for inspiring discussions. We acknowledge use of the computing clusters at SINP, and the access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland under the ETHZ’s share with the project IDs go24 and eth8. Support from the Google Research Scholar Award in Quantum Computing and the Quantum Center at ETH Zurich is gratefully acknowledged. S.C's contribution to this work is based on work supported by the U.S. Department of Energy, Office of Science — High Energy Physics Contract KA2401032 (Triad National Security, LLC Contract Grant No. 89233218CNA000001) to Los Alamos National Laboratory. S.C is supported by a Duke subcontract based on this grant. S.C's work is also supported in part by the U.S. Department of Energy, Office of Science, Nuclear Physics program under Award No. DE-FG02-05ER41368.
Supplementary Material
§ UNIVERSAL VALUES OF Υ FOR Λ = 0 AND Λ≠ 0
In this section we explain the two different values of the helicity modulus Υ for our model when λ=0 and λ→ 0. When λ=0 our model maps into two identical but decoupled layers of closed packed classical dimer models. As has already been explained in the literature (see for example <cit.>), each layer can be mapped to the theory of a free compact scalar field with the action
S = 1/2 t∫ d^2 x (∂_μθ(x))^2.
with t=4π. One can compute Υ starting with <ref>, by noting that the scalar fields have winding number configurations labeled by n_x:
θ(x) = 2 π x n_x/L_x + φ(x),
where φ(x) is a smooth fluctuation that is independent of winding number n_x. The value of the action in each winding sector in a finite space-time volume is then given by
S(n_x) = (2π^2 n_x^2/t) (L_y/L_x) + S_0,
where S_0 is the action from the usual fluctuations in the zero winding number sector. Using L_x = L_y, we can compute Υ using its connection to the average of the square of the winding numbers,
Υ = ⟨ (Q_x)^2 ⟩ = ∑_n_x n_x^2 e^-2π^2 n_x^2/t / ∑_n_x e^-2π^2 n_x^2/t
Numerically evaluating this expression for t=4π we obtain Υ = 0.303426... for each layer of our dimer model. Our value of 0.606852... is due to the presence of two decoupled layers.
In contrast, in the limit λ→ 0, we need to consider the physics at the BKT transition and so we begin with the action
S = ∫ d^2x [ 1/2t̃ (∂_μθ_1)^2 + t̃/8π^2 (∂_μθ_2)^2 -
A t̃/4π^2cosθ_2 ]
and focus at t̃=π/2. At this coupling the last term is irrelevant and Υ gets its dominant contribution from the θ_2 field. In this case we can still use <ref> but need to substitute t = 4π^2/t̃ = 8π. Substituting, we get Υ = 0.636508... which is approximately 2/π.
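Both values follow directly from the winding-number sums above; a short numerical check (truncating the rapidly converging sums at |n_x| ≤ 20) reads:

import numpy as np

def upsilon(t, nmax=20):
    """Helicity modulus <Q_x^2> from the winding sums, truncated at |n_x| <= nmax."""
    n = np.arange(-nmax, nmax + 1)
    w = np.exp(-2 * np.pi**2 * n**2 / t)
    return np.sum(n**2 * w) / np.sum(w)

print(2 * upsilon(4 * np.pi))  # lambda = 0: two decoupled layers, ~0.606852
print(upsilon(8 * np.pi))      # lambda -> 0 (BKT point), ~0.636508 ~ 2/pi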
§ WORM ALGORITHM
In this section, we discuss the worm algorithm we use to simulate the model with the partition function,
Z = ∫ [dψ̅ dψ] [dχ̅ dχ] exp(λ ∑_i ψ̅_i ψ_i χ̅_i χ_i)
× exp( ∑_⟨ ij⟩( ψ̅_i ψ_i ψ̅_j ψ_j + χ̅_i χ_i χ̅_j χ_j))
as introduced in the main paper. These algorithms are well known <cit.>, and can be divided into three parts: Begin, Move, and End; a condensed sketch in code follows the list below.
* Begin: pick a site at random and denote it as tail; there are the following
two possibilities: (A) either it has a bond connected to it on the other layer (which we call
an instanton, or an interlayer dimer), or (B) it has a bond connected to it on the same layer
(which we call a dimer).
* For the case (A), propose to remove the instanton, and put the worm head on the
same site on the other layer, with a probability 1/λ. If accepted, then begin the
worm update; otherwise start over by picking a new site.
* For the case (B), pick the other site to which the dimer is connected as the head,
and begin the worm update.
* Move: Propose to move the worm head to one of the (2D+1) neighbor sites of head
with an equal probability, which can either be on the same layer (2D choices), or on the different
layer (one choice). Denote the proposed new site as site0, and the following possibilities can
occur, provided that site0 is not the tail:
* site0 is on the same layer, and has an instanton connected to it. Propose to
remove the instanton with a probability 1/λ. If accepted, place the head
at site0, but on the different layer.
* site0 is on the same layer, and has a dimer connected to it (joining site0
and y). Move the head to the site y with a probability 1, and simultaneously insert
a dimer between head and site0.
* site0 is on the other layer: propose to create an instanton there. If accepted,
move the head to y on that layer, where y is the other end of the dimer that
was connected to site0.
* End: If at any stage in the algorithm, the site0 is the tail, then propose to end
the worm update. If the site0 = tail is on the same layer, then end the update by putting
a dimer between the head and tail with a probability 1. If, on the other hand, they are
on different layers, the worm update ends with a probability λ, leading to the addition of an
extra instanton.
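The following is a minimal, self-contained Python transcription of these three steps, included only to make the update concrete: acceptance ratios are capped at one (Metropolis), the configuration is stored as a symmetric partner map, and we start from the all-instanton configuration; the production code and its detailed bookkeeping (e.g., measurements during worm evolution) may differ.

import random

L, LAM = 8, 0.353   # linear lattice size (periodic) and instanton fugacity lambda

def nbrs(s):
    x, t = s
    return [((x + 1) % L, t), ((x - 1) % L, t), (x, (t + 1) % L), (x, (t - 1) % L)]

# partner[(layer, site)]: same-layer neighbour -> dimer; same site on the
# other layer -> instanton (Fock-vacuum site).
partner = {(l, (x, t)): (1 - l, (x, t))
           for l in (0, 1) for x in range(L) for t in range(L)}

def worm_update():
    tail = (random.randrange(2), (random.randrange(L), random.randrange(L)))
    mate = partner[tail]
    if mate[1] == tail[1] and random.random() >= min(1.0, 1.0 / LAM):
        return                       # Begin (A) rejected: configuration unchanged
    head = mate
    del partner[tail], partner[head]          # tail and head are the two defects
    while True:
        lay, site = head
        site0 = random.choice([(lay, s) for s in nbrs(site)] + [(1 - lay, site)])
        if site0 == tail:                     # End step
            if lay == tail[0] or random.random() < min(1.0, LAM):
                partner[head], partner[tail] = tail, head
                return
            continue
        mate0 = partner[site0]
        if site0[0] != lay:                   # (c): create an instanton
            if random.random() >= min(1.0, LAM):
                continue
        elif mate0[1] == site0[1]:            # (a): remove site0's instanton
            if random.random() >= min(1.0, 1.0 / LAM):
                continue
        # (b), and accepted (a)/(c): pair head with site0, defect moves to mate0
        del partner[mate0]
        partner[head], partner[site0] = site0, head
        head = mate0

random.seed(1)
for _ in range(200):
    worm_update()
print(sum(k[1] == v[1] for k, v in partner.items()) // 2, "instantons")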
§ EXACT VS MONTE CARLO RESULTS ON A 2 × 2 LATTICE
In this work, we compute two independent fermion bilinear susceptibilities defined as
χ_1 = 1/2V ∑_i≠ j ⟨ψ̅_i ψ_i ψ̅_j ψ_j ⟩,
χ_2 = 1/2V ∑_i≠ j ⟨ψ̅_i ψ_i χ̅_j χ_j ⟩,
where χ_1 is an observable that can be defined even on a single layer, while χ_2 is involves both the layers. When the coupling λ = 0, the two layers are completely decoupled from each other and we get χ_2 = 0. Another quantity we compute is the average density of Fock vacuum sites or inter-layer dimers (which we also view as instantons), defined as
ρ = 1/V∑_i ⟨ψ̅_i ψ_i χ̅_i χ_i ⟩,
where the expectation value is defined as
⟨ O⟩ = 1/Z∫ [𝒟ψ̅𝒟ψ]
[𝒟χ̅𝒟χ] O
e^-S[ψ̅,ψ, χ̅,χ].
Since every site is populated by either a Fock-vacuum site or an intra-layer dimer, the average intra-layer dimer density is not an independent observable. We can always compute it from the Fock vacuum sites (instanton) density ρ.
In order to test our algorithm, we focus on exact results on a 2× 2 lattice. The partition function in this simple case is given by
Z = 64 + 16 λ^2 + λ^4,
while the instanton density and the two independent susceptibilities are given by
ρ = 1/4Z (32 λ^2 + 4 λ^4),
χ_1 = 1/2Z (32 + 4 λ^2),
χ_2 = 1/2Z (8 λ).
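As a quick cross-check of these closed forms, they can be evaluated directly; a minimal sketch (the λ grid is arbitrary):

import numpy as np

lam = np.linspace(0.0, 5.0, 6)
Z = 64 + 16 * lam**2 + lam**4
rho = (32 * lam**2 + 4 * lam**4) / (4 * Z)
chi1 = (32 + 4 * lam**2) / (2 * Z)
chi2 = 8 * lam / (2 * Z)
# rho -> 1 for large lambda; chi2 vanishes at lambda = 0, where the layers decouple
print(np.c_[lam, rho, chi1, chi2])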
Note that ρ is zero when λ = 0 and approaches one for large couplings. Also, as expected χ_2=0 when λ=0. In <ref> we compare results for three different observables, instanton density (ρ), fermion bilinear susceptibility (χ_1), and helicity modulus (Υ) on a 2 × 2 lattice obtained from an exact calculation against the results obtained using the worm algorithm.
Interestingly, when λ≠ 0 we find that both χ_1 and χ_2 become similar as L increases. The difference also becomes smaller as λ increases. We show this behavior in <ref>.
Due to this similarity we only focus on χ_1 in our work.
§ PLOTS OF Ρ AND Χ_1
We have simulated the fermionic XY model at various values of λ on square lattices up to L = 4000 using the worm algorithm described above. For our simulations, after allowing for appropriate thermalization, we recorded between 8 × 10^3 and 48 × 10^3 measurements, each averaged over 2000 worm updates. A comparable number of measurements was also made for the bosonic model.
In <ref>, we plot ρ for various lattice sizes at different values of λ on the left. We note that ρ increases monotonically and approaches the thermodynamic limit by L=160, as shown on the right.
In <ref>, we plot χ_1 as a function of system size, L for different values of λ. When λ is small, we find that our data is consistent with the behavior χ_1 ∼ AL^2-η expected in a critical phase. However, for larger values of λ, the susceptibility begins to saturate as χ_1 ∼ A which means η≈ 2. For λ=0, since the model describes two decoupled layers of closed packed dimer models we expect η=0.5 <cit.>. However, when λ is small, since we expect our model to describe the physics at the BKT transition, we expect η∼ 0.25. This is consistent with our findings.
The values of constant A and η for various values of λ obtained from a fit are given in <ref>.
§ STEP SCALING FUNCTION
In order to argue that the traditional XY model at the BKT transition and the two layer interacting dimer model are equivalent we compute the step scaling function (SSF) in both of them. We refer to the traditional XY model defined through the lattice action
S = -β∑_⟨ ij ⟩cos(θ_i-θ_j),
as the bosonic XY model and the dimer model defined in <ref> as the fermionic XY model. In order to compute the step-scaling function we first compute the second moment correlation length defined in a finite box of size L using the expression
ξ(L) = 1/2sin(π/L)√(χ/F - 1),
where
χ = ∑_i ⟨ O^+_i O^-_0⟩,
F = ∑_i ⟨ O^+_i O^-_0⟩cos(2π x /L),
where i=(x,t) is the space-time lattice site and O^+_i, O^-_i are lattice fields in the two lattice models. In the bosonic XY model, O^+_i = e^iθ_i and O^-_i = e^-iθ_i, while in the fermionic model O^+_i = O^-_i = ψ̅_iψ_i.
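Given the x-summed correlator, the quantities χ, F and ξ(L) can be evaluated in a few lines; a sketch (the correlator array is a placeholder, not our data):

import numpy as np

def xi_second_moment(corr):
    """Second-moment xi(L) from a real correlator corr[x], already summed over t."""
    L = len(corr)
    x = np.arange(L)
    chi = corr.sum()                                   # G(0)
    F = (corr * np.cos(2 * np.pi * x / L)).sum()       # G(2*pi/L)
    return np.sqrt(chi / F - 1.0) / (2.0 * np.sin(np.pi / L))

corr = np.cosh((np.arange(64) - 32) / 8.0)             # toy periodic correlator
print(xi_second_moment(corr))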
The SSF for the bosonic XY model is computed in the massive phase close to the critical point, for β < β_c = 1.1199 <cit.>. To study the step scaling function, we prepare several pairs of data at (β, L) and (β,2L), and compute both ξ(2L)/ξ(L) and ξ(L)/L using the data presented in <ref>. We follow certain criteria, as explained in <cit.>, to minimize finite volume and finite lattice spacing errors. In particular, we only choose lattices of sizes L ≥ L_min, where L_min = 80 for couplings β≥0.92. Since the correlation length increases for β close to β_c, larger lattice sizes are essential. Similar criteria for choosing the lattice sizes and couplings in the fermionic model are L ≥ L_min, where L_min = 80 for 0.62≤λ≤0.9, and L_min = 640 for λ < 0.6.
In order to compute the expectation value and error of ξ(L)/L, we use the jackknife analysis. We report results here for the analysis with 40 jackknife blocks. Varying the number of jackknife blocks did not change the errors significantly, and they were consistent with the errors obtained using a bootstrap analysis. In <ref>, we show an example of the variation of the average and error of ξ(L)/L at λ=0.353 and L=320 for the fermionic model using both the jackknife and the bootstrap analysis as a function of block size. Both methods use the same block sizes, but in order to distinguish them visually, we have displaced the bootstrap data on the x-axis by multiplying nBlock by a factor of 1.1.
In order to compare the SSF between the bosonic and the fermionic models we tried to parameterize the function in two different ways. In the first approach, we follow the idea discussed in <cit.> where it was proposed that
Σ(x) = 1 + a_1 e^-1/x + a_2 e^-2/x + a_3 e^-3/x + a_4 e^-4/x,
where x = ξ(L)/L and Σ = ξ(2L)/ξ(L). The behavior of this function is such that, as x → 0, the function Σ(x) approaches 1. While this form is strictly valid only for small x, we find that it fits our data well. The fit results are given in <ref>. We see that while we get good fits by including all four fit parameters, we can also fix a_2=0 and still obtain good fits.
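The ansatz is straightforward to fit with standard tools; a minimal sketch, with placeholder names for the tabulated data, is:
```python
import numpy as np
from scipy.optimize import curve_fit

def ssf_ansatz(x, a1, a2, a3, a4):
    """Sigma(x) = 1 + sum_{k=1}^4 a_k exp(-k/x); note Sigma -> 1 as x -> 0."""
    return 1.0 + sum(a * np.exp(-k / x)
                     for k, a in enumerate((a1, a2, a3, a4), start=1))

# x_data = xi(L)/L, y_data = xi(2L)/xi(L), y_err = errors on y_data:
# popt, pcov = curve_fit(ssf_ansatz, x_data, y_data, sigma=y_err, absolute_sigma=True)
# The a2 = 0 variant is obtained by dropping the k = 2 term.
```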
In the second approach, we used a cubic spline to interpolate the data. In <ref>, we provide a tabulation of the spline function that parameterizes the SSF for both the bosonic and the fermionic models. The errors are obtained using a jackknife analysis.
In order to show how these two different parameterizations capture our data, we show the corresponding curves for the bosonic model in <ref> and for the fermionic model in <ref>. We believe that a combined parameterization best captures the true function. Hence, we use <ref> for ξ(L)/L ≤ 0.572 and the cubic-spline interpolation for ξ(L)/L ≥ 0.572. This combined form for the bosonic model is shown in <ref>, along with the bosonic model data. The dark line of this plot is used in the main paper for comparison with the fermionic model.
§ INFINITE VOLUME CORRELATION LENGTH
We can compute the infinite-volume correlation length ξ_∞ using the SSF. Here we study how ξ_∞ depends on λ in the fermionic XY model. In order to reliably estimate the errors in ξ_∞, we again use a jackknife analysis. We start with 40 jackknife blocks, where each block contains pairs (ξ(L)/L, ξ(2L)/ξ(L)) for different coupling values (0.01 ≤λ≤ 0.8), and obtain 40 different cubic splines, one from each block. We then start with the initial ξ(L)/L at L=640 in each block and evaluate ξ(2^n L) using the spline function for increasing values of n, until the correlation length ξ(2^n L) becomes insensitive to L. Finally, the jackknife mean and error are computed from the 40 values. These results for ξ_∞ and their errors are quoted in <ref>.
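Within each jackknife block, the extrapolation amounts to iterating the spline-interpolated SSF through the recursion ξ(2L) = Σ(ξ(L)/L)·ξ(L); the sketch below uses illustrative names and tolerances:
```python
import numpy as np
from scipy.interpolate import CubicSpline

def xi_infinity(x0, x_tab, sigma_tab, L0=640, rel_tol=1e-10, n_max=200):
    """Iterate the step scaling function to the infinite-volume limit.

    x0               : starting value xi(L0)/L0,
    x_tab, sigma_tab : tabulated (xi(L)/L, xi(2L)/xi(L)) pairs from one
                       jackknife block, with x_tab sorted increasing.
    """
    ssf = CubicSpline(x_tab, sigma_tab)
    L, x = L0, x0
    xi = x * L
    for _ in range(n_max):
        xi_next = float(ssf(x)) * xi     # xi(2L) = Sigma(xi(L)/L) * xi(L)
        L *= 2
        x = xi_next / L
        if abs(xi_next - xi) < rel_tol * xi_next:
            break                        # xi(2^n L0) has become insensitive to L
        xi = xi_next
    return xi_next
```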
Since the correlation length increases exponentially as λ becomes small, we were able to extract the infinite-volume correlation length only in the range 0.3≤λ≤0.8; for λ < 0.3, our extrapolation methods fail.
Using the data in <ref> we study the λ dependence of ξ_∞. For the bosonic XY model, it is well known that as one approaches the BKT phase transition, the leading divergence of the infinite volume correlation length is captured by
ξ = C exp( b/√(β_c - β)),
where β_c is the critical coupling, and b and C are non-universal constants. For the fermionic XY model since the partition function is an even function of λ we expect ξ_∞ to be a function of λ^2. Since the BKT critical point appears when λ→ 0, we conjecture that
ξ^(1)_∞ = a_1 exp( b_1/√(λ^2)).
We test this conjecture numerically by fitting the data in <ref> to it. We also compare it to other fit forms, including ξ^(2)_∞ = a_2 exp(b_2/(λ^2)^1/4) and ξ^(3)_∞= a_3 exp(b_3/√(λ^2) + c_3 log(λ^2)/2). The results are shown in <ref>. We observe that <ref> describes the data quite well, provided we expect the constants a_1 and b_1 to take natural values. We cannot rule out the presence of a power-law correction to the expected form.
In <ref>, we show the data in <ref> together with the various fits. The first form is the expected behavior from <ref>. The second form explores a possible dependence on the square root of λ, which is clearly unnatural. Finally, the third form allows for a logarithmic correction in the exponential (which is equivalent to including a 1/λ dependence outside the exponential). We note that with this extended form, the data over the larger range 0.3 ≤λ≤ 0.8 can be fit.
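Since λ > 0 here, the three forms reduce to a·exp(b/λ), a·exp(b/√λ), and a·exp(b/λ)·λ^c, and can be compared with a weighted fit; the data names below are placeholders for the tabulated values:
```python
import numpy as np
from scipy.optimize import curve_fit

f1 = lambda lam, a, b:    a * np.exp(b / lam)              # expected BKT form
f2 = lambda lam, a, b:    a * np.exp(b / np.sqrt(lam))     # unnatural sqrt form
f3 = lambda lam, a, b, c: a * np.exp(b / lam) * lam**c     # log-corrected exponent

# lam, xi, dxi = couplings, xi_inf values, and errors from the table:
# for f in (f1, f2, f3):
#     popt, pcov = curve_fit(f, lam, xi, sigma=dxi, absolute_sigma=True)
```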
§ MONTE CARLO RESULTS
We tabulate all of our Monte Carlo data in <ref> for both the bosonic XY and the fermionic XY models, for various values of L and couplings. The errors in these primary quantities have been obtained with 20 jackknife blocks.
|
http://arxiv.org/abs/2307.03997v1 | 20230708154148 | Efficient Model-Free Exploration in Low-Rank MDPs | [
"Zakaria Mhammedi",
"Adam Block",
"Dylan J. Foster",
"Alexander Rakhlin"
] | cs.LG | [
"cs.LG",
"math.OC"
] |
A major challenge in reinforcement learning is to develop practical, sample-efficient algorithms for exploration in high-dimensional domains where generalization and function approximation is required. Low-Rank Markov Decision Processes—where transition probabilities admit a low-rank factorization based on an unknown feature embedding—offer a simple, yet expressive framework for RL with function approximation, but existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions such as latent variable structure, access to model-based function approximation, or reachability. In this work, we propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs that is both computationally efficient and model-free, allowing for general function approximation and requiring no additional structural assumptions. Our algorithm, , uses the notion of a generalized optimal design for the feature embedding as an efficiently computable basis for exploration, performing efficient optimal design computation by interleaving representation learning and policy optimization. Our analysis—which is appealingly simple and modular—carefully combines several techniques, including a new reduction from optimal design computation to policy optimization based on the Frank-Wolfe method, and an improved analysis of a certain minimax representation learning objective found in prior work.
§ INTRODUCTION
In reinforcement learning and control, many of the most promising
application domains require the agent to navigate complex,
high-dimensional state and action spaces, where generalization and function approximation
is necessary. The last decade has
witnessed impressive empirical success in domains where
data are abundant <cit.>,
but when data are limited, ensuring efficient exploration in
large domains is a major research question. For
statistical efficiency, the foundations have recently begun to
take shape, with a line
of research providing structural conditions that facilitate
sample-efficient exploration, as well as fundamental limits
<cit.>. Computational
efficiency, however, remains a major challenge: outside of simple
settings <cit.>, existing algorithms
with provable sample complexity guarantees are computationally
inefficient, and typically require solving intractable non-convex
optimization problems
<cit.>. The
prospect of
developing practical algorithms for exploration in
high-dimensional state spaces that are both computationally and
statistically efficient raises three fundamental questions:
* What are the right computational primitives for exploration?
That is, how can one efficiently represent and compute exploratory policies that
allow the learner
to explore the state
space and gather useful data?
* How should one leverage function approximation—for
example, via
representation learning—to
discover such primitives in a computationally and statistically
efficient fashion?
* Given answers to the first two questions, how can one efficiently interleave function approximation and exploration to provide provably efficient algorithms?
In this paper, we investigate these questions through the
model <cit.>. In a , the state space is large
and potentially continuous, but the transition probabilities admit an
(unknown) low-rank factorization. Concretely, for a finite-horizon
with horizon H, the transition densities for layer
h∈H satisfy
T_h(x_h+1|x_h,a_h) = [h+1](x_h+1)^(x_h,a_h),
where (·,·)∈^d and
(·)∈^d are state-action and next-state
embeddings. The low-rank structure in (<ref>)
facilitates tractable exploration: if the embedding is known
to the learner, one can efficiently learn a near-optimal policy with sample
complexity polynomial in the feature dimension d, and independent of
the size of the state space <cit.>; in this regard,
can be thought of as a low-dimensional representation that enables
sample-efficient RL. Following
<cit.>, we consider the challenging setting in
which both and are unknown to the
learner. This formulation generalizes well-known frameworks such as
the Block MDP (BMDP) model <cit.>,
and necessitates the use of representation
learning: the agent must learn an embedding that approximates
as it explores the environment, and must use this learned embedding
to drive subsequent exploration. This form of function approximation allows
for great flexibility, as can be an arbitrary, nonlinear
function of the state; in practice, it is common to model as a neural net <cit.>.
The is perhaps the simplest MDP structure that demands
systematic exploration and nonlinear function approximation while allowing for a continuum of states, yet understanding of
efficient algorithm design for this model is surprisingly
limited. Existing algorithms suffer from at least one of the following drawbacks:
* Computational intractability <cit.>.
* Strong modeling assumptions (e.g., ability to model
[h+1](·), which facilitates application of model-based
RL techniques)
<cit.>;
in this work, we aim for model-free methods that only require
learning .
* Restrictive structural assumptions (e.g.,
non-negativity or latent variable
structure for the embeddings in (<ref>)) <cit.>.
At the root of these limitations is the complex interplay between
exploration and representation learning:
the agent must learn a high-quality representation to guide
in exploring
the state space, but learning such a representation requires gathering
diverse and informative data, which is difficult to acquire without
having already explored the state space to begin with. Overcoming
this challenge—particularly where computational efficiency is
concerned—requires (1) representation learning procedures that lead to sufficiently expressive
representations for downstream applications, (2) efficient exploration procedures that are
robust to errors in learned representations, and (3) understanding the
interaction between these procedures, which must be interleaved. In
this work, we propose an algorithm that addresses each of these challenges, as detailed below.
Contributions
We provide the first provably computationally efficient and model-free
algorithm for general Low-Rank MDPs.
Our algorithm, (“Volumetric Exploration”), uses
the notion of a generalized optimal design for the
embedding as an efficiently computable
basis for exploration, and combines this with a minimax representation
learning objective <cit.>. interleaves exploration with representation learning in a layer-wise
fashion, learning a new representation at each layer h using exploratory
data gathered at previous layers, then uses this representation to
facilitate computation of a collection of exploratory policies (a
policy cover), which act as an approximate optimal design
for the features at layer h+1, ensuring good coverage for subsequent
iterations. is simple and modular, and its analysis is
surprisingly compact given the greater generality compared to prior
work
<cit.>.
accommodates general-purpose function approximation
to learn the representation (e.g., neural
nets or other flexible classes), and is efficient whenever a certain minimax
representation learning objective <cit.> can be solved efficiently for the
function class of interest. Compared to efficient algorithms from
prior work, : (1) is model-free (i.e., only requires access to a function class
Φ capable of modeling , and does not need to model
), and (2) applies to general Low-Rank MDPs, removing
the need for strong assumptions such as reachability or non-negativity of the feature embeddings
(so-called latent variable structure); see
<Ref>).
As a secondary benefit, the algorithm is reward-free.
Our analysis carefully combines several new techniques, including (1) a new reduction from optimal design
computation to policy optimization based on the Frank-Wolfe method, and (2) a new analysis of a minimax representation learning
objective introduced in <cit.>,
which leads to faster rates and shows for
the first time that this objective can lead to meaningful guarantees in general Low-Rank
MDPs without latent variable structure.
The algorithm follows a simple and modular template. To highlight this, we use the same template to give a
variant of the algorithm, (<ref>), which
leverages barycentric spanners <cit.> for
exploration, and obtains a tighter
sample complexity bound under an additional reachability assumption; see <ref>.
Organization
<ref> formally introduces the model and the online
reinforcement learning framework we consider. In
<ref>, we highlight challenges faced
by previous approaches, introduce our main algorithm, , and
show how it overcomes these challenges, and then present its main
sample complexity guarantee. We conclude
with discussion in <ref>.
§ PROBLEM SETTING
§.§ Model
We work in an episodic, finite-horizon reinforcement learning framework, where H∈ denotes the horizon. A <cit.> is a tuple =(,, ()_h∈ [H],([h])_h∈[H],) consisting of a state space , action space with =A, distribution over initial states ∈Δ(), and mappings :→^d and : ×→^d.[We emphasize that neither [h] nor is known to the agent, in contrast to the linear MDP setting <cit.>.]
Beginning with _1∼, an episode proceeds in H steps, where for each step h∈H, the state _h evolves as a function of the agent's action _h via
_h+1∼T_h(·|_h,_h),
where T_h is a probability transition kernel, which is assumed to factorize based on and . In detail, we assume that there exists a σ-finite measure ν on such that for all 1 ≤ h ≤ H-1, and for all x ∈ and a ∈, the function x' ↦(x')^⊤(x, a) is a probability density with respect to ν (i.e. the function is everywhere non-negative and integrates to 1 under ν). For any '⊆, the probability that _h+1∈' under _h+1∼T_h(·|x_h,a_h) is then assumed to follow the law
T_h('|x_h,a_h) = ∫_'(x)^⊤(x_h, a_h) ν(x).
For notational compactness, we assume (following, e.g., <cit.>) that the MDP is layered so that = _1∪…∪_H for _i ∩_j=∅ for all i≠ j, where _h⊆ is the subset of states in that are reachable at layer h∈[H]. This can be seen to hold without loss of generality (modulo dependence on H), by augmenting the state space to include the layer index.
Our formulation, in which the transition dynamics (<ref>) are stated with respect to a base measure ν, is a rigorous generalization of formulations found in previous works <cit.>, which tend to implicitly assume the state space is countable and avoid rigorously defining integrals. We adopt this more general formulation to emphasize the applicability of our results to continuous domains. However, in the special case where the state space is countable, choosing ν as the counting measure yields T_h('|x_h,a_h) = ∑_x∈'(x)^⊤(x_h, a_h), which is consistent with prior work.
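To make the factorization concrete in the countable case, the toy sketch below builds a random transition tensor of rank at most d. For simplicity it uses a non-negative (latent variable) factorization, which is a strict special case of the general model; the sizes and names are our own choices:
```python
import numpy as np

rng = np.random.default_rng(0)
d, S, A = 3, 50, 4   # rank, number of states, number of actions (toy sizes)

# Columns of mu are d probability mass functions over next states;
# phi(x, a) lies on the d-simplex, so T(.|x, a) = mu @ phi(x, a) is a distribution.
mu = rng.dirichlet(np.ones(S), size=d).T           # shape (S, d)
phi = rng.dirichlet(np.ones(d), size=(S, A))       # shape (S, A, d)
T = np.einsum('xad,yd->xay', phi, mu)              # T[x, a, y] = mu(y)^T phi(x, a)

assert np.allclose(T.sum(axis=-1), 1.0)            # each T[x, a, :] sums to one
assert np.linalg.matrix_rank(T.reshape(S * A, S)) <= d
```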
Policies and occupancy measures
We define =*π:→Δ() as the set of all randomized, Markovian policies. For a policy π∈, we let ^π denote the law of (_1,_1),…,(_H,_H) under _h∼π(_h), and let ^π denote the corresponding expectation. For any '⊆_h, we let _h^π[']^π[_h ∈'] denote the marginal law of _h under π. For x∈_h, we define the occupancy measure d^π(x) _h^π/ν(x) as the density of ^π_h with respect to ν.
§.§ Online Reinforcement Learning and Reward-Free Exploration
We consider a standard online reinforcement learning framework where the Low-Rank MDP is unknown, and the learning agent interacts with it in episodes, where at each episode the agent executes a policy of the form π:→Δ() and observes the resulting trajectory (_1,_1),…,(_H,_H).
While the ultimate goal of reinforcement learning is to optimize a policy with respect to a possibly unknown reward function, here we focus on the problem of
reward-free exploration, which entails learning a collection of policies that almost optimally “covers” the state space, and can be used to efficiently optimize any downstream reward function <cit.>. To wit, we aim to construct a policy cover, a collection of policies that can reach any state with near-optimal probability.
For α,∈(0,1], a subset Ψ⊆ is an (α,)-policy cover for layer h if
max_π∈Ψ d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
Informally, an (α,)-policy cover Ψ has the property that for every state x∈ that is reachable with probability at least ·[h](x), there exists a policy in Ψ that reaches it with probability at least α··[h](x). We show (<ref>) that given access to such a policy cover with α =(, d^-1 ,A^-1), it is possible to optimize any downstream reward function to () precision with polynomial sample complexity.
<ref> generalizes the notion of approximate policy cover used by <cit.> for the Block MDP setting; as in that work, the definition allows one to sacrifice states for which the maximum occupancy is small, which is necessary in the absence of reachability-style assumptions <cit.>. Compared to <cit.>, we replace the Block MDP condition max_π∈ d^π(x) ≥ by max_π∈ d^π(x) ≥·[h](x). As our analysis shows, the latter condition turns out to be better suited to the ℓ_2 geometry of the model, and is sufficient for the purpose of optimizing downstream reward functions up to O() precision (<ref>).
Function approximation and desiderata
We do not assume that the true features ()_h∈[H] or the mappings ([h])_h∈[H] are known to the learner.
To provide sample-efficient learning guarantees we make use of function approximation as in prior work <cit.>, and assume access to a feature class Φ⊆{ϕ : ×→^d} that contains , for h∈[H-1].
[Realizability]
The feature class Φ⊆{ϕ : ×→^d} has ∈Φ for all h∈[H]. Moreover, for all ϕ∈Φ, x ∈, and a ∈, it holds that ϕ(x, a)≤ 1.
The class Φ may consist of linear functions, neural networks, or other standard models depending on the application, and reflects the learner's prior knowledge of the underlying MDP. We assume that Φ is finite to simplify presentation, but extension to infinite classes is straightforward, as our results only invoke finiteness through standard uniform convergence arguments.
Note that unlike model-based approaches <cit.>, we do not assume access to a class capable of realizing the features , and our algorithm does not attempt to learn these features; this is why we distinguish our results as model-free.
Beyond realizability, we assume (following <cit.>) for normalization that, for all h∈[H] and (x,a)∈_h×, *_h(x,a)≤1, and that for all g:_h→[0,1],
*∫__h[h](x)g(x) ν(x)≤√(d).
For ∈(0,1), our goal is to learn an (α,)-policy cover with α= (,d^-1,A^-1) using
(d,A,H,logΦ,^-1)
episodes of interaction.
This guarantee scales with the dimension d of the feature map and the complexity logΦ of the feature class but, critically, does not depend on the size of the state space ; note that by <cit.>, dependence on both H and A= is necessary when is unknown. Given such a guarantee, we show in <Ref> that it is possible to optimize any downstream reward function to error with polynomial sample complexity.
Additional preliminaries
For any m,n ∈ℕ, we denote by [mn] the integer interval {m,…, n}. We also let [n] [1n]. For any sequence of objects o_1, o_2,…, we define o_m:n (o_i)_i∈[m n].
A partial policy is a policy defined over a contiguous subset of layers ℓr⊆H. We denote by ^ℓ:r{π⋃_h=ℓ^r _h →Δ()} the set of all partial policies over layers ℓ to r; note that ≡^1:H. For a policy π∈^ℓ:r and h∈ℓr, π(x_h) denotes the action distribution for the policy at layer h when x_h∈_h is the current state. For 1≤ t≤ h≤ H and any pair of partial policies π∈^1:t-1, π'∈^t:h, we define π∘_t π'∈^1:h as the partial policy given by (π∘_t π')(x_ℓ) = π(x_ℓ) for all ℓ<t and (π∘_t π')(x_ℓ) = π'(x_ℓ) for all ℓ∈ [t h]. We define π∘_t π' in the same fashion for π∈^1:ℓ for ℓ≥ t.
We use the _h∼π as shorthand to indicate that _h is drawn from the law ^π, and likewise for (_h,_h)∼π and so on. For a set of partial policies Ψ{π^(i) i ∈ [N]}, we define (Ψ) as the random partial policy obtained by sampling ∼([N]) and playing π^(). We define ∈ as the random policy that selects actions in uniformly at random at each layer.
We use *· to denote the Euclidean norm, *·_∞ to denote the supremum norm on functions, and let (r)⊆^d denote the Euclidean ball of radius r. We let _(r) be the Frobenius ball of radius r>0 in ^d× d. We denote by the set of positive semi-definite matrices in ^d× d, and by “≼” the corresponding partial order. For a vector v∈^d, we denote by v[i] its ith coordinate.
We refer to a scalar c>0 as an absolute constant to indicate that it is independent of all problem parameters and use (·) to denote a bound up to factors polylogarithmic in parameters appearing in the expression.
§ : ALGORITHM AND MAIN RESULTS
In this section, we present the algorithm. We begin by describing
challenges in deriving efficient, model-free algorithms using existing
approaches (<ref>). We then formally describe (<ref>) and build intuition as to how it is able to overcome these challenges, and finally state our main sample
complexity guarantee (<ref>).
§.§ Challenges and Related Work
Designing algorithms with provable guarantees in the Low-Rank MDP setting is challenging because of the complicated interplay between representation learning and exploration. Indeed, while there are many efficient algorithms for the so-called linear MDP setting where the feature maps ()_h∈[H] are known (removing the need for representation learning) <cit.>, these approaches do not readily generalize to accommodate unknown features. For Low-Rank MDPs, previous algorithms suffer from at least one of the following three drawbacks: (1) the algorithms are computationally inefficient; (2) the algorithms are model-based; or (3) the algorithms place strong assumptions on the MDP that are unlikely to hold in practice. To motivate the algorithm, we briefly survey these results, highlighting several key challenges in avoiding these pitfalls.
Let us first discuss the issue of computational efficiency. While there are a number of algorithms—all based on the principle of optimism in the face of uncertainty—that provide tight sample complexity guarantees for Low-Rank MDPs in reward-based <cit.> and reward-free <cit.> settings, these algorithms involve intractable optimization problems, and cannot be implemented efficiently even when the learner has access to an optimization oracle for the representation class Φ <cit.>. This intractability arises because these algorithms implement optimism via a “global” approach, in which the algorithm explores at each round by choosing the most optimistic value function in a certain version space of candidate value functions; optimizing over this version space is challenging, as it involves satisfying non-convex constraints with a complicated dependence on the learned representation that are coupled globally across layers h∈H.
To avoid the intractability of global optimism, several works have restricted attention to a simpler model-based setting. Here, in addition to assuming that the feature maps ()_h∈[H] are realizable with respect to Φ, one assumes access to a second feature class Υ capable of modeling the mappings ()_h∈[H]; this facilitates direct estimation of the transition probability kernel T_h(·|x,a). For the model-based setting, it is possible to efficiently implement certain “local” forms of optimism <cit.>, as well as certain non-optimistic exploration techniques based on policy covers <cit.>. For example, one can estimate features using maximum likelihood, and then apply efficient algorithms for the known-feature setting with the estimated features plugged-in <cit.>; here, a key insight is that model-based estimation leads to strong distribution transfer guarantees for the learned features. As a result, there are now a number of efficient model-based algorithms <cit.>, some of which have been practically implemented <cit.>. Unfortunately, model-based realizability is a restrictive assumption, and falls short of the model-free guarantees we aim for in this work; indeed, in general, one cannot hope to estimate the feature map without sample complexity scaling with the number of states.[For example, in the special case of the Block MDP setting <cit.>, model-based realizability entails modeling a certain emission process, which is not required by model-free approaches.]
When one moves from model-based learning to model-free learning, representation learning becomes substantially more challenging—both for optimistic and non-optimistic approaches. Here, a key challenge is to develop representation learning procedures that are (1) efficient, yet (2) provide meaningful guarantees when the learned features are used downstream for exploration.
To our knowledge, the only proposal for a representation learning procedure satisfying both desiderata comes from the work of <cit.>, who introduced a promising “minimax” representation learning objective (described in detail in the sequel; cf. <ref>), which <cit.> subsequently showed to have encouraging empirical performance. However, to provide guarantees for this objective, both works place substantial additional restrictions on the low-rank factorization. In particular, <cit.> make the so-called latent variable assumption <cit.>, which asserts that and are non-negative coordinate-wise, and <cit.> further restrict to the Block MDP model <cit.>.
Non-negativity is a substantial restriction, as the best non-negative factorization can have exponentially large dimension relative to the best unrestricted factorization <cit.>. Beyond non-negativity, many prior works <cit.> require reachability assumptions, the weakest of which asserts that there exists η>0 such that for all x∈_h,
max_π∈ d^π(x)≥η·[h](x).
These works give sample complexity bounds that scale polynomially in η^-1, and do not give any guarantee when η=0; see <ref> for further background.[When specialized to tabular MDPs, reachability asserts that for each state x∈, there exists a policy that reaches x with probability at least η.] The source of both restrictions is the problem of how to quantify how close a learned representation ϕ is to the ground truth , which depends strongly on the downstream exploration strategy. In what follows, we show that with the right exploration strategy, this challenge can be ameliorated, but prior to our work it was unclear whether the minimax objective could lead to meaningful guarantees in the absence of non-negativity.
§.§ The Algorithm
Our algorithm, , is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. To describe the algorithm in detail, we slightly generalize <ref>.
For α,∈(0,1], a distribution P∈Δ() is an (α,)-randomized policy cover for layer h if
_π∼ P*d^π(x)≥α·max_π' ∈ d^π'(x) for all x∈_h such that max_π'∈Π d^π'(x)≥·[h](x).
If P is a randomized policy cover, then the set Ψ(P) is a policy
cover in the sense of <ref>; however, our algorithm is most
naturally described in terms of randomized policy covers, which allow
for non-uniform mixtures of policies. Critically, the randomized
policy covers used in have support size polynomial in d and H,
which allows them to be computed and represented efficiently.
For each layer h≥2, uses a randomized policy cover
Ph built at a previous iteration to perform K steps of
interleaved representation learning and exploration. Starting from
h,0 = Ph, for each step k∈K, first
invokes a subroutine,
(<ref>; deferred to <ref>) with the
randomized policy cover h,k-1 to produce a
feature map ϕh,k that approximates . Using
this feature map, the algorithm invokes a second subroutine,
(<ref> in <ref>) to produce a (sparsely
supported) policy distribution
Ph,k∈Δ() that acts as a generalized optimal design for the
estimated feature map ϕh,k, ensuring maximal coverage in
a certain sense; given this distribution, the algorithm defines
h,k=1/2k∑_ℓ=1^kPh,ℓ +
1/2Ph and proceeds to step k+1. Once this process
completes, a new randomized policy cover for layer h+2 is formed via Ph+2=1/K∑_k=1^K∑_π∈(Ph,k)Ph,k(π)·_π∘_h+1. To
invoke the
subroutine, makes use of additional subroutines for policy optimization
(; <ref> in
<ref>) and estimation of certain
matrix-valued functionals (; <ref>
in <ref>). The use of multiple
(K>1) inner loop iterations within this scheme is
necessary to handle certain distribution shift
issues, which we will elaborate on momentarily.
We now describe
each component of the algorithm in detail,
highlighting how they allow us to overcome the
challenges in the prequel.
Generalized optimal design
At the heart of is the notion of a generalized
optimal design as an efficient basis for exploration. We
begin by defining a generalized optimal design for an abstract set of
positive-semidefinite matrices ⊆.
Given a set ⊂ and parameters γ∈(0,1/d),
C≥1, we say that a distribution P∈Δ() is a
(C,γ)-generalized optimal design for if the matrix
M_P = γI_d+_W∼P[W] satisfies
sup_W∈ tr(M_P^-1W) ≤ (1+C)d.
This definition generalizes the classical notion of G-optimal
design <cit.>, which corresponds to the
special case in which each W∈ is a rank-one matrix, and where γ=C=0.
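When the matrix set is finite and enumerable (a toy stand-in for the abstract setting below, where it is indexed by policies), the defining condition is easy to check numerically; the function and its arguments are illustrative:
```python
import numpy as np

def design_gap(P, Ws, gamma):
    """sup_W trace(M_P^{-1} W) over a finite list Ws of PSD d x d matrices,
    where M_P = gamma * I_d + E_{W~P}[W]; P is a (C, gamma)-generalized
    optimal design iff the returned value is at most (1 + C) * d."""
    d = Ws[0].shape[0]
    M_P = gamma * np.eye(d) + sum(p * W for p, W in zip(P, Ws))
    M_inv = np.linalg.inv(M_P)
    return max(float(np.trace(M_inv @ W)) for W in Ws)
```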
The utility of generalized optimal designs for reward-free exploration is
highlighted in the following lemma.
Let h∈[H]. If a distribution P∈Δ() over policies is a
(C,γ)-generalized optimal design for the set
_h = {^π[
(_h, _h)(_h, _h)^⊤]|π∈},
then the distribution
P'=∑_π∈(P)P(π)·_π∘_h+1 is an
(α,η)-randomized policy cover for layer h+2 with α = η/(2 d A) and η = 4 d √((1+C)γ).
<Ref>, proven in <Ref>, shows that to compute a policy cover for layer h+2, it suffices to compute a distribution over policies that acts as a generalized optimal design for the set _h{^π[
(_h, _h)(_h, _h)^⊤]|π∈}⊆^d. Of course, even if is known, this observation is only useful if we
can compute a spanner without explicitly enumerating over the set
, since our goal is to develop an efficient
algorithm. In what follows, we will show:
* By applying the Frank-Wolfe method
<cit.> to a certain determinantal/volumetric objective,
it holds that for any ϕ∈Φ, a sparsely supported
generalized optimal design for the set {^π[
ϕ(_h, _h)ϕ(_h, _h)^⊤]|π∈} can be computed
efficiently whenever, for any M∈ with
*M_≤1, one can (approximately) solve policy optimization problems of the form
_π∈^π[ϕ(_h,_h)^⊤ Mϕ(_h,_h)].
* Given access to policy covers P1,…,Ph for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to the algorithm for policy
optimization (<ref>).
To handle the fact that is unknown, <ref>
uses the approach above to compute a generalized optimal design for the set {^π[
ϕh(_h, _h)ϕh(_h, _h)^⊤]|π∈}, where
ϕh∈Φ is a learned feature map. In what follows, we
first give a detailed overview of our optimal design computation approach, then show
how applies this approach to a feature map estimated via
representation learning.
Prior work <cit.> makes use
of elliptic planning objectives similar to the notion of optimal
design in
<ref>. An
important difference in our approach, which follows from the explicit
connection to optimal design, is that the right-hand side in
(<ref>) is bounded by an absolute (problem-dependent)
constant (d), and does not scale inversely proportional to the
target precision >0 or any sort of reachability parameter. This
property is essential to our reachability-free analysis.
Optimal design computation via approximate linear optimization
To describe generalized optimal design in , we take a brief detour
and consider an abstract approach to optimal design computation, which generalizes our problem. Suppose that we wish
to compute a spanner for an implicitly specified set of matrices
= {W^z}_z∈⊆ indexed by an abstract set
. The set (which will be set to when we apply this
framework to RL) may be exponentially large, and cannot be efficiently enumerated. In addition, given z∈, we
cannot explicitly compute W^z, and have to settle for a noisy approximation.
To allow for optimal design computation, we assume access to two
oracles for the set , a linear optimization oracle :∩_(1)→ and
an index-to-matrix oracle :Δ()→. We assume
that for some _, _>0:
* For all M∈ with *M_=1, the output
ẑ_M(M) satisfies
tr(MW^ẑ_M) ≥sup_z∈ tr(MW^z) - _.
* For all P∈Δ(), the output W_P(P)
satisfies
W_P - _z∼P*W^z_≤_.
Given access to oracles and with _=(γ) and _=(γ^2), the algorithm
(<ref>) computes a (C,γ)-approximate spanner for
using (γ^-2C^-2 d^-1ln (1 + 1/γ))
oracle calls. can be viewed as an application of the Frank-Wolfe
algorithm <cit.> for first-order optimization to
maximize the determinantal/volumetric objective
F(P) = logdet(γ I_d + _z∼ P[W^z]),
which is inspired by the well-known duality of G-optimal and D-optimal
design <cit.>. Frank-Wolfe is well-suited to
our setting because it produces a sparsely supported
distribution P∈Δ(), with the sparsity bounded by the
number of iterations (d,γ^-1) and independent of
. This feature is critical for computational efficiency
when applied to RL, as the set = is too large for one to even
represent a general distribution P∈Δ() efficiently.
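To make the scheme concrete, the following self-contained sketch runs the Frank-Wolfe update on the objective above for a finite, enumerable matrix set. In the RL application the argmax step is replaced by a call to the linear optimization oracle and the matrix average by the index-to-matrix oracle, so the explicit enumeration below is purely illustrative:
```python
import numpy as np

def frank_wolfe_design(Ws, gamma=1e-3, C=2, n_iter=500):
    """Frank-Wolfe ascent on F(P) = logdet(gamma * I_d + E_{z~P}[W_z]) over
    a finite list Ws of PSD d x d matrices; the support of the returned
    distribution grows by at most one index per iteration."""
    d, N = Ws[0].shape[0], len(Ws)
    P = np.zeros(N)
    P[0] = 1.0                                  # arbitrary sparse initialization
    for t in range(n_iter):
        M = gamma * np.eye(d) + sum(p * W for p, W in zip(P, Ws))
        M_inv = np.linalg.inv(M)
        # the gradient of logdet in the weight on W_z is trace(M^{-1} W_z)
        scores = np.array([np.trace(M_inv @ W) for W in Ws])
        z = int(np.argmax(scores))              # the linear-optimization step
        if scores[z] <= (1 + C) * d:            # (C, gamma)-design certificate
            break
        eta = 2.0 / (t + 2.0)                   # standard Frank-Wolfe step size
        P = (1.0 - eta) * P
        P[z] += eta
    return P
```
The stopping test is exactly the condition in the definition of a generalized optimal design with C = 2, matching the instantiation used in the algorithm.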
Representation learning
Ideally, we would
like to use to construct a generalized optimal design for the set {^π[_h(_h, _h) _h(_h, _h)^⊤]|π∈} with =.
Because we do not have access to _h, each inner loop iteration
k∈K in <ref> instead applies with {^π[ϕh,k(_h, _h)ϕh,k(_h, _h)^⊤]|π∈},
where ϕh,k is a learned
representation. We now describe how the feature map
ϕh,k is learned, then show how to use these learned features to
efficiently implement the oracles (·) and (·).
To learn representations for layer h, we use the algorithm (<ref>),
which was originally introduced in
<cit.>. When invoked in each inner loop
iteration k∈K via ϕh,k = (h, ,Φ,
Ph,k-1,n_) (<ref>), the
algorithm gathers a
collection of triples (_h, _h, _h+1) by rolling in to
_h with a policy sampled from the randomized policy cover h,k-1 and selecting _h
uniformly at random, then observing the resulting state _h+1. Using this dataset, the algorithm
solves a sequence of adversarial training sub-problems
(<ref> of <ref>) which involve
the feature class Φ and an auxiliary discriminator class :
→. As we discuss in detail in the sequel, these
sub-problems, described in (<ref>),
are amenable to standard gradient-based training methods. The
sub-problems are designed to approximate the following “idealized”
min-max-min representation learning objective:
ϕh,k∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k-1^π∘_h[(ϕ(_h, _h)^⊤ w - [f(_h+1)|_h,_h])^2].
The intuition for
this objective lies in the fact that in a Low-Rank MDP, for any function f:→, the mapping (x,a)↦[ f(_h+1)
|_h=x, _h=a ] is linear in
_h(x, a). Thus, if is sufficiently expressive, we may
hope that any ϕh,k which solves (<ref>) will approximate
well. We adopt the simple discriminator class
= { x ↦max_a∈θ^⊤ϕ(x, a) | θ∈(1), ϕ∈Φ}.
We show that solving
(<ref>) with this choice for , which is slightly
simpler than those considered in <cit.>, yields an approximation
guarantee for ϕh,k that is suitable for downstream use in
optimal design computation.
To facilitate an analysis of that does not require reachability assumptions, we use
slightly different parameter values for than in
<cit.>, and provide a tighter sample
complexity bound (<ref>) which may be of independent interest.
In more detail, prior work shows that the algorithm solves
a variant of (<ref>) with
w∈(d^1/2·(^-1)), where >0 is the desired
bound on mean-squared error. Due to the polynomial dependence on
^-1, such a guarantee would lead to vacuous
guarantees when invoked within our analysis of . Our improved
analysis of , which is based on a determinantal potential
argument, shows that w∈((d)) suffices. A secondary benefit of our improved bound is a faster rate with
respect to the number of trajectories.
Putting everything together Having learned ϕh,k
using , each inner loop iteration k∈K of applies with {^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^⊤]|π∈},
=, C = 2, and γ chosen as a function of the
target accuracy; that is, we use the learned
representation ϕh,k as a plug-in estimate for the true representation
.[Though the policies produced by the
algorithm may not necessarily induce an optimal design for _h= {^π[
(_h, _h)(_h, _h)^⊤]|π∈} (this would
require a stronger coordinate-wise approximation guarantee, which does not
necessarily follow from <ref>), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ ϕh,k(_h, _h)^⊤ M ϕh,k(_h, _h)]
for a given matrix M∈∩_(1), and implementing entails estimating
the second moment matrix
^π[ϕh,k(_h, _h) ϕh,k(_h, _h)^⊤]
for a given policy π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>), which simply samples trajectories according to π and returns the sample average of ϕh,k(_h, _h) ϕh,k(_h, _h)^⊤.
To
implement (θ), we appeal to (<ref>). , given an arbitrary reward function r_1:h:×→ and a function class ⊆{g:
×→} capable of realizing all possible value
functions induced by these rewards, can use the policy covers
P1,…,Ph to efficiently compute a policy = (h,r_1:h, ,
P1:h, n) that approximately solves _π∈^π[∑_t=1^h r_t(_t,_t)],
and does so using polynomially many episodes; see <ref> for
details and formal guarantees.[This is the main
place where the analysis uses the inductive hypothesis
that P1:h are policy covers.] Thus, implementing (M)
for M∈∩_(1) is as
simple as invoking with the rewards
r_t(x,a;M) = ϕh,k(x,a)^⊤ M ϕh,k(x,a) for t=h, and r_t(x,a;M) = 0 for t ≠ h.
Addressing distribution shift
With this, we have all the
ingredients needed for optimal design computation, and can prove that
Ph,k is an approximate optimal design with respect to
ϕh,k. However, we are not quite done, due to the issue of
distribution shift, which motivates the use of multiple (K>1)
inner loop iterations within . In particular, while the
objective in (<ref>) ensures that ϕh,k approximates
well under Ph,k-1, the representations may be far
away from one another under the new distribution Ph,k produced
when we invoke with ϕh,k.[If Ph were
an exact (i.e., (α,0)-) policy cover, this would be a
non-issue. However, with an approximate policy cover, which is all that
one can hope for in the absence of reachability, distribution shift must
be addressed.] To address this issue, we use a potential argument <cit.>
to show that as long as K is chosen to be sufficiently large, there exists
k^⋆∈[K] such that ϕh,k^⋆
(approximately) enjoys a stronger on-policy approximation guarantee:
ϕh,k^⋆∈_ϕ∈Φsup_f ∈inf_w∈(2d^1/2)_π∼h,k^⋆^π∘_h[(ϕ(_h, _h)^⊤ w - [f(_h+1)|_h,_h])^2].
This suffices to prove that the distribution Ph+2 constructed
in is an approximate policy cover
for layer h+2.
§.§ Main Guarantee for
The following result is the main sample complexity guarantee for (<ref>).
Let δ, η∈(0,1), and suppose that realizability holds (<ref>). If = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are a
(η^3/· d^6 A^2,)-randomized policy cover with probability at least
1-δ, where = 4 H d^3/2η.
The total number of episodes used by is at most:
(A^4 d^20 H^17 (d + ln (|Φ|/δ))· 1/^14).
The next corollary follows immediately from the definition of a policy cover (<ref>).
Consider the setting of <ref> and let P1:H be the distributions
produced by . Then, under the same success event as in <ref>, the collection of policies Ψ1,…, ΨH, where Ψh = Ph for each h∈[H], are a (η^3/· d^6 A^2,)-policy cover in the sense of <ref>, where η = /(4 H d^3/2).
<ref> is the first provable, model-free sample complexity
guarantee for general Low-Rank MDPs that is attained by an
efficient algorithm. Prior to our work, all efficient model-free algorithms required non-negative features (latent
variable structure), reachability, or stronger assumptions
<cit.>; see <ref>.
While our guarantee is polynomial in
all relevant problem parameters, improving the dependence further
(e.g., to match that of the best known inefficient algorithms) is
an interesting direction for future research.
Application to reward-based RL
By using the policy cover produced by within (<ref>),
we can optimize any downstream reward function to error using
(d,A,H,logΦ,^-1) episodes. See
<ref> for details. A technical novelty here compared to, e.g., <cit.> (who also used and policy covers to optimize downstream reward functions), is in proving that our notion of approximate policy cover (<ref>) is sufficient for downstream reward optimization in s.
Efficiency and practicality is simple and practical. Defining _(ϕ, w, f) = ∑_(x, a,
x')∈ (ϕ(x,a)^⊤ w - f(x'))^2, where
is a dataset consisting of (_h,_h,_h,_h+1)
tuples, the algorithm is provably efficient whenever the adversarial
objective
ft∈_f∈max_ϕ̃∈Φ{min_w∈(3d^3/2)_(ϕt, w, f) - min_w̃∈(2d^1/2)_(ϕ̃, w̃, f) },
in <ref> of (<ref>),
can be implemented efficiently. This objective was also assumed to be efficiently solvable in
<cit.> and was empirically shown to
be practical in <cit.>.[In
addition to <ref>, also solves the
objective
ϕt+1∈_ϕ∈Φmin_(w_1,…,w_t)∈(2√(d))^t∑_ℓ=1^t _(ϕ,w_ℓ,fℓ)
in <ref> of <ref>. Compared to the
adversarial objective in (<ref>), this objective is
simpler, and only
requires minimization.] Note that both objectives
are amenable to standard gradient-based optimization techniques, and allow
the class to be over-parameterized. While a detailed
experimental evaluation is outside of the scope of this paper, we are
optimistic about the empirical performance of the algorithm in light
of the encouraging results based on the same objective in
<cit.>.
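As a rough illustration of the computation involved, the sketch below evaluates the inner expression of the adversarial objective for a fixed discriminator f when Φ is a finite list of candidate feature maps. The ball-constrained least squares is approximated by solving the unconstrained problem and rescaling onto the ball, a heuristic stand-in for the exact constrained minimizer; all callables, data arrays, and radii (the text uses 3d^3/2 and 2d^1/2) are placeholders:
```python
import numpy as np

def ball_ls(feats, y, radius):
    """Approximate min over {||w|| <= radius} of sum_i (feats_i^T w - y_i)^2
    by unconstrained least squares followed by rescaling onto the ball."""
    w = np.linalg.lstsq(feats, y, rcond=None)[0]
    nrm = np.linalg.norm(w)
    return w if nrm <= radius else w * (radius / nrm)

def discrepancy(phi_t, Phi, f, X, A, Xp, r1, r2):
    """min_{||w||<=r1} L_D(phi_t, w, f) - min_{phi, ||w||<=r2} L_D(phi, w, f)
    for a fixed discriminator f; maximizing this quantity over f (e.g. by
    gradient ascent on its parameters) yields one adversarial step."""
    y = f(Xp)                            # discriminator values on next states
    def loss(phi, r):
        feats = phi(X, A)
        w = ball_ls(feats, y, r)
        return float(np.sum((feats @ w - y) ** 2))
    return loss(phi_t, r1) - min(loss(phi, r2) for phi in Phi)
```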
Outside of representation learning, the only computational overhead in is
in the subroutine, which has runtime polynomial in all parameters. Indeed,
requires only polynomially many calls to the linear optimization oracle, instantiated as , which is
efficient whenever standard least-squares regression problems based on
the class Φ can be solved efficiently, analogous to
<cit.>. The
distributions Ph,k returned by each invocation of have
support size (d,^-1), and hence can be represented with
polynomial memory; it follows that all of the policy
distributions maintained throughout the execution of <ref> have
polynomial support size as well.
Under the setting of <ref>, if = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the distributions P1:H
produced by (Φ, η, , δ) are such that max_h∈[H]| Ph| ≤· d^7/η^4.
Analysis and proof techniques
A significant challenge overcome by the proof of <ref> (given
in <ref>) is to show that
—despite being non-optimistic—succeeds in the absence of
reachability-type assumptions. To achieve this, we use a novel
adaptation of the extended
MDP technique introduced in the recent work
<cit.> in the context of Block MDPs. This
technique allows us to analyze in a modified version of the
true MDP which emulates certain properties of reachability; see
<ref> for details. Within the extended MDP, the crux of
the proof is to show that the
representation learning guarantee in (<ref>) is strong
enough to ensure that the downstream optimal design computation in
succeeds. It is straightforward to show that optimal design
computation would succeed if we had access to an estimated
representation ϕh,k that approximates
pointwise (i.e., uniformly for all (x,a) pairs), but the key challenge is that the guarantee in
(<ref>) only holds on average under the roll-in
distribution h,k-1. Prior works that make use of the same representation
learning objective ( <cit.> and
<cit.>) make use of additional structural assumptions
(non-negativity of the factorization for , and Block MDP
structure for ) to facilitate change-of-measure arguments
that address this issue. We avoid such assumptions by inductively appealing to
the optimal design objective in (<ref>), which provides a
stronger coverage guarantee compared to elliptic objectives from prior
work; see <ref>. While the high-level schema for the
proof is quite simple, there are
several subtle technical challenges that arise in analyzing in the
extended MDP, including:
* Showing that succeeds when invoked within , despite
the lack of uniform coverage.
* Proving that gives a sufficiently strong
approximation guarantee even when the weights used by the algorithm
are kept uniformly bounded throughout training; see <ref>.
* Addressing distribution shift that occurs when the updates policies using the
representations produced by .
See <ref> for
details.
§.§ Stronger Guarantees under Reachability:
The algorithm is appealing in its simplicity and
modularity. To highlight this, we use the same template to give a variant of the
algorithm, (<ref>), which obtains a tighter
sample complexity bound whenever a reachability assumption is satisfied.
Concretely, we make the following assumption.
[η-reachability]
For any h∈[H] and x∈_h,
max_π∈ d^π(x)≥η·[h](x).
<ref> generalizes and subsumes all
previous reachability-like conditions of which we are aware
<cit.>. Notably,
reachability is implied by the notion of feature
coverage <cit.> (used in the context of
transfer learning in Low-Rank MDPs), which asserts that
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤])
≥η, for some η>0. It is also implied by
explorability <cit.>, which is
similar to feature coverage, but involves the first moments of
. Our reachability assumption is also weaker than
the notion used in <cit.>
under the latent variable model, and generalizes the
notions of reachability for BMDPs <cit.>. See <ref> for details, as well as an exponential separation between <ref> and analogous assumptions in <cit.>.
follows the same template as , with two
differences. First, we remove the inner loop (which corresponds to
setting K=1 in ). Second, and more importantly, the subroutine is replaced
with a new subroutine, . Instead of computing an optimal
design, computes an alternative basis for exploration known as
a barycentric spanner <cit.>. is
an error-tolerant variant of a classical spanner computation
algorithm of <cit.>, and may be of independent
interest; we use the algorithm to compute a spanner for learned feature maps via reduction to policy
optimization. The sample complexity of improves upon ,
but its analysis leverages reachability. See <ref> for a detailed overview.
The main sample complexity guarantee for is as follows.
Let δ∈(0,1) be given, and suppose that realizability holds (<ref>) and that reachability (<ref>) is satisfied with parameter η>0. If = η/36 d^5/2 and = (A,H,d,ln
(|Φ|/δ)) is sufficiently large, then the policies Ψ1:H
produced by (Φ, , , δ) are a
(1/4 Ad,0)-policy cover with probability at least
1-δ.
The total number of episodes used by is at most:
( A^4 d^9 H^4 (d + ln (|Φ|/δ))· 1/η^2).
The sample complexity bound in <ref> scales
with the reachability parameter η as η^-2, which
significantly improves upon the dependence on the accuracy parameter
in <ref>. The dependence on the
dimension d is also tighter. We
find this result to be notable in its own right, as even in the
presence of similar reachability assumptions, all efficient model-free
algorithms in prior work required non-negative features (latent
variable structure) or stronger assumptions
<cit.>.
A secondary benefit of lies in memory: The algorithm
maintains policy covers with support size (d,^-1), while
the policy covers used in have support size (d),
which is independent of the target accuracy.
The proof of <ref> is similar to that of
<ref>, but is somewhat simpler, and does not require
appealing to the extended MDP analysis of
<cit.>. A useful feature of our proof is to show that the notion of
reachability in <ref>, which generalizes and
extends all previous reachability conditions in the and Block
MDP literature <cit.>,
is sufficient to build an exact (i.e., (α,0)-) policy cover. We
anticipate that this observation will find broader use.
§ DISCUSSION
Our work shows for the first time how to achieve efficient, model-free
exploration in general Low-Rank MDPs. On the technical side, our
results leave open a number of interesting technical questions,
including (1) regret (as opposed to PAC) guarantees, and (2) matching the minimax rate achieved by
inefficient algorithms using an efficient
algorithm.
More broadly, our work highlights the power of non-optimistic
algorithms that explore by building policy covers. In light of this, perhaps the most interesting question
is how to extend our techniques to more general function approximation
settings beyond the Low-Rank MDP model; this will likely entail
replacing the notion of optimal design with a more general form of
exploration basis.
§.§ Acknowledgements
We thank Noah Golowich, Dhruv Rohatgi, and Ayush Sekhari for
several helpful discussions. ZM and AR acknowledge support from the ONR through awards N00014-20-1-2336 and N00014-20-1-2394, and ARO through award W911NF-21-1-0328. AB acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
§ ADDITIONAL RELATED WORK
In this section, we discuss relevant related work not already covered.
Block MDPs
A particularly well-studied special case of low-rank MDPs, with the latent variable structure assumed in <cit.> (defined in <Ref>), is the Block MDP (BMDP) model <cit.>. For this setting, <cit.> provide algorithms that conduct exploration in a provably oracle-efficient manner under a reachability assumption. This reachability assumption was removed by subsequent work of <cit.> (with a suboptimal rate) and <cit.> (with optimal error dependence). These works are tailored to the BMDP model, and it is unclear whether it is possible to extend them to general low-rank MDPs.
Barycentric spanners
<cit.> consider a variant of the framework in which we are given a class Υ that realizes the
next-state feature map , but do not have access to a class
Φ for the feature map , which is unknown. Their
algorithm, like , is based on barycentric spanners, though the algorithm
design considerations and analysis are significantly
different. Notably, their algorithm is not computationally efficient,
and their analysis takes advantage of the fact that realizability of
facilitates estimation of the occupancies d^π(·)_π∈ in ℓ_1-error. Barycentric spanners were also used in the work of <cit.> for reinforcement learning in Partially Observable MDPs (POMDPs). Their analysis is substantially different from ours, and their algorithm appeals to the barycentric spanner computation approach of <cit.> in an off-the-shelf fashion.
Frank-Wolfe method in RL
Similar to our work, <cit.> make use of the Frank-Wolfe method for policy cover computation, but their algorithm is tailored to the known-feature (linear MDP) framework, and the design and analysis are quite different.
PART:
Analysis of
§ ORGANIZATION OF THIS PART
<ref> of the appendix contains the proof of our main
result, <ref>, as well as other proofs. This
section is organized as follows:
* <ref> contains the analysis of <ref>.
* <ref>, <ref>, and <ref> contain results we rely on in the proof of <ref>. In particular, <ref>, <ref>, and <ref> provide generic guarantees for the subroutines (<ref>), (<ref>), and (<ref>) of (<ref>), respectively.
* In <ref>, we show how an approximate policy cover can be used to optimize downstream reward functions.
* In <ref>, we present some useful structural results concerning the extended MDP introduced in <ref>.
* Finally, <ref> contains a set of helper
results used throughout the analysis.
§ ANALYSIS: PROOF OF <REF>
In this section, we present the full proof of the main guarantee for (<ref>). In <ref>, we define key concepts needed for the analysis. <ref>, <ref>, and <ref> give guarantees for (<ref>), (<ref>), and (<ref>) as instantiated within . <ref> gives guarantees for the subroutine within . We then combine these results in <ref> to prove <ref>.
§.§ Extended Low-Rank MDP and Truncated Policies
In this section, we present two tools, the extended MDP and a truncated policy class, that will be used throughout the analysis of , and facilitate an analysis that does not require reachability assumptions. The definitions we give generalize analogous definitions given in <cit.> for the special case of Block MDPs, though the generalization to the low-rank MDP setting is non-trivial.
Extended MDP As in <cit.>, we define the extended MDP to be the result of augmenting the true MDP by adding a set of H terminal states _1:H, and a terminal action with the property that taking from any state at layer h∈ [H-1] leads to _h+1 deterministically, and any action in ∪{} at latent state _h transitions to _h+1 deterministically. To express as a low-rank MDP, we increase the feature dimension by 1. First, for any ϕ∈Φ, we define the extension
ϕ̅(x,a) = [ϕ(x,a)^⊤, 0]^⊤∈^d+1 for all a∈ and x∈; ϕ̅(x,a) = e_d+1∈^d+1 when a is the terminal action, for all x∈; and ϕ̅(x,a) = e_d+1∈^d+1 for all actions a when x ∈{_1,…, _H}, with ϕ̅^⋆ denoting the extension of ϕ^⋆. We similarly define [h](x) = [[h](x)^⊤, 0]^⊤∈^d+1 for x∈, and [h](x) = e_d+1∈^d+1 for x=_h,
for h∈[H]. With these definitions, we formally define =(∪{_1,⋯, _H}, ∪{}, ρ, ([h])_h∈[H], (ϕ̅_h^⋆)_h∈[H]) as the extended MDP, which one can verify is indeed a low-rank MDP in d+1 dimensions.
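A minimal sketch of the feature extension, with a hypothetical sentinel value standing in for the terminal states and the terminal action, makes the construction explicit:
```python
import numpy as np

TERMINAL = "terminal"   # hypothetical sentinel for the terminal states/action

def extend_features(phi, d):
    """Extended feature map: [phi(x, a)^T, 0]^T for ordinary pairs (x, a),
    and e_{d+1} whenever x is a terminal state or a is the terminal action."""
    e_last = np.zeros(d + 1)
    e_last[-1] = 1.0
    def phi_bar(x, a):
        if x == TERMINAL or a == TERMINAL:
            return e_last
        return np.append(np.asarray(phi(x, a), dtype=float), 0.0)
    return phi_bar
```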
We let be the set of all randomized Markov policies in , with the convention that π(_h)= for all π∈ and h∈ [H]. For any policy π→, we extend it to ∪{_1, …, _H} by taking π(_h)= for all h∈[H]. Moving forward, for any h∈[H], we let _h _h ∪{_h}, and define =∪.
We denote expectations and probability laws for trajectories in by and , respectively, and for any '⊆_h, we let _h^π[']^π[_h ∈'] denote the induced law of _h under a policy π in . Furthermore, for any x∈_h, we define the occupancy measure ^π(x) _h^π/ν̅(x) as the density of ^π_h with respect to ν̅= ν +∑_h∈[H]𝕀__h.
We define Φ be the set of all extended feature maps (as in (<ref>)) for ϕ∈Φ. In some proofs, it will be convenient to work with the restriction of the extended feature maps to their first d coordinates; for any ϕ∈Φ, we define
ϕ̃(·,·) (ϕ̅(·,·)[1], …, ϕ̅(·,·)[d])^⊤.
Finally, we the extend the notion of a policy cover to the extended MDP as follows.
For α∈(0,1], η≥ 0, a distribution P∈Δ() is a (α, η)-randomized policy cover relative to Π⊆ for layer h in if
_π∼ P [^π(x)] ≥α·max_π'∈Π^π'(x), for all x∈_h such that max_π'∈Π^π'(x)≥η·[h](x).
Truncated policy class
Next, we introduce the notion of the truncated policy class, generalizing <cit.>. We begin with some preliminary definitions.
For any h ∈ [H], given a collection of policies Π'⊆, we let
_h(Π') = {ϕ̃^⋆,π_h|π∈Π'}, where ϕ̃^⋆,π_h = ^π[ϕ̃^⋆_h(_h, _h)].
Using this, we define the notion of η-reachable states relative to Π'.
For h∈[H] and a policy class Π'⊆, we define the set of η-reachable states at layer h relative to the set Π' as:
_h, η(Π') = {x∈_h |∃ u ∈_h-1(Π') : [h](x)^⊤ u ≥[h](x)·η}.
Given a parameter η>0, we now define the truncated policy class _η inductively as follows: Let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η if and only if there exists π'∈_h-1,η such that for all t ∈[H] and x ∈_t: π(x) = π'(x) if t ≠ h or x ∈_h,η(_h-1,η), and π(x) is the terminal action otherwise.
Finally, we define _η = _H,η.
As in <cit.>, the utility behind the extended MDP and truncated policy class is as follows:
* While the extended MDP does not necessarily enjoy the reachability property (<ref>), it emulates certain properties of reachable MDPs, but only if we compare performance to policies in _η.
* For all reward functions of interest, the best reward that can be achieved by a policy in _η is close to what can be achieved using arbitrary policies in .
§.§ Proof Overview
The proof of <ref> is inductive. For fixed h, the inductive hypothesis is that the distributions over policies P1:h+1 produced by satisfy the property:[By extending policies in to in the fashion described in <ref>, the distributions P1:h can be viewed as distribution over policies in .]
P1,… Ph+1 are (η/32 dK A, η)-randomized policy covers relative to _η for layers 1 through h+1 in ,
where K is defined as in <ref>. Assuming the inductive hypothesis holds, we prove that with high probability, the distribution Ph+2 is a (η/32 dK A, η)-randomized policy cover relative to _η in for layer h+2. This inductive hypothesis is primarily used to show that , as invoked in <ref> is a valid choice for the oracle required by (that is, implements approximate linear optimization over = {^π[ ϕ(_h, _h)ϕ(_h, _h)^⊤] |π∈}, for any choice of ϕ∈Φ), which is proven in <Ref>. With this established, we instantiate the guarantee for from <ref> with and set to the instances of (<ref>) and (<ref>) in , respectively. To conclude the proof of the inductive step, we combine the guarantee for and the guarantee for in <Ref> with a change of measure argument, also enabled by the inductive hypothesis that P1:h are approximate policy covers (i.e. (<ref>)). As in <cit.>, a key feature of the analysis is that we work with the extended MDP and truncated policy class throughout the proof, only passing back to the true MDP once the induction is complete and <ref> has been proven to hold for all layers H. To pass back to the true MDP, we use the following (proven in <ref>).
Let h∈ [H], α∈ (0,1), and η >0 be given.
If P∈Δ() is an (α,η)-randomized policy cover relative to _η for layer h in , then P is an (α/2,)-randomized policy cover relative to for layer h in the true MDP , where 4 H d^3/2η.
In <ref> [resp. <ref>] we show that [resp. ], as invoked in <ref>, instantiates the approximate linear optimization oracle [resp. index-to-matrix oracle ] required by . In <ref> and <ref>, we prove guarantees for the instantiations of and within , respectively. In <ref>, we conclude the proof of <ref>.
§.§ Guarantee for as a Subroutine for
We begin by showing that , as invoked in <ref>, instantiates the approximate linear optimization oracle required by . In particular, we fix a layer h∈[H] and assume that P1:h+1 satisfy (<ref>) and apply the generic guarantees for given in <Ref>.
For M ∈∩_(1) and ϕ∈Φ, define function classes '_1:h(M,ϕ) as follows:
'_t(M,ϕ) {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(√(d))}, ∀ t ∈[h-1] and '_h(M,ϕ) {r'_h(·,·; M,ϕ)} ,
where we define reward functions r'_1:h(·,·;M, ϕ) by:
∀ (x,a)∈×, r'_t(x,a;M,ϕ){[ ϕ(x,a)^⊤ M ϕ(x,a), for t=h,; 0, otherwise. ].
With these rewards and function classes, we will show that for any M ∈∩_(1) and ϕ∈Φ, the output
= (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n)
satisfies the property that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤ M ϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + ,
with high probability once n≥ 1 is sufficiently large; recall that ϕ̃ is the restriction of to its first d coordinates, with defined as in <ref>.
Note that we can equivalently formulate (<ref>) as, for fixed M ∈∩_(1) and ϕ∈Φ, maximizing the sum of the reward functions r'_1:h(·,·;M, ϕ) in (<ref>).
Note that this matches the choice of reward functions in (<ref>) at iteration h, with ϕ = ϕh,k, the feature map returned by in <ref>.
We first verify that the function classes '_1:h(M,ϕ) realize the reward functions specified in (<ref>) in the sense of <Ref>.
For any ϕ∈Φ and M∈∩_F(1), under <ref>, the function classes '_1:h(M,ϕ) in (<ref>) realize the reward functions in (<ref>) in the sense of <ref> (in the true MDP). Furthermore:
* All functions in '_1:h(M,ϕ) take values in [-√(d), √(d)].
* max_t∈[h]ln_'_t(M,ϕ)()≤ln |Φ|+ d ln (√(d) /), where we recall that _() denotes the -covering number for a function class in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and M∈∩_(1), and let r'_t(·,·)≡ r'_t(·,·; M, ϕ) and _t'_t'(M,ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈'_h. For t<h and any π∈^t+1:h, we have from the low-rank structure that for any (x,a)∈_t×, the Q-function Q^π_t satisfies
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, note that for all y∈_t+1,
0≤^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ≤r'_h(·, ·)_∞,
≤M_·sup_x∈_t,a∈ϕ(x,a)^2, (by Cauchy-Schwarz)
≤ 1,
where the last inequality follows by the fact that ϕ(·,·)≤ 1 for all ϕ∈Φ, and that M_≤M_≤ 1. Combining (<ref>) with the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(√(d)).
Thus, by (<ref>) we have
Q_t^π(·,·) ≡ϕ^⋆_t(·,·)^⊤ w_t, with w_t ∈(√(d)).
This, together with the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈'_t, which establishes that '_1:h realize the rewards r'_1:h. The bound on the covering number _'_t() follows from a standard bound on the covering number of the ball (√(d)) <cit.>.
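The linear structure of the Q-function exploited in this proof is easy to verify numerically. The following self-contained sketch (a toy tabular low-rank MDP with P(y|x,a)=ϕ(x,a)^⊤μ(y); all names and sizes are illustrative) checks that the Q-function of an arbitrary suffix policy is exactly linear in the features at layers whose instantaneous reward is zero:

```python
import numpy as np

rng = np.random.default_rng(0)
d, S, A, H = 3, 6, 2, 4
# Low-rank transitions: phi(x,a) on the simplex over [d] and each mu[i]
# a distribution over states, so P(y|x,a) = <phi(x,a), mu(:,y)> is valid.
phi = rng.dirichlet(np.ones(d), size=(S, A))          # shape (S, A, d)
mu = rng.dirichlet(np.ones(S), size=d)                # shape (d, S)
P = np.einsum("xai,iy->xay", phi, mu)                 # shape (S, A, S)
r = np.zeros((H, S, A))
r[H - 1] = rng.uniform(size=(S, A))                   # reward at the last layer only

pi = rng.integers(A, size=(H, S))                     # arbitrary suffix policy
Q = np.zeros((H, S, A))
for t in reversed(range(H)):
    Q[t] = r[t]
    if t + 1 < H:
        V = Q[t + 1][np.arange(S), pi[t + 1]]         # V_{t+1}(y) = Q_{t+1}(y, pi(y))
        Q[t] += P @ V                                 # Bellman backup

# For t < H-1, Q_t(x,a) = phi(x,a)^T w_t with w_t = mu @ V_{t+1}: the
# least-squares fit below recovers it with zero residual.
t = 1
w, *_ = np.linalg.lstsq(phi.reshape(S * A, d), Q[t].reshape(S * A), rcond=None)
assert np.allclose(phi.reshape(S * A, d) @ w, Q[t].reshape(S * A))
```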
Combining <Ref> with <Ref> gives the following bound on the quality of as an approximate linear optimization oracle over the space of policies.
Fix δ∈(0,1) and h∈[H]. Let M∈∩_(1), ϕ∈Φ, and be the output of when given input (h, r'_1:h(·, ·;M,ϕ), '_1:h(M,ϕ), P1:h, n), where
* The reward functions r'_1:h(·, ·;M,ϕ) are as in (<ref>).
* The function classes '_1:h(M,ϕ) are as in (<ref>).
* The distributions P1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈_η^π[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] ≤^[ ϕ̃(_h, _h)^⊤Mϕ̃(_h, _h) ] + _(n,δ),
where _(n,δ) cH d A√(K η^-1 n^-1 (d ln (n d^1/2)+ln (|Φ|/δ))) and c>0 is a sufficiently large absolute constant.
§.§ Guarantee for as a Subroutine for
We now state a performance guarantee for the subroutine (<Ref>), which simply estimates the second moment of the feature embedding of (_h, _h) under policy π by sampling sufficiently many trajectories and taking the empirical second moment. The following result shows that is a valid choice for the subroutine passed to within .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h= (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, with probability at least 1-δ,
M_h - ^π[ϕ(_h,_h)ϕ(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and c>0 is a sufficiently large absolute constant.
Let ϕ∈Φ and π∈. The claim that M_h ∈ follows by the fact that M_h is an empirical average of rank-1 matrices in .
Now, we show (<ref>). By a standard matrix concentration inequality (see for example <cit.>) and the fact that ϕ(x, a)ϕ(x, a)^⊤_≤ 1 for all x ∈ and a ∈ (following from ϕ(·,·)≤ 1), there exists an absolute constant c>0 such that with probability at least 1 - δ,
M_h - ^π[ ϕ(_h, _h) ϕ(_h, _h)^⊤]_≤ c ·√(log(1/δ)/n) .
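For concreteness, here is a self-contained numerical sketch of the estimator and its √(log(1/δ)/n) rate on synthetic roll-in features (uniform unit vectors, whose true second moment is I_d/d; the fixed seed and the constant in the assert make this a sanity check only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 4, 50_000
feats = rng.normal(size=(n, d))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # phi(x_h, a_h) stand-ins

M_hat = feats.T @ feats / n        # average of rank-1 PSD matrices, ||.||_F <= 1
err = np.linalg.norm(M_hat - np.eye(d) / d)            # Frobenius error
assert err < 3 * np.sqrt(np.log(100) / n)              # ~ sqrt(log(1/delta)/n), delta = 1%
```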
Since policies in never take the terminal action, the guarantee in <ref> can also be expressed in the extended MDP as we do in the next corollary.
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output M_h of (h,ϕ(·,·)ϕ(·, ·)^⊤,π, n) (<ref>) satisfies M_h ∈ and, for a sufficiently large absolute constant c>0, with probability at least 1-δ,
M_h - ^π[ϕ̃(_h,_h)ϕ̃(_h,_h)^⊤] _≤_(n,δ),
where _(n,δ) c ·√(n^-1·log( 1/δ)) and ϕ̃ is the restriction of to the first d coordinates; see <ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the instantiation of (<ref>) within .
For the rest of this section, we recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h∈[H-2], and that (Ph,k)_k∈[K] denote the distributions returned by within <ref> at iteration h∈[H-2]. We define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤].
In , we instantiate passing as and as . Combining <Ref> with the general guarantee of in <Ref>, we have the following result.
Let δ,γ∈(0,1) and K≥ 1 be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>, and that P1:h in <ref> satisfy (<ref>). Then, with probability at least 1-δ/3H:
* The number of iterations used by (<ref>) when invoked in <Ref> of <Ref> is at most T ⌈4/γ^2dlog( 1+1/γ)⌉.
* The distribution Ph,k output by is such that | Ph,k|≤ T and for Mh,k as in (<ref>), we have
sup_π∈_η^π[ ϕ̃h,k(_h,_h) ^2_( Mh,k)^-1] ≤ 3 d,
where we recall that ϕ̃h,k is the restriction of h,k to its first d coordinates, and h,k is the extension of ϕh,k to ; see <ref>.
By <Ref>, on the event that the instance of (resp. ) used by satisfies <Ref> with _=2γ/5 [resp. _ = 2 γ^2/10], the two desiderata of the lemma hold; here, we instantiate the guarantee in <ref> with C=2, which is what it is set to in <ref>. We claim that, with probability at least 1- δ/6 T H, each call to and to satisfies <Ref> with
=, _ref=_η, _=, and = {^π[ϕ̃h,k(_h,_h)ϕ̃h,k(_h,_h)^⊤] |π∈}.
Since and are called at most two times per iteration of , a union bound (see <ref>) concludes the proof contingent on the above claim.
We now prove the claim. First, note that the instance of that (<ref>) uses within <ref> is always of the form (see <ref> of <ref>):
(h, r_1:h(·, ·, M/M_), _1:h(M/M_), P1:h, n_)
with r_1:h and _1:h as in <Ref> and M ∈∖{0}; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh,k, which implies that with probability at least 1- δ/6 T H, the output _M of the instance in (<ref>) satisfies:
max_π∈_η^π[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]- ^_M[ ϕ̃h,k(_h, _h)^⊤Mϕ̃h,k(_h, _h) ]
≤ cM_· H d A√(K (d ln (n_ d^1/2)+ln (6 TH|Φ|/δ))/η n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ =·η^-1γ^-2 H^2 d^2K A^2· (d + ln (|Φ|/δ)),
for = (A,d,H,log(|Φ|/δ)) sufficiently large, the right-hand side of (<ref>) is bounded by 2M_γ/5, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within , by <Ref>. The result follows.
§.§ Guarantee for as a Subroutine for
In this subsection, we prove a guarantee for the instantiation of within . Recall that (ϕh,k)_k∈[K] denote the feature maps returned by within (<ref>) at iteration h, and let (Ph,k)_k∈[0 K-1] and ( Ph,k)_k∈[K] be as in <ref>.
Recall that Ph,k-1∈Δ() is the distribution over policies that passes to at outer iteration h∈[H-2] and inner iteration k∈[K] to compute ϕh,k. Thus, by invoking <ref> in <ref> and using that
n_ = ·η^-5 A^2 d^10log (|Φ|/δ)
in <ref> for = (A,d,H,log(|Φ|/δ)) sufficiently large, we immediately obtain the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the class Φ satisfies <ref>. Then, with probability at least 1-δ/3HK, the instance of in <ref> of <ref> runs for t≤'· d iterations for ' = (A,d,H,log(|Φ|/δ)) sufficiently large, and outputs ϕh,k such that for all f∈, there exists w_fh,k∈(3d^3/2) satisfying
_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤' · d^4 n^-1_log(|Φ|/δ) ≤αη^2/32,
where w_f ∫__h+1 f(y) (y) ν(y) and αη/32 d K A.
We note that by the definition of Ph,k-1 in <ref> of <ref>, <ref> implies that, with probability at least 1-δ/3HK, for all k∈[2 K], f∈ and w_f,w_fh,k∈^d as in <ref>,
1/k-1∑_ℓ=1^k-1_π∼Ph,ℓ^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤2 ' · d^4 n^-1_log(|Φ|/δ),
We now instantiate <ref> with B=3d^3/2A^1/2, ^2 =2 ' · d^4 n^-1_log(|Φ|/δ), πℓ = _π∼ Ph,ℓ [π] ∈, for each ℓ∈[k], and
δk=√(∑_a∈(ϕh,k(·,a)^⊤wh,k_f - ϕ_h^⋆(·,a)^⊤w_f)^2),
and make use of the following facts:
* δk_∞≤ 3d^3/2 A^1/2 (since w_f∨w_fh,k≤3 d^3/2 and ϕ_h^⋆(·,·)∨ϕh,k(·,·)≤ 1).
* <ref> sets K = · d^5A/η^2 and n_≥·η^-4A d^10log (|Φ|/δ) with = (A,d,H,log(|Φ|/δ)) sufficiently large.
This leads to the following corollary.
Let δ,η∈(0,1), K≥ 1, and be as in <ref>, and fix h∈[H-2] and k∈[K]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/3H, the outputs (ϕh,k)_k∈[K] of in <ref> at iteration h of <ref> are such that for all f∈, with w_f, w_fh,k∈^d defined as in <ref>,
min_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/128 d.
§.§ Concluding the Proof of <ref>
In this section, we conclude the proof of <ref>. We prove the result as a direct consequence of the following inductive statement.
Consider iteration h∈[H] of (Φ, η, ,δ) (<ref>) with parameters >0,δ, η∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The distributions P1:h+1 at the start of the hth iteration of satisfy (<ref>).
* P1:h+1 are supported on policies that never take the terminal action .
* The input parameter = (A,d,H,log(|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the distribution Ph+2 produced by (Φ,η,,δ) at the end of the hth iteration is an ( η/32 dK A,η)-randomized policy cover relative to _η in for layer h+2, where K is as in <ref>. In addition, Ph+2⊆, and | Ph+2|≤576 d^7/η^4log (1+576 d^4/η^2).
This immediately implies <ref>, which bounds the cardinality of the supports of the distributions returned by <ref>.
Follows immediately from <ref>.
As a first step, we prove that with probability at least 1-δ, P1,… PH are (η/32 dK A, η)-randomized policy covers relative to _η for layers 1 through H in ; that is, we need to show that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>). Now, <ref> implies that P1,… PH are (η/64 dK A, )-randomized policy covers relative to for layers 1 through H in the real MDP M, where 4 H d^3/2η. Plugging in the choice of K in <ref> implies the claim on P1,…, PH.
We now bound the number of trajectories <ref> requires. The total number of trajectories is equal to the sum of the number of trajectories , , and require. We know that and are called T = O(γ^-2 d) times by (<ref>) at each inner iteration k∈[K] of <ref> (γ is defined in <ref>), and is called once. Furthermore, each call to requires H · n_ trajectories, and and require n_ and n_ trajectories, respectively. Thus, the total number of trajectories is equal to
n_· H^2 K T+ n_· H K T + n_· H K
≤O(η^-13 d^27 H^4 A^4 (d + ln (|Φ|/δ))) +O(η^-14 d^28 H A ln (1/δ)) +O(η^-7 d^15 A^3 H ln (|Φ|/δ)),
where the inequality follows by the choice of parameters in <ref>.
This implies the desired bound on the number of trajectories.
Let _h, _h', and _h” denote the success events in <ref>, <ref>, and <ref>, respectively, and note that by the union bound, we have [_h ∩_h'∩”_h]≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'∩”_h.
Using <ref>, the assumption that P1:h+1 satisfy (<ref>) implies that the distributions P1, …, Ph+1 have the property that for all ℓ∈[h+1] and all x∈_ℓ,η(_η),
_π∼ Pℓ*[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π≥α·sup_π∈_η[ℓ](x)^⊤ϕ̅_ℓ-1^⋆,π, for αη/32 dK A.
We will show that with probability at least 1-δ/H, the policy distribution Ph+2 satisfies the same property:
∀ x∈_h+2,η(_η), _π∼ Ph+2*[h+2](x)^⊤ϕ̅_h+1^⋆,π≥α·sup_π∈_η[h+2](x)^⊤ϕ̅_h+1^⋆,π.
By <ref> this is equivalent to the statement that Ph+2 is an ( η/32 dK A,η)-randomized policy cover relative to _η for layer h+2 in .
Throughout the proof, for any ℓ∈[2 H] and z∈_ℓ, we define
π_z ∈_π∈_η^π(z),
and note that by <ref>, we have
π_z ∈_π∈_η[ℓ](z)^⊤ϕ̅_ℓ-1^⋆,π, where ϕ̅_ℓ-1^⋆,π^π[^⋆_ℓ-1(_ℓ-1, _ℓ-1)].
Fix x∈_h+2,η(_η).
In the remainder of the proof, we will argue that Ph+2 satisfies the coverage property <ref> for x.
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)_x^⊤ϕ̅^⋆_h+1(y,π_x(y)), where _x [θ_x^⊤, 0]^⊤ and θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(_η). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y), and w̅_x [w_x^⊤, 0]^⊤∈^d+1.
By definition of π_x, we have that for all y∈_h+1,
_x^⊤ϕ̅^⋆_h+1(y,π_x(y)) = max_a∈_x^⊤ϕ̅^⋆_h+1(y,a),
≤max_a∈_x^⊤ϕ̅^⋆_h+1(y,a), (justified below)
= max_a∈θ_x^⊤ϕ^⋆_h+1(y,a), (since y≠_h+1 and [θ̅_x]_d+1=0)
where (<ref>) follows by the facts that _x^⊤ϕ̅^⋆_h+1(y,)=0 (since ϕ̅^⋆_h+1(·,)≡ e_d+1 and [_x]_d+1=0) and that
∀ a∈, _x^⊤ϕ̅^⋆_h+1(y,a) y≠_h+1=θ_x^⊤ϕ^⋆_h+1(y,a) = [h+2](x)^⊤ϕ_h+1^⋆(y,a)/[h+2](x),
≥ 0. ([h+2](·)^⊤ϕ_h+1^⋆(y,a) is a conditional law)
(<ref>) and the fact that θ_x=1 imply that
f_x|__h+1∈,
where f_x|__h+1 denotes the restriction of f_x to _h+1. We also note that since x∈_h+2,η(_η), we have
_x^⊤ϕ̅_h^⋆, π_x = [ ∫__h+1 f_x(y) (y)^⊤ν(y), 0] ϕ̅_h^⋆, π_x, (by definition of w̅_x in (<ref>))
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_xν(y), (since (y)=[(y)^⊤, 0], for all y≠_h+1)
= ∫__h+1 f_x(y) (y)^⊤ϕ̅_h^⋆, π_x(y), (since f_x(_h+1)=0)
=_x^⊤ϕ̅_h+1^⋆,π_x, (by definition of f_x in (<ref>))
= 1/*[h+2](x)max_π∈_η[h+2](x)^⊤ϕ̃_h+1^⋆,π, (by definition of θ̅_x in (<ref>))
≥η>0,
where (<ref>) uses the definition of reachable states _h+2,η(_η) (see <ref>); we recall (see <ref>) that ϕ̃^⋆,π_h^π[ϕ̃^⋆_h(_h, _h)] and ϕ̃^⋆_h represents the restriction of ϕ̅^⋆_h to its first d coordinates.
Applying the guarantee for
Moving forward, we let (ϕh,k)_k∈[K] be the feature maps returned by within (<ref>) at iteration h, and define ϕ̅^k,π_h^π[h,k(_h,_h)], for any π∈, where we recall that h,k is the extension of ϕh,k to ; see <ref>. Further, for k∈[K], let wh,k_x be the vector wh,k_f in <ref> with f=f_x|__h+1, and note that
w_xh,k≤3d^3/2.
We will use the extended vector w̅_xh,k [(w_xh,k)^⊤,0]^⊤∈^d+1. By Jensen's inequality, we have for all k∈[K],
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤^π_x[(h,k(_h,_h)^⊤h,k_x - ϕ̅_h^⋆(_h,_h)^⊤_x)^2],
= ^π_x[(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
= ^π_x[𝕀{_h ∈_h,η(_η)}·(h,k(_h,π_x(_h))^⊤h,k_x - ϕ̅_h^⋆(_h,π_x(_h))^⊤_x)^2],
≤^π_x[𝕀{_h ∈_h,η(_η)}·∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
where the last inequality follows by the fact that h,k(·,)≡ϕ̅^⋆_h(·,) ≡ e_d+1 and [w̅_xh,k]_d+1=[w̅_x]_d+1=0 (by definition). Thus, for g(y) 𝕀{y∈_h,η(_η)}·∑_a∈(ϕ̅h,k(y,a)^⊤_xh,k - ϕ̅_h^⋆(y,a)^⊤_x )^2, (<ref>) implies that
( h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x)^2
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_x_h-1(y),
≤∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_y_h-1(y), (by definition of π_y ((<ref>)) and (<ref>)))
≤α^-1_π∼ Ph[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (by (<ref>) with ℓ=h, and g(y)=0 for all y∉_h,η(_η))
≤ 2 α^-1_π∼Ph,k-1[ ∫__h g(y)[h](y)^⊤ϕ̅^⋆,π_h-1(y)], (Ph,k-1 as in <ref> of <ref>)
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(h,k(_h,a)^⊤h,k_x - ϕ̅_h^⋆(_h,a)^⊤_x)^2],
= 2 α^-1_π∼Ph,k-1^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2],
where (<ref>) follows by the fact that the policies in the support of Ph,k-1 never take the terminal action (by assumption) and that h,k(x,a)^⊤h,k_x - ϕ̅_h^⋆(x,a)^⊤_x=ϕh,k(x,a)^⊤wh,k_x - ϕ_h^⋆(x,a)^⊤w_x for all a∈ whenever x≠_h. We note that Ph,k-1 is the distribution over policies that passes to to compute ϕh,k. Thus, since w_x = ∫__h+1 f_x(y) (y) ν(y) (see (<ref>)) and f_x|__h+1∈ (see (<ref>)), the guarantee for in <ref> together with (<ref>), implies that (recall that we condition on the event )
∀ k∈[K], | h,k_x_h^k,π_x- _xϕ̅_h^⋆, π_x| ≤η/4.
Since _xϕ̅_h^⋆, π_x≥η (see (<ref>)), (<ref>) implies that under , we have
∀ k∈[K], _xϕ̅_h^⋆, π_x≤4/3h,k_x_h^k,π_x.
Applying the guarantee for
To proceed, define
ℓ∈_k∈[K]_π∼ Ph,k^π[∑_a∈(ϕh,k(_h,a)^⊤wh,k_x - ϕ_h^⋆(_h,a)^⊤w_x)^2].
Note that by <ref>, we have,
_π∼ Ph,ℓ^π[∑_a∈(ϕh,ℓ(_h,a)^⊤wh,ℓ_x - ϕ_h^⋆(_h,a)^⊤w_x)^2] ≤η^2/128 d.
Let γ be as in <ref>, and for each k∈[K] define
Mh,kγ I + _π∼ Ph,k^π[ϕh,k(_h,_h)ϕh,k(_h,_h)^⊤], and Mh,k[ Mh,k 0_d × 1; 0_1 × d 0 ]∈^(d+1)× (d+1).
From (<ref>), Hölder's inequality, and AM-GM, we have
_xϕ̅_h^⋆, π_x ≤4/3*w̅h,ℓ_x _ Mh,ℓ·^ℓ, π_x_h_( Mh,ℓ)^, (( Mh,k)^ denotes the pseudo-inverse of Mh,k)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^ℓ, π_x_h^2_( Mh,ℓ)^,
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[h,ℓ(_h,_h)^2_( Mh,ℓ)^], (Jensen's inequality)
≤8d/η*w̅h,ℓ_x^2_ Mh,ℓ + η/12 d^π_x[ϕ̃h,ℓ(_h,_h)^2_( Mh,ℓ)^-1].
By <ref> (in particular (<ref>)), we have that under the event _h”,
^π_x[ϕ̃h,ℓ(_h,_h)^2_( Mh,ℓ)^-1] ≤ 3 d.
Combining this with (<ref>), it follows that
_xϕ̅_h^⋆, π_x ≤η/4 + 8d/η*w̅h,ℓ_x ^2_ Mh,ℓ ,
= η/4 + 8d/η·*wh,ℓ_x^2_ Mh,ℓ,
=η/4+ 8dγ/η·*wh,ℓ_x^2 + 8d/η·_π∼ Ph,ℓ^π[ ( ϕh,ℓ(_h,_h)^⊤wh,ℓ_x)^2 ],
≤η/4+ 72 d^4γ/η + 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ]+ η/8, (see below)
≤η/2+ 16 d/η·_π∼ Ph,ℓ^π[ ( ϕ^⋆_h(_h,_h)^⊤w_x)^2 ],
where (<ref>) follows by (<ref>), (<ref>), and that (a+b)^2 ≤ 2a^2 +2b^2. The last inequality follows by the parameter choice γ = η^2/576 d^4 (see <ref>).
Concluding
By the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1.
Plugging this into (<ref>), we have
_xϕ̅_h^⋆, π_x - η/2
≤16 d/η·_π∼ Ph,ℓ^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] ,
≤16 d A/η·_π∼ Ph,ℓ^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)] , (see below)
= 16 d A/η·_π∼ Ph,ℓ^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
≤16 d A K/η·_π∼ Ph+2^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= 16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and (<ref>) follows by definition of Ph+2 in <ref>.
Combining (<ref>) with the fact that _xϕ̅_h^⋆, π_x≥η (see (<ref>)) yields
1/2·μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_x_h+1 ≤16 d A K/η·_π∼ Ph+2[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆,π_h+1],
= 16 d A K/η·_π∼ Ph+2[ μ̅_h+2^⋆(x)^⊤/μ̅_h+2^⋆(x)ϕ̅^⋆,π_h+1],
where the last equality follows by the fact that policies in the support of Ph+2 never take the terminal action. This establishes (<ref>). Since this argument holds uniformly for all x∈_h+2,η(_η), the proof is completed. The bound on | Ph+2| follows immediately from <ref> and the choice of γ in <ref>.
§.§ Proof of <ref>
Let h∈ [H] and P∈Δ() be a (C,γ)-generalized optimal design (see <ref>) for the set
_h{^π[(_h, _h)(_h, _h) ^]|π∈}.
Further, define P'=∑_π∈(P)P(π)·_π∘_h+1 and
M_PγI_d+_π∼P^π*(_h, _h)(_h, _h) ^.
We will show that P' is an (α,η)-randomized policy cover for layer h+2 with αη/2 d A and η 4 d √((1+C)γ).
Let x∈_h+2,η() and π_x ∈_π∈ d^π(x).
Preliminaries
We begin with some notation. Let us introduce a function f_x: _h+1→ defined by
f_x(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Note that [h+2](x)>0, since x∈_h+2,η(). Next, we define
w_x ∫__h+1 f_x(y) (y) ν(y) ∈^d.
Since f_x takes values in [-1,1] (because ϕ_h+1^⋆(· , ·)≤ 1 and θ_x≤ 1), the normalizing assumption on μ^⋆_h+1 in (<ref>) implies that
w_x ∈(2√(d)).
We also note that the definitions of f_x and w_x imply that
w_x^⊤ϕ_h^⋆, π_x = θ_x^⊤ϕ_h+1^⋆,π_x = sup_π∈θ_x^⊤ϕ_h+1^⋆,π, (by definition of π_x)
= 1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π, (by definition of θ_x in (<ref>))
≥η>0,
where the penultimate inequality follows by the fact that x∈_h+2,η().
Using the generalized optimal design property
By Hölder's inequality, we have for any ν>0,
w_x^⊤ϕ_h^⋆,π_x ≤w_x_M_P·ϕ^⋆, π_x_h_M_P^-1,
≤1/2νw_x^2_M_P + ν/2ϕ^⋆, π_x_h^2_M_P^-1, (AM-GM)
≤1/2νw_x^2_M_P + ν/2^π_x[ ϕ^⋆_h(_h, _h)^2_M_P^-1], (Jensen's inequality)
= 1/2νw_x^2_M_P + ν/2(M_P^-1^π_x[ ϕ^⋆_h(_h, _h) ϕ^⋆_h(_h, _h)^⊤] ),
≤1/2νw_x^2_M_P + ν· d(1+C)/2, (P is a (C,γ)-generalized optimal design)
= γ/2νw_x^2 + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2, (by definition of M_P)
≤2γ d/ν + 1/2ν_π∼ P^π[(w_x^⊤ϕ^⋆_h(_h,_h))^2] + ν· d(1+C)/2,
where the last inequality follows by the bound on w_x in (<ref>). Now, by the definition of w_x, the fact that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,π_x(y))≥ 0 is a conditional density for all y∈_h+1, and Jensen's inequality, we have:
∀ (y',a')∈_h×, (ϕ^⋆_h(y',a')^⊤ w_x )^2 = (∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y))^2,
≤∫__h+1(μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) )^2 μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
≤∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(y',a') ν(y),
where the last inequality follows by Cauchy-Schwarz and that ϕ^⋆_h+1(·, ·)≤ 1. Plugging (<ref>) into (<ref>) and rearranging, we obtain: for all ν>0,
w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2
≤1/2ν_π∼ P^π[∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,π_x(y)) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)],
≤A/2ν_π∼ P^π[1/A∑_a∈∫__h+1μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(y,a) μ_h+1^⋆(y)^⊤ϕ^⋆_h(_h,_h) ν(y)], (see below)
= A/2ν_π∼ P^π∘_h+1π_[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where (<ref>) uses that μ_h+2^⋆(x)^⊤ϕ^⋆_h+1(y,a) is non-negative for all (y,a)∈_h+1× (since it is a conditional density), and the last inequality follows by definition of P'. Now, using (<ref>), we get: for ν2 √(γ (1+C)^-1),
1/2w_x^⊤ϕ_h^⋆,π_x ≤w_x^⊤ϕ_h^⋆,π_x - η/2,
≤w_x^⊤ϕ_h^⋆,π_x -2 d√((1+C)γ), (using that γ = η^2 d^-2 (1+C)^-1/16)
≤w_x^⊤ϕ_h^⋆,π_x - 2γ d/ν - ν· d(1+C)/2, (by the choice of ν)
≤A/2ν_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ], (by (<ref>))
= A/4 √(γ (1+C)^-1)_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
= Ad/η_π∼ P'^π[ μ_h+2^⋆(x)^⊤/μ_h+2^⋆(x)ϕ^⋆_h+1(_h+1,_h+1) ],
where the last equality uses that γ = η^2 d^-2 (1+C)^-1/16.
Rearranging implies that P' is an (η/2d A,η)-randomized policy cover for layer h+2.
§ GENERIC GUARANTEE FOR
In this section we give a generic guarantee for the (<ref>). We consider the abstract framework introduced in <ref>, in which the aim is to compute a generalized optimal design for an implicitly specified set of matrices
=*W^z_z∈⊆ indexed by an abstract set . We assume that subroutines and used by satisfy the following assumption.
[Approximation guarantee for and ]
Consider an abstract set and a collection of PSD matrices {W^z∈^d× d| z∈} indexed by elements in . There exist _,_>0 and reference subsets _ref, _⊆ such that for any M ∈ and P∈Δ(_), the outputs ẑ_M (M/M_) and W_P (P)∈ satisfy ẑ_M∈_ and
sup_z∈_ref(M W^z) ≤(M W^ẑ_M)+_·M_ , and W_P - _z∼ P[W^z]_≤_.
For our application to RL, the sets _ref and _ are useful to accommodate algorithms that optimize relative to restricted policy sets.
Given such subroutines and , and γ>0, ((·),(·), ·,γ) applies the Frank-Wolfe (conditional gradient) method to approximately solve the optimization problem
_P ∈Δ() F(P), where F(P)-log(γ I_d + _z∼ P[W^z]).
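To illustrate the update rule analyzed below, here is a minimal sketch of the loop for a finite index set with exact oracles (so _=_=0); the step size μ and the iteration budget are taken from the analysis, and all names are illustrative:

```python
import numpy as np

def frank_wolfe_design(Ws, gamma=0.1, C=2.0):
    """Sketch of the log-det Frank-Wolfe loop: each Ws[z] is PSD with
    Frobenius norm at most 1; stop once tr(M_P^{-1} W^z) <= (1+C) d for
    every z, i.e. once P is a generalized optimal design."""
    W_arr = np.stack(Ws)                               # shape (|Z|, d, d)
    d = W_arr.shape[1]
    P = np.zeros(len(Ws))
    P[0] = 1.0                                         # start from a point mass
    mu = C * gamma**2 * d / 8                          # step size from the analysis
    T = int(16 * np.log(1 + 1 / gamma) / (gamma**2 * C**2 * d)) + 1
    for _ in range(T):
        M = gamma * np.eye(d) + np.einsum("z,zij->ij", P, W_arr)
        scores = np.einsum("ij,zji->z", np.linalg.inv(M), W_arr)  # tr(M^{-1} W^z)
        z = int(np.argmax(scores))                     # exact linear optimization oracle
        if scores[z] <= (1 + C) * d:
            break                                      # termination condition met
        P *= 1 - mu
        P[z] += mu                                     # Frank-Wolfe step toward e_z
    return P
```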
Letting {W^z | z∈} and assuming that ⊆_(1), the main result for this subsection (<ref>) bounds the number of iterations used by ((·),(·), ·,γ) under <ref> and gives a guarantee for the output.
Let C∈(1,2] and γ∈(0,1) be such that γ C<5/2, and suppose that the collection {W^z | z ∈} consists of PSD matrices of Frobenius norm bounded by 1. If (<Ref>) is run with parameters C, γ and , satisfying <ref> with _=Cγ/5 and _=Cγ^2 /10, then the algorithm terminates after t ≤16 γ^-2C^-2 d^-1ln (1 + 1/γ) iterations,[While it may seem odd at first glance that the iteration complexity for scales with d^-1, we note that the non-trivial regime in <ref> is when γ≤ 1/d. This is because for γ≥ 1/d, we have (M_P^-1 W^z)≤ d for any P∈Δ() and z∈, since M_P≽ I_d/d and W^z∈∩_(1). Whenever γ≤1/d, the iteration complexity for increases with d, as expected.] and requires at most twice that many calls to each of and . Furthermore, the output P_t of is such that P_t∈Δ(_),
|supp P_t|≤ t, and
sup_z∈_ref(M_P_t^-1 W^z) ≤ (1+3C/2) · d, where M_P_tγ I_d +_z∼ P_t[ W^z].
Let F be as in (<ref>). For z∈ and P∈Δ(), define M^zγ I_d + W^z, W_P_z∼ P[W^z], and M_P γ I_d + W_P. Throughout the proof, we will use that the function f: M ↦ -log M defined over has the following gradient and Hessian expressions:
∇ f(M)[H] = - (M^-1 H) and ∇^2 f(M)[H,H] = (M^-1 H M^-1 H),
for all H∈^d× d.
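These identities are standard; as a quick numerical sanity check of the gradient expression (illustrative code, independent of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
B = rng.normal(size=(d, d))
M = np.eye(d) + B @ B.T                                # positive definite point
Hm = rng.normal(size=(d, d)); Hm = (Hm + Hm.T) / 2     # symmetric direction

f = lambda X: -np.log(np.linalg.det(X))
eps = 1e-6
fd = (f(M + eps * Hm) - f(M - eps * Hm)) / (2 * eps)   # central finite difference
assert np.isclose(fd, -np.trace(np.linalg.solve(M, Hm)), atol=1e-5)
```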
To begin, by Taylor's theorem and the fact that the set of PSD matrices is convex, for any P,P'∈, there exists λ∈[0,1] such that, defining M_λλ M_P + (1-λ) M_P'∈,
F(P') - F(P) = f(M_P') -f(M_P),
= ∇ f(M_P)[M_P'-M_P] + 1/2∇^2 f(M_λ)[M_P'-M_P, M_P'-M_P] ,
= - (M_P^-1 (W_P'- W_P)) + 1/2(M^-1_λ (W_P'-W_P) M^-1_λ(W_P'- W_P)),
≤- (M_P^-1 (W_P'- W_P)) + 1/2γ^2W_P' - W_P^2_,
where the last inequality follows because for all z∈, M^z = γ I_d + W^z≽γ I_d, since W^z∈. We also note that by definition of F in (<ref>) and the fact that ⊂∩_(1), we have
sup_P,P'∈Δ() F(P') - F(P) ≤ dln (1 + 1/γ),
since the determinant of a matrix is bounded by the product of the norms of its columns.
Bounding the number of iterations
If <ref> has not terminated at iteration ℓ≥ 1, then
(M_ℓ^-1W_ℓ)>(1+C)d,
where M_ℓ = γ I_d + (P_ℓ), W_ℓ = (𝕀_z̃_ℓ), and z̃_ℓ = (M_ℓ^-1/M_ℓ^-1_F). Since satisfies <ref> with _= γ^2 C/10, we have that
M_P_ℓ - M_ℓ_∨W^z̃_ℓ - W_ℓ_≤γ^2 C/10.
Furthermore, since M_P_ℓ≽γ I_d (because ⊆), we have using Cauchy-Schwarz
rM_P_ℓ^-1· (M_ℓ - M_P_ℓ)_≤M_P_ℓ^-1_·M_P_ℓ - M_ℓ_≤γ C/10<1/4,
where the last inequality follows by the fact that γ C<5/2.
On the other hand, by <ref>, instantiated with A = M_P_ℓ and E = M_ℓ -M_P_ℓ, we have that
M_P_ℓ^-1 - M_ℓ^-1_≤M_ℓ -M_P_ℓ_/1-r·M_P_ℓ^-1_^2 ≤4/3 γ^2γ^2 C/10 , (by (<ref>), (<ref>), and M_P_ℓ≽γ I_d)
= 2C/15≤C/5.
Note also that since only returns matrices in (see <ref>), we have M_ℓ≽γ I_d, and so
M_ℓ^-1_≤1/γ.
Using (<ref>)-(<ref>) and the triangle inequality, we obtain
(M_P_ℓ^-1 W^z̃_ℓ) = ((M_P_ℓ^-1 -M_ℓ^-1) W^z̃_ℓ) + (M_ℓ^-1 (W^z̃_ℓ-W_ℓ)) + (M_ℓ^-1 W_ℓ),
> - M_P_ℓ^-1 -M_ℓ^-1_·W^z̃_ℓ_ -M_ℓ^-1_·W^z̃_ℓ-W_ℓ_ + (1+C)d, (by (<ref>))
≥ - C/5 - 1/γ·γ C/5+ (1+C)d, (by ⊆_(1) and (<ref>)-(<ref>))
≥ - C/2 + (1+C)d.
Now, recall that μ = Cγ^2 d/8. Instantiating (<ref>) with P'=P_ℓ+1 and P=P_ℓ and using (<ref>), we have
F(P_ℓ+1) ≤ F(P_ℓ) + (M_P_ℓ^-1 (W_P_ℓ- W_P_ℓ+1)) + 2/γ^2W_P_ℓ+1- W_P_ℓ^2_,
= F(P_ℓ) + μ·(M_P_ℓ^-1 (W_P_ℓ- W^z̃_ℓ)) + μ^2/2γ^2W^z̃_ℓ- W_P_ℓ^2_,
< F(P_ℓ) + μ·(C/2 - (1+C)d + (M_P_ℓ^-1 W_P_ℓ) ) + 2 μ^2/γ^2, (by ⊆_(1) and (<ref>))
≤ F(P_ℓ) - μ Cd/2 + 2μ^2/γ^2, (see below)
≤ F(P_ℓ) - γ^2 C^2 d^2/16 ,
where (<ref>) follows by the fact that (M_P_ℓ^-1 W_P_ℓ) ≤ d, and the last inequality follows by the choice of μ in <ref>. If the algorithm runs for t≥ 1 iterations, then summing (<ref>) and telescoping, we have
- (t-1) γ^2 C^2 d^2/16 > F(P_t)- F(P_1) ≥inf_P,P'∈Δ() F(P)-F(P') ≥ -d ln (1+1/γ),
where the last inequality follows by (<ref>). By rearranging, we conclude that
t < 1 + 16 γ^-2C^-2 d^-1ln (1 + 1/γ),
giving the claimed bound on the number of iterations.
Guarantee for the last iterate
Suppose the algorithm terminates at step t. Since and satisfy <ref> with _= Cγ/5, the iterates at step t satisfy (<ref>) in addition to
sup_z∈_(M_t^-1 W^z) ≤(M_t^-1 W^z̃_t) + C γM_t^-1_/5,
≤(M_t^-1 W^z̃_t) + C d^1/2M_t^-1_ /5,
≤(M_t^-1 W^z̃_t) + Cd^1/2 /5,
where the last inequality follows by (<ref>).
Combining this with the termination condition (M_t^-1W_t) ≤(1+C)d, we have that
sup_z ∈_(M_P_t^-1 W^z)
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z)+ sup_z ∈_(M_t^-1 W^z),
≤sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W^z̃_t) +C d^1/2/5, (by (<ref>))
= sup_z ∈_((M_P_t^-1-M_t^-1) W^z) + (M_t^-1 W_t)+ (M_t^-1 (W^z̃_t -W_t)) +C d^1/2/5,
≤sup_z ∈_M_P_t^-1 -M_t^-1_·W^z_ + (1+C)d+M_t^-1_·W^z̃_t- W_t_ + C d^1/2/5, (see below)
≤2C/15+ (1+C)d+1/γ·C γ^2/10 + C d^1/2/5, (by (<ref>)-(<ref>) and ⊆_(1))
≤ (1+3C/2)· d,
where (<ref>) follows by Cauchy-Schwarz and (M_t^-1W_t) ≤(1+C)d. This completes the proof.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for (<ref>). Compared to previous guarantees in <cit.>, we prove a fast 1/n-type rate of convergence for , and show that the algorithm succeeds even when the norm of the weight w in <ref> does not grow with the number of iterations. We also use the slightly simpler discriminator class:
{. f x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ}.
The main guarantee for is as follows.
Let h∈ [H], δ∈(0,e^-1), and n∈ℕ be given, and suppose that satisfies the normalization assumption in <ref>.
For any function f ∈, define
w_f = ∫__h+1 f(x) _h+1(x) ν(x).
Let P∈Δ() be a distribution over policies, be as (<ref>), and
Φ be a feature class satisfying <ref>. With probability at least 1 - δ, with input (h, , Φ, P, n) terminates after t≤ T*d log_3/2 (2n d^-1/2) iterations, and its output ϕt satisfies
sup_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤_^2(n,δ),
where _^2(n,δ) c T d^3 n^-1log(|Φ|/δ), for some sufficiently large absolute constant c>0.
To prove the theorem, we need a technical lemma, which follows from <cit.>.
Consider a call to (h, , Φ, P, n) (<ref>) in the setting of <ref>. Further, let _ be as in <ref> and define
(ϕt, wt_1,…, wt_t-1)∈_ϕ∈Φ,(w_1,…,w_t-1)∈(2√(d))^t-1∑_ℓ=1^t-1_(ϕ,w_ℓ,fℓ).
For any δ∈(0,1), there is an event t(δ) of probability at least 1-δ such that under t(δ), if <ref> does not terminate at iteration t≥ 1, then for wℓ w_fℓ:
∑_ℓ =1^t-1_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ - ϕ_h^⋆(_h,_h)^⊤ wℓ)^2] ≤ t _^2(n,δ),
inf_w ∈3/2(d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2] > 8 d t_^2(n,δ),
where ^2_(n,δ) c d^2 n^-1ln(|Φ|/δ) and c≥1 is a sufficiently large absolute constant.
With this, we prove <ref>.
Let us abbreviate _(n,δ), with _(n,δ) defined as in <ref>. Further, let N 1+ *d log_3/2 (2d^3/2/), δ' δ/2N, and define __(n,δ').
Note that ≤_ and N -1 ≤ T, where T is the number of iterations in the theorem statement; the latter inequality follows by the facts that the absolute constant c in <ref> is at least 1 and ln (|Φ|/δ)≥1. We define an event 1(δ')∩…∩N(δ'), where (^t(·))_t are the success events in <ref>. Note that []≥ 1 - δ/2 by the union bound. Throughout this proof, we condition on the event .
To begin the proof, we define a sequence of vectors (v_1:dℓ)_ℓ≥ 0 in an inductive fashion, with v_iℓ∈^d for all i∈[d] and ℓ≥0. For ℓ=0, we let v_i0 = e_i/d, for all i∈[d]. For ℓ≥ 1, we consider two cases:
* Case I: If ℓ{j ∈[d] | |(V_-jℓ-1, wℓ)|>(1+C)· |(Vℓ-1)|}≠∅,
where Vℓ-1 (v_1ℓ-1,…, v_dℓ-1)∈^d× d and wℓw_fℓ, then we let j_j'∈ℓj' and define
* Case II: If ℓ=∅, we let v_iℓ = v_iℓ-1, for all i∈[d].
We first show that t≠∅ at any iteration t∈[N] where does not terminate. Let t∈[N] be an iteration where the algorithm does not terminate, and suppose that t=∅. This means that
∀ j∈[d] , |(V_-jt-1, wt)|≤ (1+C)· |(Vt-1)|.
Now, since (Vt-1)≠ 0 (note that *(Vt) is non-decreasing with t), we have that span( Vt-1)= ^d. Thus, there exist β_1,…, β_d∈ such that wt= ∑_i=1^d β_i vt-1_i. By the linearity of the determinant and (<ref>), we have
∀ j ∈[d], (1+C)|·(Vt-1)| ≥ |(V_-jt-1, wt)|,
= |(V_-jt-1, ∑_i=1^d β_i vt-1_i )|,
= *∑_i∈[d]β_i·(V_-jt-1, v_it-1),
= |β_j| · |(Vt-1)|.
This implies that |β_j|≤ (1+C) for all
j∈[d]. Now, note that by the definition of (v_it-1), we have that for any i∈[d] such that v_it-1≠ e_i/d, there exists ℓ∈ [t-1] such that wℓ= v_it-1. Let
t{i∈[d]| v_it-1≠ e_i/d},
and for any i∈t, let ℓ_i∈[t-1] be such that wℓ_i= v_it-1. Further, define
wt∑_i∈tβ_i wℓ_i= ∑_i∈tβ_i v_it-1,
and note that by the triangle inequality and the fact that wt=∑_i=1^d β_i v_it-1, we have
wt- wt≤ (1+C)_.
Finally, with the notation in (<ref>), define
wt_t ∑_i∈tβ_i wt_ℓ_i, and note that wt_t ∈ (1+C) (2d^3/2),
since |β_i| ≤ (1+C) for all i∈[d], |t|≤ d, and wt_ℓ∈(2√(d)), for all ℓ∈[t-1]. Now, by <ref>, in particular (<ref>), we have
∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ≤ t _^2,
where _ is as in (<ref>). Using the
expressions in <ref> with (<ref>) and Jensen's inequality, we have that under t,
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2]
≤(∑_j∈t |β_j|) ·∑_i∈t_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_ℓ_i - ϕ_h^⋆(_h,_h)^⊤ wℓ_i)^2] ,
≤ (1+C) d t _^2.
Now, using (<ref>) and the facts that (a+b)^2 ≤ 2a^2 + 2 b^2 and ϕ^⋆_h_2≤ 1, we have that
_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ wt_t - ϕ_h^⋆(_h,_h)^⊤ wt)^2] ≤ 2(1+C)^2 ^2 + 2(1+C)dt _^2,
≤ 2(1+C)^2 ^2_ + 2(1+C)dt _^2.
Using that C=1/2, we conclude that the right-hand side of this inequality is bounded by 8 d t_^2, which is a contradiction, since wt_t ∈ (1+C)(2d^3/2) = (3d^3/2) and by <ref>, we must have
inf_w∈(3d^3/2)_π∼ P^π∘_h π_[( ϕt(_h,_h)^⊤ w- ϕ_h^⋆(_h,_h)^⊤ wt)^2]> 8 d t _^2
if does not terminate at round t.
Therefore, we have that t≠∅, for any
iteration t∈[2 N] where does not
terminate.
We now bound the iteration count and prove that the guarantee in
<ref> holds at termination. Note that whenever ℓ≠∅ for ℓ>1, we have by construction:
|(Vℓ)| > 3/2 · |(Vℓ-1)|.
Thus, if runs for t∈[2 N] iterations, then
|(Vt)| > (3/2)^t-1· |(V1)|.
On the other hand, since the determinant of a matrix is bounded by the product of the norms of its columns and v_1:dt∈(2√(d)), we have
|(Vt)| ≤ 2^d d^d/2.
Note also that |(V0)| = (/d)^d. Plugging this
into (<ref>), we conclude that
(3/2)^t-1 < (2d^3/2/)^d.
Taking the logarithm on both sides and rearranging yields
t < 1+ d log_3/2 (2d^3/2/)≤ N.
Thus, the algorithm must terminate after at most N-1 iterations. Furthermore, by <cit.>, we have that with probability at least 1-δ/2N, if the algorithm terminates at iteration t, then
max_f∈inf_w ∈(3d^3/2)_π∼ P^π∘_h π_[(w^⊤ϕt(_h,_h)- w_f^⊤ϕ_h^⋆(_h,_h) )^2] ≤ 32 t _^2,
≤ 32 (N-1)_^2,
≤ 32 T _^2.
Applying a
union bound completes the proof.
§ GENERIC GUARANTEES FOR
In this section, we present self-contained guarantees for (<ref>). We show that given any reward functions r_1:h:×→_≥ 0 and function classes _1:h, where _t⊆{g: _t×→} for t∈[h], that “realize” these reward functions (we formalize this in the next definition), if P1:h are (approximate) policy covers for layers 1 through h, then for sufficiently large n≥ 1 and with high probability, the output = (h,r_1:h, _1:h, P1:h, n) is an approximate maximizer of the objective
max_π∈^π[∑_t=1^h r_t(_t,_t)].
To formalize this result, we define the notion of realizability we require for the function classes _1:h.
We say that function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize reward functions r_1:h:×→ if for all t∈[h] and all π∈^t+1:h,
Q_t^π∈_t, where Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that Q^π_t in (<ref>) represents the state-action value function (Q-function) at layer t∈[h] with respect to the rewards r_1:h and partial policy π.
In what follows, given a function class ⊆{g: ×→}, we use _() to denote the -covering number of in ℓ_∞ distance.
A set of functions {g_1, …, g_N}⊂{g: ×→} is an -cover of ⊆{g:×→} in ℓ_∞-distance if for all g∈, there exists i ∈ [N] such that
g - g_i_∞≤.
The -covering number _() is the size N of the smallest -cover of .
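For reference, the ℓ_∞ covering-number bounds for linear classes invoked above (e.g. in <ref>) follow from the standard volumetric estimate; the following is a sketch with the usual constants:

```latex
% Sketch: for G = { x -> <w, phi(x)> : w in B(R) } with ||phi(.)|| <= 1, a
% Euclidean eps-net of B(R) induces an ell_infty eps-cover of G. Comparing
% volumes (for eps <= R):
\[
  N_{\epsilon}\big(B(R)\big) \;\le\; \Big(1 + \tfrac{2R}{\epsilon}\Big)^{d}
  \;\le\; \Big(\tfrac{3R}{\epsilon}\Big)^{d}
  \quad\Longrightarrow\quad
  \ln N_{\mathcal{G}}(\epsilon) \;\le\; d \ln\!\big(\tfrac{3R}{\epsilon}\big),
\]
% and taking a union over the feature class adds ln|Phi|.
```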
§.§ Intermediate Results for
To prove our main guarantees for (stated in the next subsection), we first establish two intermediate lemmas. The first shows that for any policy π, the corresponding Q-function is the Bayes-optimal predictor for the regression problem solved in when π is executed.
Let reward functions r_1:h:×→, P∈Δ(), and ∈^t+1:h be given. Fix t∈[h], and let g^P,_ denote the Bayes-optimal predictor[Observe that because this loss is strongly convex with respect to the prediction, the Bayes-optimal predictor is unique up to sets of measure zero.] for the sum of rewards under a policy π sampled from P and composed with via π∘_t∘_t+1; that is,
g^P,_∈_ g : _t ×→_π∼ P^π∘_t π_∘_t+1[( g(_t, _t) - ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) )^2].
Then, g^P,_(·,·)≡ Q^_t(·,·), where Q^_t is the Q-function defined in (<ref>) for the partial policy ∈^t+1:h and rewards r_1:h.
The least-squares solution g^P,_ of the problem in (<ref>) satisfies, for all a∈ and x∈_t,
g^P,_ (x,a) = _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t^h r_ℓ(_ℓ,_ℓ) | _t =x ,_t =a ],
= [ r_t(_t,_t)|_t = x,_t = a]+ _π∼ P^π∘_t π_∘_t+1[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a],
= r_t(x,a) +^[ . ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) | _t = x, _t =a], (see below)
= Q_t^(x,a),
where (<ref>) follows by the fact that conditioned on (_t,_t)=(x,a), the sum of rewards ∑_ℓ=t+1^h r_ℓ(_ℓ,_ℓ) depends only on and not on the policy used to roll in to layer t.
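Concretely, the layer-t regression whose Bayes-optimal solution the lemma identifies can be sketched as follows (a hypothetical `rollout` interface that plays π∼P to layer t, a uniform action at layer t, and the fixed suffix policy afterwards; none of these names appear in the algorithm itself):

```python
import numpy as np

def fit_layer_t(rollout, phi, n, d):
    """Sketch: each rollout() returns (x_t, a_t, reward_to_go), where
    reward_to_go sums r_t + ... + r_h along the sampled trajectory. We
    regress it on phi(x_t, a_t); by the lemma, the population solution
    of this least-squares problem is the Q-function Q_t."""
    X = np.zeros((n, d))
    y = np.zeros(n)
    for i in range(n):
        x_t, a_t, reward_to_go = rollout()
        X[i] = phi(x_t, a_t)
        y[i] = reward_to_go
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # unconstrained least squares
    return w
```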
The next lemma shows that the solution t to the least-squares problem in (<ref>) of <ref> is close to the Q-function in the appropriate sense.
Let δ∈(0,1), B>0, n≥ 1, and h ∈[H] be fixed. Further, let (_, r_1:h, _1:h, P1:h) be such that
* _(n,δ)^2 = cB^2A/n (max_t∈[h]ln__t(1/n)+ln (n/δ)), where c>0 is a sufficiently large absolute constant.
* The function classes _1:h realize the reward functions r_1:h: ×→ (in the sense of <Ref>).
* The functions in _1:h are bounded in absolute value by B uniformly.
* P1,…,Ph∈Δ().
Then, for t∈[h], the solution t to the least-squares problem in (<ref>) in <ref> when invoked as (h, r_1:h, _1:h, P1:h, n) satisfies with probability at least 1-δ,
_π∼ Pt^π[ max_a∈( t(_t,a) - Q_t^t+1(_t, a) )^2 ]≤^2_(n,δ),
where t+1∈^t+1:h is defined as in <ref>.
Fix t∈[h] and abbreviate
gt_ g^Pt,t+1_,
where g^Pt,t+1 is defined as in <ref> (with P= Pt, = t+1, and reward functions r_1:h as in the lemma statement). By <ref>, gt_ is the Bayes-optimal solution to the least-squares problem in (<ref>) of <ref>. Thus, since _1:h realize the reward functions r_1:h, a standard uniform-convergence guarantee for least-squares regression (see e.g. <cit.> with = 0 almost surely) implies that there exists an absolute constant c>0 (independent of t,h, and any other problem parameters) such that with probability at least 1-δ,
_π∼ Pt^π∘_tπ_∘_t+1t+1[ ( t(_t,_t) - gt_(_t,_t) )^2 ]≤ c· B^2 ·ln__t(1/n)+ln (n/δ)/n.
Since actions at layer t are taken uniformly at random, (<ref>) implies that
_π∼ Pt^π∘_tπ_∘_t+1t+1[ max_a∈( t(_t,a) - gt_(_t,a) )^2 ]≤ c· B^2A ·ln__t(1/n)+ln (n/δ)/n.
The desired result follows by observing that:
* For all (x,a)∈_t×, gt_(x,a)=Q^t+1_t(x,a), by <ref>.
* The term max_a∈( t(_t,a) - gt_(_t,a) )^2 in (<ref>) does not depend on the actions _t:h, and so the expectation _π∼ Pt^π∘_tπ_∘_t+1t+1· can be simplified to _π∼ Pt^π·.
§.§ Main Guarantee for With Non-Negative Rewards
We now state and prove the main guarantee for used within <ref>, which is stated with respect to the extended MDP defined in <ref>. This result requires non-negative rewards. For the rest of this section, we make use of the extended MDP notation and definitions introduced in <ref>. In addition, given non-negative reward functions r_1:h×→_≥ 0, we define their extensions r̅_1:h in as
r̅_t(x,a){[ r_t(x,a), (x,a)∈_t×; 0, if x= or a=. ].
With this, we now state the guarantee of .
Let α, δ,η∈(0,1), B>0, and h∈[H] be given. Consider reward functions r_1:h: ×→_≥ 0, function classes _1:h, policy distribution P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref> with respect to the true MDP), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,η)-randomized policy cover relative to _η for layer t in (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> (when applied to the true MDP), satisfies the following guarantee for r̅_1:h as in (<ref>):
max_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] ≤^[∑_t=1^hr̅_t(_t,_t)] + _(n,δ),
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define extensions of Q-functions to the extended MDP using the extended rewards r̅_1:h in (<ref>); for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t in the extended MDP with respect to the extended rewards r̅_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r̅_t(x,a)+^π[.∑_ℓ=t+1^hr̅_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
Note that for any partial policy π∈^t+1:h that never takes the terminal action, we have
Q^π_t(x,a)= {[ Q^π_t(x,a)≥ 0, if (x,a)∈_t ×,; 0 , if x = or a = , ].
where the fact that Q^π_t(·,·)≥ 0 follows because the rewards are non-negative. Further, for the function ĝt in <ref>, we define its (clipped) extension
g̅t(x,a){[ max(0,ĝt(x,a)), if (x,a)∈_t ×,; 0 , if x = or a = . ].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH),
where π_⋆∈_π∈_η^π[∑_t=1^hr̅_t(_t,_t)] is the optimal policy with respect to the truncated policy set _η (definition in <ref>) and Q^π_t is the Q-function defined in (<ref>). Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref>) and the union bound to obtain the desired result.
Let π_⋆∈_π∈_η^π[∑_ℓ=1^h r̅_ℓ(_ℓ,_ℓ)]. Observe that the following properties hold:
* For all x∉_t,η(_η), π_⋆(x)= (by definition of _η); and
* For all policies π∈^t+1:h that never take the terminal action, Q^π_t(·,)≡ 0 ≤min_a∈, y∈_tQ^π_t(y,a) (see (<ref>)).
As a result, we have that for any t∈[h] and _t,η_t,η(_η),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤^π_⋆[ 𝕀{_t ∈_t,η}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
= ^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,η}·(Q^t+1_t(_t,π_⋆(_t))-g̅t(_t,π_⋆(_t)) + g̅t(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the facts that:
* t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>).
* g̅t(·, )≡ 0 ≤g̅t(·, a), for all a∈, by definition of g̅t in (<ref>).
Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
≤ 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ],
= 2 ·^π_⋆[𝕀{_t ∈_t,η}·max_a∈| Q^t+1_t(_t,a)-g̅t(_t,a)| ], (since Q^t+1_t(·,)≡g̅t(·,)≡ 0)
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,η}·max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^(x) ν̅(x)),
≤ 2 √(α^-1∫__t𝕀{x ∈_t,η}·max_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 _π∼ Pt[^π(x)] ν̅(x)), (justified below)
≤ 2 √(α^-1_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-g̅t(x,a))^2 ^π(x) ν̅(x)]), (Fubini's theorem)
= 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-g̅t(_t,a))^2 ]),
= 2√(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-max(0,t(_t,a)))^2 ]),
≤ 2 √(α^-1·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,η)-cover relative to _η for layer t in and π_⋆∈_η, and (<ref>) follows because:
* The policies in the support of Pt never take the terminal action; and
* | Q^t+1_t(x',a')-t(x',a')| = | Q^t+1_t(x',a')-max(0,g̅t(x',a'))|, ∀ (x',a')∈_t× (see (<ref>) and (<ref>)).
Finally, (<ref>) follows by the fact that the Q-functions are non-negative (since the rewards are non-negative), and so replacing max(0,ĝt(_t,a)) by ĝt(_t,a) on the right-hand side of (<ref>) only increases the value of the latter.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋃_t=1^h_t, we have that
^[∑_t=1^hr̅_t(_t,_t)] - ^[∑_t=1^hr̅_t(_t,_t)] ≤ 2H α^-1/2_(n,δH).
The desired result follows from the union bound, which gives []≥ 1-δ.
Let π,∈ be policies, and assume that π never takes the terminal action. Let Q_t^π be defined as in (<ref>). Then for any h≥ 1,
^[ ∑_t = 1^h r̅_t(_t, _t) ] - ^π[ ∑_t = 1^h r̅_t(_t, _t) ] = ∑_t= 1^h ^[Q_t^π(_t, (_t)) - Q_t^π(_t, π(_t)) ].
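The lemma is the standard performance difference identity; the following self-contained sketch checks it exactly in a small random tabular MDP (illustrative only; no terminal action is involved, so it is verified for ordinary policies):

```python
import numpy as np

rng = np.random.default_rng(3)
H, S, A = 3, 4, 2
P = rng.dirichlet(np.ones(S), size=(H, S, A))      # P[h, x, a]: next-state law
r = rng.uniform(-1.0, 1.0, size=(H, S, A))
pi = rng.integers(A, size=(H, S))                  # comparator policy
pih = rng.integers(A, size=(H, S))                 # the "hat" policy

def q_values(policy):
    """Exact Q-functions of a deterministic policy by backward induction."""
    Q = np.zeros((H, S, A))
    for h in reversed(range(H)):
        Q[h] = r[h]
        if h + 1 < H:
            V = Q[h + 1][np.arange(S), policy[h + 1]]
            Q[h] += P[h] @ V
    return Q

d = np.zeros((H, S)); d[0, 0] = 1.0                # occupancies of pih from x_1 = 0
for h in range(H - 1):
    for x in range(S):
        d[h + 1] += d[h, x] * P[h, x, pih[h, x]]

Qpi = q_values(pi)
lhs = q_values(pih)[0, 0, pih[0, 0]] - Qpi[0, 0, pi[0, 0]]   # V^pih - V^pi
rhs = sum(d[h, x] * (Qpi[h, x, pih[h, x]] - Qpi[h, x, pi[h, x]])
          for h in range(H) for x in range(S))
assert np.isclose(lhs, rhs)                        # the performance difference identity
```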
§.§ Main Guarantee for With Signed Rewards
We now state and prove a guarantee for in the true MDP , when invoked with signed rewards. We make use of the following lemma, which bounds the total probability mass for the set of states that are not reachable with sufficiently high probability.
For any t∈[H], it holds that
sup_π∈^π[_t ∈_t ∖_t,η()] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(), we have that
∀ x∈_t ∖_t,η(), sup_π∈ d^π(x) ≤η·μ^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(), we obtain
sup_π∈^π[_t ∈_t ∖_t,η()] = sup_π∈∫__t ∖_t,η() d^π(x) ν(x),
≤η·∫__t ∖_t,η()μ^⋆_t(x)ν(x), (by (<ref>))
≤η·∫__tμ^⋆_t(x)ν(x),
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
With this, we now state the guarantee of .
Let α, δ,∈(0,1), B,B_1:h>0, and h∈[H] be given. Consider reward functions r_1: _1×→ [-B_1,B_1],…,r_h: _h×→ [-B_h,B_h], function classes _1:h, distributions over policies P1:h, and a parameter n≥ 1 satisfying the following properties:
* The function classes _1:h, where _t⊆{g: _t×→} for t∈[h], realize the reward functions r_1:h (in the sense of <Ref>), and all functions in _1:h have range uniformly bounded by B.
* For each 1 ≤ t ≤ h, it holds that Pt is a (α,)-randomized policy cover for layer t (see <ref>).
Then, with probability at least 1 - δ, the policy = (h, r_1:h, _1:h, P1:h, n) produced by <ref> satisfies the following guarantee:
max_π∈^π[∑_t=1^hr_t(_t,_t)] ≤^[∑_t=1^hr_t(_t,_t)] + _(n,δ) + 2 h d^3/2·∑_t=1^h B_t,
where _(n,δ) c·H √(α^-1 B^2 A n^-1· (max_t∈[h]ln__t(1/n)+ln (n/δ))) and c>0 is an absolute constant.
First, we define the Q-functions for the reward r_1:h; for all t∈[h] and all π∈^t+1:h, define the Q-function at layer t with respect to the rewards r_1:h and partial policy π:
∀ (x,a)∈_t ×, Q^π_t(x,a) r_t(x,a)+^π[.∑_ℓ=t+1^hr_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
To begin, we will show that for any t∈[h] and _(·,·) as in <ref>, there is an event _t of probability at least 1- δ/H under which the learned partial policies t,t+1 are such that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] ≤ 2 α^-1/2_(n,δH) + 2 d^3/2·∑_ℓ=1^h B_ℓ,
where π_⋆∈_π∈^π[∑_t=1^h r_t(_t,_t)] is the optimal policy. Once we establish (<ref>) for all t∈[h], we will apply the performance difference lemma (<ref> instantiated in the true MDP) and the union bound to obtain the desired result.
Let π_⋆∈_π∈^π[∑_ℓ=1^h r_ℓ(_ℓ,_ℓ)]. We have that for any t∈[h] and _t,_t,(),
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))]
= ^π_⋆[𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ]
+ ^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ].
We now bound the last term in (<ref>). Note that by the range assumption on the rewards r_1:h and the definition of the Q-function, we have Q^π_t(x,a)∈ [-∑_ℓ=t^h B_ℓ, ∑_ℓ=t^h B_ℓ], for all π∈^t+1:h. Thus, we have
^π_⋆[𝕀{_t ∈_t ∖_t,}·( Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))) ] ≤ 2^π_⋆[_t ∈_t ∖_t,] ·∑_ℓ=t^h B_ℓ,
≤2 · d^3/2·∑_ℓ=1^h B_ℓ,
where the last inequality follows by <ref>.
Plugging (<ref>) into (<ref>) and using that B_1:h≥ 0 implies that
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤^π_⋆[ 𝕀{_t ∈_t,}·( Q^t+1_t(_t,π_⋆(_t)) - Q^t+1_t(_t, t(_t))) ],
= ^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t)))],
≤^π_⋆[ 𝕀{_t ∈_t,}·(Q^t+1_t(_t,π_⋆(_t))-ĝt(_t,π_⋆(_t)) + ĝt(_t,t(_t))- Q^t+1_t(_t, t(_t)))],
where the last inequality follows by the fact that t(x)∈_a∈t(x,a), for all x∈_t, by the definition of t in (<ref>). Continuing from the previous display, we have
^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))] - 2 d^3/2·∑_ℓ=1^h B_ℓ
≤ 2 ·^π_⋆[𝕀{_t ∈_t,}·max_a∈| Q^t+1_t(_t,a)-ĝt(_t,a)| ],
≤ 2 ·√(^π_⋆[𝕀{_t ∈_t,}·max_a∈( Q^t+1_t(_t,a)-ĝt(_t,a))^2 ]), (Jensen's inequality)
= 2 √(∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^(x) ν(x)),
≤ 2 √(1/α∫__t𝕀{x ∈_t,}·max_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 _π∼ Pt[d^π(x)] ν(x)), (justified below)
≤ 2 √(1/α_π∼ Pt[ ∫__tmax_a∈( Q^t+1_t(x,a)-ĝt(x,a))^2 d^π(x) ν(x)]), (Fubini's theorem)
= 2√(1/α·_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]),
where (<ref>) follows from the fact that Pt is an (α,)-randomized policy cover for layer t.
Now, from <ref> and the fact that _1:h realize r_1:h, we have that for any t∈[h], there is an absolute constant c>0 (independent of t and other problem parameters) and an event _t of probability at least 1-δ/H under which the solution t to the least-squares regression problem on (<ref>) of <ref> satisfies
_π∼ Pt^π[max_a∈( Q^t+1_t(_t,a)-t(_t,a))^2 ]≤_(n,δH)^2,
where _(·,·)^2 is defined as in <ref>. Combining (<ref>) with (<ref>) establishes (<ref>) under the event _t.
To conclude the proof, we note that by the performance difference lemma (<ref>), we have
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)]
= ∑_t=1^h ^π_⋆[Q^t+1_t(_t,π_⋆(_t))- Q^t+1_t(_t, t(_t))].
Thus, under the event ⋃_t=1^h_t, we have that
^[∑_t=1^h r_t(_t,_t)] - ^[∑_t=1^h r_t(_t,_t)] ≤ 2 H α^-1/2_(n,δH) +2 hd^3/2·∑_t=1^h B_t.
The desired result follows from the union bound, which gives []≥ 1-δ.
§ APPLICATION TO REWARD-BASED RL
In this section, we show how the output P1:H of (<ref>), which is a (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (see <ref>), can be used to optimize downstream reward functions r_1:H; our treatment also applies to (for Ph(Ψh) for all h∈[H]). Since the output of is a randomized policy cover, one way to optimize the sum of rewards S_H ∑_h=1^H r_h is by first generating trajectories using policies in P1:H, then applying an offline RL algorithm, e.g. Fitted Q-Iteration (FQI) <cit.>, to optimize S_H. It is also possible to use with the randomized policy cover P1:H to achieve the same goal. We will showcase the latter approach, since we can make use of the guarantees for given in <ref>.
As in <ref>, we assume access to function classes _1:H, where _h ⊆{g: _h×→} for each h∈[H], that realize the rewards r_1:H in the following sense: for all h∈[H] and all π∈^h+1:H,
Q_h^π∈_h, where Q^π_h(x,a) r_h(x,a)+^π[.∑_t=h+1^H r_t(_t,_t) | _h=x,_h=a].
Note that when the reward functions r_1:H are linear in the feature map ; that is, when for all h∈[H] and (x,a)∈_h×,
r_h(x,a)=θ_h^⊤(x,a)
for some θ_h∈(1) (this is a common assumption in the context of RL in Low-Rank MDPs <cit.>), then the function classes _1:H, where
∀ h∈[H], _h = {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ , w ∈(2H√(d))},
realize r_1:H. We show this claim next.
Under <ref>, the function classes _1:H in (<ref>) realize the reward functions in (<ref>). Furthermore, the functions in _1:H are uniformly bounded by 2√(d)H, and ln__h()≤ln |Φ|+ d ln (2√(d)H /), for all h∈[H], where we recall that _() denotes the -covering number of in ℓ_∞-distance (see <ref>).
For h=H, we clearly have that for any π∈^H:H, Q^π_H(·,·)=r_H(·,·)∈_H. For h<H and π∈^h+1:H, we have, by the low-rank MDP structure and the expression of the rewards in (<ref>), that
Q^π_h(x,a) =r_h(_h,_h)+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·ϕ^⋆_h(x,a)^⊤μ_h+1^⋆(y) ν (y),
= ϕ^⋆_h(x,a)^⊤( θ_h + ∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y)).
Now, by the fact that ^π[∑_t=h+1^H r_t(_t,_t)|_h+1=y,_h+1=π(y)] ∈ [-H-h,H-h], for all y∈_h+1 (since the rewards take values between -1 and 1 thanks to ϕ(·,·),θ_h∈(1), for all h∈[H]), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_h+1→0,1, *∫__h+1[h+1](y)g(y) ν(y)≤√(d)), we have that
w_h θ_h+∫__h+1^π[.∑_t=h+1^H r_t(_t,_t) | _h+1=y,_h+1=π(y)] ·μ_h+1^⋆(y) ν (y) ∈(2H√(d)).
This, together with (<ref>) and the fact that [h]∈Φ (by <ref>), implies that Q_h^π∈_h. The bound on the covering number __h(), follows from a standard bound on the covering number of the ball (2H√(d)) <cit.>.
Combining <Ref> with <Ref> results in the following guarantee for .
Let α,,δ∈(0,1) be given and fix h∈[H]. Let be the output of when given input (H, r_1:H, _1:H, P1:H, n), where
* The reward functions r_1:H are as in (<ref>), with θ_1:H∈(1).
* The function classes _1:H are as in (<ref>).
* For each 1≤ h≤ H, it holds that Ph is a (α,)-randomized policy cover for layer h (see <ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈^π[∑_h=1^H r_h(_h,_h)]≤^[∑_h=1^H r_h(_h,_h)] + c H^2 √(d A · (d log(2n √(d)H) +ln (n|Φ|/δ)) /α n ) + 2 H^2 d^3/2,
for a sufficiently large absolute constant c>0.
By using that the distributions returned by are an (η^3/· d^6 A^2, )-policy cover for η = /(4 H d^3/2) and = (A,H,d, log(|Φ|/δ)) sufficiently large (<ref>), we obtain the claimed sample complexity for <ref> in <ref>.
§ STRUCTURAL RESULTS FOR EXTENDED LOW-RANK MDP
In this section, we present some structural results involving the extented MDP and truncated policy class defined in <ref>. First, we recall the definition of the truncated policy class. Given a parameter η>0, let _0,η, and for each h≥ 1, let _h, η be the set of policies defined by
π∈_h,η∃π'∈_h-1,η : ∀ t ∈[H], ∀ x ∈_t, π(x) = {[ π'(x), if t≠ h, or t=h and x ∈_h,η(_h-1,η),; , otherwise, ].
where for a set of policies Π'⊆, we let
_h, η(Π') {x∈_h | max_π∈Π'^π(x) ≥μ̅_h^⋆(x)·η. }.
Note that this matches the definition in (<ref>) because [μ̅^⋆_h(x)]_d+1=0, for all x≠_h. Finally, we let _η_H,η.
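As a minimal sketch, the truncation in (<ref>) can be viewed as wrapping a policy from the previous class; here `reachable` is a hypothetical membership test for _h,η(_h-1,η), and the terminal action is represented by a sentinel value.

```python
def truncate(pi_prime, h, reachable, TERMINAL="terminal"):
    """Layer-h truncation of pi_prime: identical everywhere, except that the
    terminal action is taken at layer h on insufficiently reachable states."""
    def pi(t, x):
        if t == h and not reachable(x):
            return TERMINAL
        return pi_prime(t, x)
    return pi
```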
The next lemma bounds the probability of the set of states that are not reachable with sufficiently high probability.
Under the normalization assumption (<ref>), we have that for any t∈[H],
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] ≤η· d^3/2.
Fix t∈ [H]. By definition of _t,η(_η), we have that
∀ x∈_t ∖_t,η(_η), sup_π∈_η^π(x) ≤η·^⋆_t(x).
Thus, integrating over x∈_t ∖_t,η(_η), we obtain
sup_π∈_η^π[_t ∈_t ∖_t,η(_η)] = sup_π∈_η∫__t ∖_t,η(_η)^π(x) (x),
= η·∫__t ∖_t,η(_η)μ̅^⋆_t(x)(x), (by (<ref>))
≤η·∫__tμ̅^⋆_t(x)ν̅(x),
= η·∫__tμ^⋆_t(x)ν(x), (since [_t(x)]_d+1=0, ∀ x ≠_t)
≤η d^3/2,
where the last inequality follows by <ref>; this is a consequence of the normalization assumption (<ref>).
The next lemma generalizes <cit.> to s.
For all h ∈[H], x∈_h, and ℓ∈{h,…,H}, we have max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x). Further,
∀ x∈_h, max_π∈_h-1, η^π(x) = max_π∈_η^π(x) .
We will show that for all ℓ∈{h,…,H},
∀ x∈_h, max_π∈_ℓ-1,η(x)= max_π∈_ℓ,η(x).
This implies (<ref>) by summing both sides of (<ref>) over ℓ=h,…, H, telescoping, and using that _η=_H, η. To prove the result, let ℓ∈{h,…,H}, x∈_h, and π̃∈_π'∈_ℓ-1,η^π'(x). Further, let π∈_ℓ, η be as in (<ref>) with π'=π̃. In this case, by (<ref>), we have π̃(x')=π(x') for all x'∈_τ and τ∈[ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ-1,η^π̆(x) =^π̃(x)= ^π(x) ≤max_π̆∈_ℓ, η^π̆(x).
We now show the inequality in the other direction. Let ℓ∈{h,…,H}, x∈_h, and π̃∈_π̆∈_ℓ,η^π̆(x). Further, let π'∈_ℓ-1, η be as in (<ref>) for π = π̃. In this case, by (<ref>), we have π̃(x')=π'(x') for all x'∈_τ and τ∈[ℓ-1]. Using this and the fact that x∈_h and ℓ≥ h, we have
max_π̆∈_ℓ,η^π̆(x) =^π̃(x)= ^π'(x) ≤max_π̆∈_ℓ-1, η^π̆(x).
This shows (<ref>) and completes the proof.
Using <ref> and the definition of _h,η(·) in (<ref>), we obtain the following corollary.
For all h∈[H], it holds that
_h,η(_h-1,η) = _h,η(_η).
The next lemma quantifies the “cost of truncation” incurred by optimizing reward functions using policies in the truncated class _η instead of the unrestricted class.
Let η∈(0,1), and B_1:H>0, and consider reward functions r_1: _1×→ [-B_1,B_1],…,r_H: _H×→ [-B_H,B_H]. We have
sup_π∈_η^π[ ∑_h=1^H r̅_h(_h,_h) ] ≥sup_π∈^π[ ∑_h=1^H r̅_h(_h,_h) ] - 2 H d^3/2η∑_h=1^H B_h,
where, for each h∈[H], r̅_h(x,a)=r_h(x,a) for all (x,a)∈_h×, and r̅_h(x,a)=0 when x=_h or a=.
Let r̅_1:H be the “extended” reward functions as in the lemma's statement. Let h∈[H] and π_h-1∈_π∈_h-1,η^π[∑_h=1^H r̅_h(_h,_h)]. Further, define π_h as π∈_h,η in (<ref>) with π'=π_h-1. Note that since for all t∈[h-1] and x∈_t, π_h(x)=π_h-1(x) (by (<ref>)), we have
^π_h-1[∑_t=1^h-1r̅_t(_t,_t)] = ^π_h[∑_t=1^h-1r̅_t(_t,_t)].
On the other hand, for _h,η_h,η(_h-1,η) we have
^π_h-1[∑_t=h^H r̅_t(_t,_t)]
= ^π_h-1[∑_t=h^H r̅_t(_t,_t)],
= ^π_h-1[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)]+ ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] ,
= ^π_h[ 𝕀{_h ∈_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] , (by definition of _h,η and π_h)
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∉_h,η}·∑_t=h^H r̅_t(_t,_t)],
= ^π_h[ ∑_t=h^H r̅_t(_t,_t)] - ^π_h[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)] + ^π_h-1[ 𝕀{_h ∈_h∖_h,η}·∑_t=h^H r̅_t(_t,_t)],
where the last equality follows by the fact that I) if _h =_h, then _t=_t for all t∈{h,…,H}, and II) r̅_t(,·)≡ 0, for all t∈{h,…,H}. Now, using the range assumption on the rewards, we get
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)] +(^π_h[_h ∈_h ∖_h,η] + ^π_h-1[_h ∈_h ∖_h,η]) ∑_t=h^H B_t.
On the other hand, by <ref> and the fact that π_h-1∈_h-1,η and π_h∈_h,η, we have that
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η].
Furthermore, by <ref>, we have _h,η = _h,η(_η). Combining this with (<ref>) and <ref>, we get
^π_h-1[_h ∈_h ∖_h,η] ∨^π_h[_h ∈_h ∖_h,η]≤sup_π∈_η^π[_h ∈_h ∖_h,η(_η)] ≤η d^3/2.
Plugging this into (<ref>) and using (<ref>) implies that
^π_h-1[∑_t=h^H r̅_t(_t,_t)] ≤^π_h[ ∑_t=h^H r̅_t(_t,_t)]+ 2 η d^3/2∑_h=1^H B_h.
Summing both sides of (<ref>) for h=1,…, H, telescoping, and using that _0,η= and _H,η= _η, we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2∑_h=1^H B_h.
Using this, we now prove <ref>, which allows us to transfer any guarantees in the extended MDP and truncated policies _η back to the original MDP with the unrestricted policy class .
Fix h∈[H], and let y∈_h be such that μ_h^⋆(y)>0. To prove <ref>, we will instantiate <ref> with rewards (r_t) given by
r_t(x,a) = {[ μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ^⋆_h-1(x,a), if t=h and (x,a)∈_h×,; 0, otherwise. ].
We define the extended rewards (r̅_t) such that for all t∈[H], r̅_t(x,a)=r_t(x,a) for all (x,a)∈_t×, and r̅_t(x,a)=0 when x=_t or a=. By applying <ref> (with B_h =1 and B_t=0 for all t≠ h) and using that |r_h(·,·)|≤ 1 (since ϕ^⋆_h-1(·, ·)≤ 1), we get
max_π∈^π[∑_t=1^Hr̅_t(_t,_t)] ≤max_π∈_η^π[∑_t=1^Hr̅_t(_t,_t)] + 2H η d^3/2.
On the other hand, the definition of (r_t) implies that for any π∈,
^π[∑_t=1^Hr̅_t(_t,_t)] = μ_h^⋆(y)^⊤/μ_h^⋆(y)ϕ̃^⋆,π_h-1,
where ϕ̃^⋆,π_h-1^π[ϕ̃^⋆_h-1(_h-1,_h-1)] and ϕ̃^⋆_h-1 is the restriction of ^⋆_h-1 to its first d coordinates (^⋆_h-1 is defined in <ref>). Now, since y≠_h, we have [μ̅_h^⋆(y)]_d+1=0, and so μ^⋆_h(y)^⊤ϕ̃^⋆,π_h-1= ^⋆_h(y)^⊤ϕ̅^⋆, π_h-1. Thus, plugging this into (<ref>) and using <ref>, we get
∀π∈, ^π[∑_t=1^Hr̅_t(_t,_t)] = _h^⋆(y)^⊤/μ_h^⋆(y)ϕ̅^⋆,π_h-1= ^π(y)/μ^⋆_h(y).
Plugging this into (<ref>) and using that ⊆, we have
max_π∈d^π(y)/μ^⋆_h(y) =max_π∈^π(y)/μ^⋆_h(y)≤max_π∈^π(y)/μ^⋆_h(y)≤max_π∈_η^π(y)/μ^⋆_h(y) + 2Hη d^3/2.
Now, suppose that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. By (<ref>), this implies that
max_π∈_η^π(y)/μ^⋆_h(y)≥ 2H η d^3/2≥η,
and so since P is a (α,η)-randomized policy cover relative to _η for layer t in , we have that
max_π∈_η^π(y)/μ^⋆_h(y)≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)].
Combining this with (<ref>) implies that
max_π∈d^π(y)/μ^⋆_h(y) ≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] + 2Hη d^3/2,
≤α^-1_π∼ P^π[d̅^π(y)/μ^⋆_h(y)] +1/2max_π∈d^π(y)/μ^⋆_h(y),
where the last inequality follows by the fact that y is such that max_π∈d^π(y)/μ^⋆_h(y)≥ 4 H η d^3/2. Rearranging the previous display and using that ^π(·)≡ d^π(·) for all policies π that never take the terminal action, we get:
α/2max_π∈d^π(y)/μ^⋆_h(y)≤_π∼ P^π[d^π(y)/μ^⋆_h(y)].
This shows that P is a (α/2, 4 Hη d^3/2)-randomized policy cover.
§ HELPER LEMMAS
For any h∈{2,…,H}, x∈_h, and π∈, we have
d^π(x) = [h](x)^⊤ϕ^⋆, π_h-1, where ϕ^⋆, π_h-1^π[ϕ^⋆_h-1(_h-1,_h-1)],
Let δ∈(0,1) and H≥ 1 be given. If a sequence of events _1,…,_H satisfies [_h|_1,…,_h-1]≥1-δ/H for all h∈[H], then
[_1:H]≥1-δ.
By the chain rule, we have
[_1:H] = ∏_h∈[H][_h|_1,…,_h-1] ≥∏_h∈[H] (1-δ/H) =(1-δ/H)^H ≥ 1-δ.
The normalization assumption in (<ref>) has the following useful implication.
For any h∈[H], if the normalization condition (<ref>) holds, then
∫__hμ^⋆_h(x)ν(x) ≤ d^3/2.
For each i∈[d], if we define g(x)sgn([μ^⋆_h(x)]_i), we have
∫__h |[μ^⋆_h(x)]_i| ν (x) = ∫__h g(x) · [μ^⋆_h(x)]_i ν (x),
= √((∫__h g(x) · [μ^⋆_h(x)]_i ν (x))^2),
≤√(∑_j∈[d](∫__h g(x) · [μ^⋆_h(x)]_j ν (x))^2),
= ∫__h g(x) ·μ^⋆_h(x)ν(x) ,
≤√(d).
Therefore, we have
∫__hμ^⋆_h(x)ν (x)≤∑_i∈[d]∫__h |[μ^⋆_h(x)]_i| ν (x)≤ d^3/2.
Next, we show that the coverability constant <cit.> for s is bounded by d.
For all h∈[H], there exists a measure ρ_h on _h × such that
sup_(x,a)∈_h×sup_π∈d^π(x,a)/ρ_h(x,a)≤ d.
Consider layer h+1. By definition for x ∈_h+1, we have that for any
π, d^π(x) = ^π[
μ_h+1^⋆(x)^⊤ϕ_h^⋆(_h, _h)]=μ_h+1^⋆(x)^⊤ϕ_h^⋆, π. Let
Ψ{π_1, …, π_d} be a barycentric
spanner for the set {ϕ^⋆, π_h |π∈} (see <ref>). Let
π_x denote the policy maximizing d^π(x) (if no such
maximizer exists, we may pass to a maximizing sequence). By definition of a barycentric spanner, there exist β_1, …, β_d ∈ [-1, 1] such that ϕ_h^⋆, π_x = ∑_i=1^d β_i ϕ_h^⋆, π_i, and so
d^π_x(x) = ∑_i = 1^d β_i
μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i≤∑_i = 1^d *β_iμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
≤ d ·∑_i = 1^d 1/dμ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i
=d ·∑_i = 1^d 1/d
d^π_i(x),
where we have used that μ_h+1^⋆(x)^⊤ϕ_h^⋆,
π_i is non-negative.
Thus, by defining ρ_h+11/d∑_i=1^d d^π_i, we obtain the desired result.
Let ε>0 and B>0 be given. Fix h∈[H] and consider
a sequence of policies π1:K∈ and functions δ1:K:_h×→ [-B,B] such that for all k∈{2,…,K},
^k-1[ δk(_h,_h)^2 ] ≤ε^2, where k-11/k-1∑_ℓ=1^k-1πℓ. Then,
min_k∈[K]^πk[ δk(_h,_h) ] ≤ε√(2 d ln K) + 2 d B K^-1.
Define k-1(·,·) ^k-1[d^π(·,·)], if k∈{2,…,K},
and k-1(·,·)≡ 0 if k=1. Further, let
ρ̃k(·,·) d/kρ_h(·,·), where
ρ_h(x,a) is as in <ref>. Finally, for any (x,a)∈_h ×, we define the “burn-in” index
τ_h(x,a) min{ k ∈[K] |d̅k-1(x,a) > (k-1) · d ·ρ_h(x,a) },
and note that τ_h(·,·)>1. Since the coverability constant is bounded by d in s (see <ref>), we have the following facts which follow from the derivations in <cit.>:
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] ≤ 2d B,
∀ (x,a)∈_h ×,∀ k≥τ_h(x,a), d̅k-1(x,a) + ρ̃k(x,a) ≤ 2d̅k-1(x,a).
With this, we have
∑_k=1^K ^πk[ δk(_h,_h) ]
= ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) > k }·δk(_h,_h)] + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
≤ 2 d B + ∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ,
where the last inequality uses (<ref>). We now bound the second term on the right-hand side of (<ref>). We have
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)]
=∑_k=1^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k } ,
=∑_k=2^K ∑_(x,a)∈_h×dk(x,a) δk(x,a) ·𝕀{τ_h(x,a) ≤ k }, (since τ_h(·,·)>1)
= ∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)(k-1(x,a)/k-1(x,a))^1/2δk(x,a)·𝕀{τ_h(x,a) ≤ k } ,
≤√(∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2 ·𝕀{τ_h(x,a) ≤ k }/k-1(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2), (Cauchy Schwarz)
≤√(∑_k=2^K ∑_(x,a)∈_h×2d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a))·√(∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) ·δk(x,a)^2),
where the last step follows by (<ref>). For the second term in (<ref>), we have
∑_k=1^K ∑_(x,a)∈_h×k-1(x,a) δk(x,a)^2 ≤ K ε^2,
by (<ref>).
On the other hand, for the first term on the right-hand side of (<ref>), we have
∑_k=2^K ∑_(x,a)∈_h×d^πk(x,a)^2/k-1(x,a) + ρ̃k(x,a) ≤∑_k=2^K ∑_(x,a)∈_h×max_ℓ∈ [K] d^πℓ(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a) ,
≤∑_k=2^K ∑_(x,a)∈_h× d ρ_h(x,a)d^πk(x,a)/k-1(x,a) + ρ̃k(x,a),
≤∑_k=1^K ∑_(x,a)∈_h×d ρ_h(x,a)k · d^πk(x,a)/∑_ℓ∈[k-1] d^πℓ(x,a) + dρ_h(x,a),
≤ K d∑_(x,a)∈_h×ρ_h(x,a) ln K,
=K dln K,
where (<ref>) follows by <ref>
and <cit.>. Plugging (<ref>)
and (<ref>) into (<ref>), we get that
∑_k=1^K ^πk[ 𝕀{τ_h(_h,_h) ≤ k }·δk(_h,_h)] ≤ Kε√(2 d ln K).
Combining this with (<ref>), we get
K ·min_k∈[K]^πk[ δk(_h,_h) ] ≤∑_k=1^K ^πk[ δk(_h,_h) ] ≤ Kε√(2 d ln K) + 2 d B.
This implies the desired result.
The following is a restatement of Theorem 2.2 in <cit.>.
Let A, E∈^d× d. If A is non-singular and r A^-1E_ satisfies r< 1, then A+E is non-singular and (A+E)^-1- A^-1_≤E_A^-1^2_/(1-r).
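The bound is easy to check numerically; the snippet below does so with arbitrary, illustrative matrices (a sanity check only, not part of the analysis).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # well-conditioned A
E = 1e-3 * rng.standard_normal((d, d))             # small perturbation

Ainv = np.linalg.inv(A)
r = np.linalg.norm(Ainv @ E, 2)                    # spectral norm of A^{-1}E
assert r < 1                                       # hypothesis of the theorem

lhs = np.linalg.norm(np.linalg.inv(A + E) - Ainv, 2)
rhs = np.linalg.norm(E, 2) * np.linalg.norm(Ainv, 2) ** 2 / (1 - r)
print(lhs <= rhs)  # prints True
```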
PART:
Analysis of
§ ORGANIZATION OF PART:ANALYSISSPANNER
<ref> of the appendix contains the proof
of <ref>, the guarantee for <ref>. This
section is organized as follows:
* In <ref>, we give an overview of (<ref>) and highlight its key differences to (<ref>).
* <ref> contains the proof of <ref>.
* <ref> provides generic guarantees
for the subroutine of ,
which are used within the proof of <ref>.
* Finally, <ref> compares the
reachability assumption used in the analysis of
to other notions used throughout the literature on RL in Low-Rank MDPs.
We note that the analysis of <ref> in <ref> also makes use of the guarantee of from <ref> in <ref>.
§ : ALGORITHM OVERVIEW
The algorithm is presented in <ref>. The
algorithm proceeds by building a policy cover layer-by-layer in an
inductive fashion. The structure of the algorithm is similar to that
of , with the main difference being that instead of computing
an optimal design, the algorithm computes a barycentric spanner
for the feature map.
In more detail, for each layer h≥2, uses a policy cover
Ψh built at a previous iteration within the
(<ref>) subroutine to produce a
feature map h that approximates . Using this feature map, the algorithm invokes a second subroutine, (<ref>) to produce a collection of policies π_1,…,π_d that act as a barycentric spanner for the
feature map, ensuring maximal coverage; given these policies, a new policy cover for layer h+2 is formed via Ψh+2={π_i∘_h+1π_ : i∈[d] }. To invoke the
subroutine, makes use of for policy optimization and
(<ref>) for estimation of vector-valued
functionals. Compared to , there is no inner loop (i.e.,
K=1); this is facilitated by the reachability assumption.
In what follows, we expand on the main differences between and , focusing on the role of barycentric spanners.
Barycentric spanners
uses the notion of a barycentric spanner
<cit.> as an efficient basis for exploration. We
define a barycentric spanner for an abstract set as follows.
Given a set ⊂^d such that () = ^d, we say that a set { w_1, …, w_d }⊆ is a (C, ε)-approximate barycentric spanner for if for every w ∈, there exist β_1, …, β_d ∈ [-C, C] such that w - ∑_i = 1^d β_i w_i≤ε.[Note that our definition is a slight generalization of <cit.>; the latter is recovered with ε = 0.]
The following result shows that for Low-Rank MDPs, barycentric spanners
offer a compact representation for policy covers.
Suppose <ref> holds with η>0. If Ψ⊆ is a collection of policies such that {^π[
(_h, _h) ]|π∈Ψ}⊆^d is a (C,
ε)-approximate barycentric spanner for _h{^π[
(_h, _h) ]|π∈} with ε≤η/2, then Ψ is an (α,0)-policy cover for layer h+1 with α = (2dC)^-1.
<Ref>, proven in <Ref>, shows that to compute a policy
cover for layer h+1, it suffices to find a barycentric spanner for the
set _h{^π[
(_h, _h) ]|π∈}⊆^d. Similar to the approach to optimal design computation in
, we show that barycentric spanner computation can be
efficiently reduced
to policy optimization:
* Using , a novel adaptation of the classical algorithm of
<cit.>, it holds that for any ϕ∈Φ,
spanner computation for the set {^π[
ϕ(_h, _h) ]|π∈} can be performed efficiently whenever, for any θ∈(1), one can (approximately) solve linear optimization problems of the form
_π∈^π*θ^ϕ(_h,_h).
* Given access to policy covers Ψ1:h for layers 1 to h, one can efficiently solve the optimization problem in (<ref>) by
appealing to (<ref>).
To handle the fact that is unknown, <ref> computes policies π_1:d that induce a barycentric spanner for the set {^π[
h(_h, _h) ]|π∈}, where
h∈Φ is an estimated feature map produced by
. In what follows, we give a detailed overview of how the
subroutine achieves efficient spanner computation.
Barycentric spanner computation via approximate linear optimization
We consider an abstract framework for
barycentric spanner computation, which generalizes the problem faced
within . Suppose that we wish
to compute a spanner for an implicitly specified set
=*w^z_z∈⊆^d indexed by an abstract set
.
To allow for efficient spanner computation without resorting to
enumeration over the set , we assume access to two
oracles for the set , a linear optimization oracle :(1)→ and
an index-to-vector oracle :→^d. We assume that for some ε>0:
* For all θ∈^d with *θ=1, the output
ẑ_θ(θ) satisfies
θ^⊤w^ẑ_θ≥sup_z∈θ^⊤ w^z -ε.
* For all z∈, the output ŵ_z(z)
satisfies
ŵ_z - w^z≤ε.
The algorithm
(<ref>) computes a (C,ε)-approximate spanner for
using
(dlog(d/ε)) total calls to and . is an error-tolerant variant of the classical spanner computation algorithm of
<cit.>, which was originally introduced and
analyzed for
spanner computation with an exact linear optimization
oracle. Tolerance to approximation errors in the linear optimization oracle
is critical for our application to RL, where additive
errors will arise in sampling trajectories, as well as estimating
the feature maps ()_h∈[H]. achieves error tolerance by
perturbing the vectors returned by (θ) in the direction of
θ, which amounts to running the classical algorithm on an -fattening of , and is necessary in order to ensure that the approximation error of does not swamp the signal in directions θ in which is too “skinny.” This technique may be of independent interest; see <ref>
for additional details and formal guarantees.
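For concreteness, the following is a schematic NumPy implementation of the error-tolerant spanner routine described above. It is a sketch under assumptions: `lin_opt` and `index_to_vec` are user-supplied callables playing the roles of the two oracles, determinants are computed naively via cofactors, and no attempt is made to reproduce the exact constants or bookkeeping of the formal algorithm.

```python
import numpy as np

def robust_spanner(lin_opt, index_to_vec, d, C=2.0, eps=1e-3, max_sweeps=100):
    W = np.eye(d)            # columns i+1..d start as standard basis vectors
    Z = [None] * d           # indices z_1, ..., z_d to be returned

    def theta_for(i):
        # cofactor representation of the linear map w -> det(w, W_{-i})
        th = np.empty(d)
        for j in range(d):
            M = W.copy()
            M[:, i] = np.eye(d)[:, j]
            th[j] = np.linalg.det(M)
        return th

    def best_direction(th):
        nrm = np.linalg.norm(th)
        zp, zm = lin_opt(th / nrm), lin_opt(-th / nrm)
        wp, wm = index_to_vec(zp), index_to_vec(zm)
        if th @ wp >= -(th @ wm):           # +theta direction wins
            return zp, wp + eps * th / nrm  # perturb along the query direction
        return zm, wm - eps * th / nrm

    for i in range(d):                      # first pass: fill all columns
        Z[i], W[:, i] = best_direction(theta_for(i))

    for _ in range(max_sweeps):             # second pass: improve until stable
        improved = False
        for i in range(d):
            th = theta_for(i)
            z, w = best_direction(th)
            if abs(th @ w) > C * abs(th @ W[:, i]):
                Z[i], W[:, i] = z, w        # C-fold determinant improvement
                improved = True
        if not improved:
            break
    return Z, W
```

In the RL application below, `lin_opt` is implemented by policy optimization and `index_to_vec` by Monte Carlo estimation.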
Putting everything together Equipped with an estimated
feature map h from , applies
to the set {^π[h(_h,
_h)]|π∈} with
ε = and C = 2; that is, we plug in the learned
representation h for the true representation
.[Though the policies produced by the algorithm may not necessarily induce a spanner for _h= {^π[
(_h, _h) ]|π∈} (this would
require “point-wise” representation learning guarantees, which we do
not have), our analysis shows that they still suffice to build a policy cover for layer h+2.]
With this choice, implementing
entails (approximately) solving
_π∈^π[ θ^⊤h(_h, _h)]
for a given θ∈(1), and implementing the oracle
entails estimating
^π[h(_h, _h)]
for a given π∈.
We instantiate (π) as the Monte Carlo algorithm
(<Ref>). To
implement (θ), we invoke with the rewards
r_t(x,a;θ){[ h(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
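Schematically, this reduction can be packaged as a closure; `psdp` below is a hypothetical stand-in for the policy-optimization subroutine, and the returned callable has exactly the form assumed for the `lin_opt` oracle in the spanner sketch above.

```python
def make_lin_opt(psdp, phi_hat, h, H):
    """LinOpt(theta): optimize the scalar reward theta^T phi_hat at layer h."""
    def lin_opt(theta):
        def reward(t, x, a):
            return float(phi_hat(x, a) @ theta) if t == h else 0.0
        return psdp(reward, H)  # hypothetical call returning the argmax policy
    return lin_opt
```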
§ ANALYSIS: PROOF OF THM:SPANRLMAIN
In this section, we prove the main guarantee for (<ref>). First, we outline our proof strategy in <ref>. Then, in <ref> and <ref>, we present guarantees for the instances of (<ref>) and (<ref>) used within . We then combine these results in <ref> to complete the proof of <ref>. A self-contained guarantee for (<Ref>) is given in <Ref>.
§.§ Proof Strategy
Like the proof of <ref> for , the proof of <ref> is inductive. However, due to the assumption of reachability, the proof does not make use of the extended MDP analysis used in the proof of <ref>, making it somewhat simpler.
For fixed h, we assume that the policy set Ψ1:h+1 produced by satisfies the property:
Ψ1,…Ψh+1 are (1 Ad,0)-policy covers for layers 1 through h+1, and max_t∈[h+1]|Ψt|≤ d.
Conditioned on this claim, we show that with high probability, the set Ψh+2 is a (1/4 A d,0)-policy cover for layer h +2. To prove this, we use the inductive assumption to show that acts as an approximate linear optimization oracle over = {^π[ h(_h, _h) ] |π∈} (<Ref>). Using this, we then instantiate the guarantee of from <ref> with and instantiated with and . To conclude the proof of the inductive step, we use the main guarantee for together with the main guarantee for (<Ref>), along with a change of measure argument enabled by the assumption that Ψ1:h are policy covers (i.e. (<ref>)).
§.§ Guarantee for as a Subroutine for
We begin by showing that , as configured within , acts as an approximate linear optimization oracle as required by . In particular, we fix a layer h, assume that Ψ1:h+1 satisfy (<ref>), and apply the generic guarantees for in <Ref>.
Define function classes _1:h such that for each t∈[h],
_t {g:(x,a)↦ϕ(x,a)^⊤ w |ϕ∈Φ, w ∈(2√(d))}.
Given θ∈(1) and ϕ∈Φ, consider the reward functions r'_1:h(·,·;θ, ϕ) given by:
∀ (x,a)∈×, r'_t(x,a;θ,ϕ){[ ϕ(x,a)^⊤θ, for
t=h,; 0, otherwise. ].
With these rewards and function classes, we show that the output
= (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n),
where Pt(Ψt), for each t∈[h], approximately solves
max_π∈θ^π[ ϕ(_h, _h) ]
with high probability if n≥ 1 is sufficiently large. Note that this matches the choice of reward functions in (<ref>) at iteration h with ϕ = ϕh, the feature map returned by in <ref>.
We first verify that the classes _1:h realize the reward functions specified in (<ref>) in the sense of <Ref>.
Under <ref>, the function classes _1:h in (<ref>) realize, in the sense of (<ref>), the reward functions in (<ref>) for any ϕ∈Φ and θ∈(1). Furthermore, the functions in _1:h are uniformly bounded by 2√(d), and ln__t(ε)≤ln |Φ|+ d ln (2√(d) /ε), for all t∈[h], where we recall that _(ε) denotes the ε-covering number of in ℓ_∞-distance (see <ref>).
Fix ϕ∈Φ and θ∈(1), and let r'_t(·,·)≡ r'_t(·,·; θ, ϕ), for t∈[h]. Further, for t∈[h] and π∈^t+1:h, we define the state-action value function (Q-function) at layer t with respect to the rewards r'_1:h and partial policy π:
∀ (x,a)∈_t×, Q^π_t(x,a) r'_t(x,a)+^π[.∑_ℓ=t+1^h r'_ℓ(_ℓ,_ℓ) | _t=x,_t=a].
For t=h, we clearly have that for any π∈^h:h, Q^π_h(·,·)=r'_h(·,·)∈_h. For t<h and π∈^t+1:h, we have by the low-rank structure that
Q^π_t(x,a) = ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·ϕ^⋆_t(x,a)^⊤μ_t+1^⋆(y) ν (y),
= ϕ^⋆_t(x,a)^⊤( ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y)).
Now, by the fact that ^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ∈ [-1,1], for all y∈_t+1 (since ϕ(·,·)∈(1), for all ϕ∈Φ), and the normalizing assumption made on ([h])_h∈[H] in <ref> (i.e. that for all g:_t+1→0,1, *∫__t+1[t+1](y)g(y) ν(y)≤√(d)), we have that
w_t ∫__t+1^π[r'_h(_h,_h)|_t+1=y,_t+1=π(y)] ·μ_t+1^⋆(y) ν (y) ∈(2√(d)).
This, together with (<ref>) and the fact that [t]∈Φ (by <ref>), implies that Q_t^π∈_t. The bound on the covering number __t(ε) follows from a standard bound on the covering number of the ball (2√(d)) <cit.>.
Combining <Ref> with <Ref> (with =0) results in the following bound on the quality of as an approximate linear optimization oracle.
Let ,δ∈(0,1) be given and fix h∈[H]. Given θ∈(1) and ϕ∈Φ, let be the output of when given input (h, r'_1:h(·, ·;θ,ϕ), _1:h, P1:h, n), where
* The reward functions r'_1:h(·, ·;θ,ϕ) are as in (<ref>).
* The function classes _1:h are as in (<ref>).
* Pt(Ψt), for each t∈[h], and the collection of policies Ψ1:h satisfy (<ref>).
Then, under <ref>, with probability at least 1-δ, we have that
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + _(n,δ),
where _(n,δ) c H A d √(d n^-1 (d ln (2n d^1/2)+ln (|Φ|/δ))) for a sufficiently large absolute constant c>0.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within . We first show that (<Ref>) is a valid choice for the subroutine passed to .
Let δ∈(0,1), h∈[H], ϕ∈Φ, π∈, and n∈ℕ be given. The output _h= (h,ϕ,π, n) (<ref>) satisfies, with probability at least 1-δ,
_h - ^π[ϕ(_h,_h)] ≤_(n,δ),
where _ c ·√(n^-1·log (1/δ)) and c>0 is a sufficiently large absolute constant.
By a standard vector-valued concentration bound in euclidean space (see for example <cit.>) and the fact that ϕ(x, a)≤ 1 for all x ∈ and a ∈, there exists an absolute constant c>0 such that with probability at least 1 - δ,
_h - ^π[ ϕ(_h, _h) ]≤ c ·√(log(1/δ)/n).
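Concretely, the estimator is the empirical mean of the feature vector over n independent rollouts; a minimal sketch (with hypothetical environment and policy interfaces) is below.

```python
import numpy as np

def est_vec(env, policy, phi, h, n, d):
    """Monte Carlo estimate of E^pi[phi(x_h, a_h)] from n rollouts."""
    total = np.zeros(d)
    for _ in range(n):
        x = env.reset()
        for t in range(h):
            a = policy(t, x)
            if t == h - 1:          # record features at the target layer
                total += phi(x, a)
            x = env.step(a)
    return total / n
```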
Recall that in , we instantiate passing as and as . Combining <Ref> with the general guarantee for in <Ref>, we have the following result.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with ,>0, δ∈(0,1), and feature class Φ satisfying <ref>. Further, let h denote the feature map returned by in <Ref> at iteration h. If Ψ1:h in <ref> satisfy (<ref>) and =(A,d,H,ln(|Φ|/δ)) is sufficiently large, then with probability at least 1 - δ/2H, we have that
* The number of iterations of in <Ref> of <Ref> is at most N ⌈d/2log_2( 100d/ε^2)⌉.
* The output (π_1, …, π_d) of has the property that for all π∈, there exist β_1,…,β_d∈[-2,2] such that
*^(h),π - ∑_i=1^d β_i ^(h),π_i≤ 3 d ε, where ^(h),π'^π'[h(_h,_h)].
By <Ref>, on the event that the instances of and used by satisfy <Ref> with ε' = ε/2, the two prerequisite assumptions of the lemma hold; we instantiate the guarantee in <ref> with C=2, as used by <ref>. We claim that each call to and to satisfies <Ref> with probability at least 1- δ/8 d N H. Because each of and gets called at most 4 d N times per iteration of , a union bound concludes the proof contingent on this claim.
We now prove the claim. First, note that the instance of that uses within <ref> is of the form:
(h, r_1:h(·, ·, θ), _1:h, P1:h, n_)
with r_1:h and _1:h as in <Ref>, and Pt(Ψt) for each t∈[h]; this matches the form in <Ref> ('s guarantee) with ϕ = ϕh, which implies that with probability at least 1- δ/8 d N H, the output of the instance in (<ref>) satisfies:
max_π∈θ^⊤^π[ϕ(_h,_h)]≤θ^⊤^[ϕ(_h,_h)] + c θ H A d √(d · (d ln (2n_ d^1/2)+ln (8 dNH|Φ|/δ))/n_),
for a sufficiently large absolute constant c>0. Thus, by choosing
n_ = ·ε^-2 A^2 d^3 H^2 · (d +ln (|Φ|/δ)),
for =(A,d,H,ln(|Φ|/δ)) sufficiently large, the second term on the right-hand side of (<ref>) is bounded by εθ/2, which implies the claim for the invocation of within . Similarly, the choice of n_ in <Ref> ensures that the claim holds for the invocation of within by <Ref>.
§.§ Guarantee for as a Subroutine for
In this section, we prove a guarantee for the invocation of within .
Recall that Ph= (Ψh) is the distribution over policies that passes to at iteration h∈[H-2] to compute feature map ϕh. Thus, by invoking <ref> in <ref> and using the choice of n_ in <ref>, we immediately obtain the following corollary.
Let δ,∈(0,1), and be as in <ref>, and fix h∈[H-2]. Suppose that the feature class Φ satisfies <ref>. Then, with probability at least 1-δ/2H, the instance of in <ref> of <ref> runs for t≤· d iterations for = (A,d,H,log(|Φ|/δ)) sufficiently large, and returns output ϕh such that for all f∈, there exists w_fh∈(3d^3/2) satisfying
^(Ψh)[∑_a∈(ϕh(_h,a)^⊤wh_f - ϕ_h^⋆(_h,a)^⊤w_f)^2] ≤η^2/64 A^2 d^2,
where w_f ∫__h+1 f(y) (y) ν(y).
§.§ Concluding the Proof of thm:spanrlmain
In this section, we conclude the proof of the main guarantee (<ref>). We derive the guarantee from the following inductive claim.
Consider iteration h∈ [H] of (Φ,,,δ) (<ref>) with parameters ,>0, δ∈(0,1) and a feature class Φ satisfying <ref>. Further, assume that:
* The collection of policies Ψ1:h+1 at the start of the hth iteration of satisfy (<ref>).
* <ref> (reachability) holds with η>0.
* The input parameter ε to is set to ε=η/36 d^5/2.
* The input parameter =(A,d,H,ln (|Φ|/δ)) is sufficiently large.
Then, with probability at least 1-δ/H, the set of policies Ψh+2 produced by (Φ,,,δ) at the end of iteration h is an (1/ Ad,0)-policy cover for layer h+2.
With this, we can now prove <ref>.
Note that it suffices to prove that (<ref>) holds for h=H-1 with probability at least 1-δ. To do this, we proceed by induction over h=1,…,H-1. The base case of h=1 trivially holds because Ψ1=∅ and Ψ2={π_}. The induction step now follows by <ref> and the union bound (see <ref>).
The number of trajectories used by is dominated by calls to . Since is called O(dln (d/)) times at each iteration of (<ref>), and each call to requires at most H n_ trajectories, the total number of trajectories after H iterations of is bounded by O(H^2 d n_). By plugging the choices for n_ and from the theorem statement, we obtain the claimed sample complexity.
Before proving <ref>, we make the following simple observation.
For any π∈, h∈ [H-1], any x∈_h+1, we have
(x)^⊤^π[ϕ_h^⋆(_h,_h)]=d^π(x)≥ 0.
The equality follows by construction. The non-negativity of d^π(x) follows by definition of a probability density.
We now prove <ref>.
Let _h and _h' denote the success events in <ref> and <ref>, respectively, and note that by the union bound, we have [_h ∩_h']≥ 1 - δ/H. For the rest of this proof, we will condition on _h ∩_h'.
Throughout, we denote
ϕ_h^⋆,π^π[ϕ_h^⋆(_h,_h)], ∀ h∈[H], ∀π∈.
Because Ψ1:h+1 satisfy (<ref>) (i.e., are a policy cover) it holds by <Ref> that for all x∈_h,
max_π∈Ψh[h](x)^⊤ϕ_h-1^⋆,π≥α·sup_π∈[h](x)^⊤ϕ_h-1^⋆,π, for α1/ A d.
We will show that with probability at least 1-δ/H, the policy set Ψh+2 has the same property for layer h+2; that is, for all x∈_h+1,
max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆,π≥α·sup_π∈[h+2](x)^⊤ϕ_h+1^⋆,π.
Again, by <ref> this is equivalent to the statement that Ψh+2 is an (1/ Ad,0)-policy cover for layer h+2.
For the remainder of the proof, we will fix x∈_h+2 and let π_x ∈_π∈[h+2](x)^⊤ϕ_h+1^⋆,π. Our goal is to show that the inequality <ref> holds for x.
Preliminaries Note that since x∈_h+2, we have [h+2](x)>0. It will be convenient to introduce a function f: _h+1→ defined by
f(y)θ_x^⊤ϕ^⋆_h+1(y,π_x(y)), where θ_x [h+2](x)/[h+2](x).
Further, we define
w_x ∫__h+1 f(y) (y) ν(y).
By definition of π_x, we have that for all y∈_h+1,
θ_x^⊤ϕ^⋆_h+1(y,π_x(y)) = max_a∈θ_x^⊤ϕ^⋆_h+1(y,a).
This together with the fact that θ_x=1 implies that
f ∈ = {. x ↦max_a∈θ^⊤ϕ(x,a) | θ∈(1), ϕ∈Φ};
the discriminator class in <ref> of .
Note also that since x∈_h+2, we have by reachability that
w_x^⊤ϕ_h^⋆, π_x= θ_x^⊤ϕ_h+1^⋆,π_x=1/*[h+2](x)max_π∈[h+2](x)^⊤ϕ_h+1^⋆,π≥η>0.
Applying the guarantee for
Moving forward, let h be the feature map returned by at the hth iteration of <ref>, and define ϕ^(h),π^π[ϕh(_h,_h)], for any π∈. Further, let w_xh be the vector w_fh in <ref> with f=f_x, and note that
w_xh≤ 3 d^3/2.
By Jensen's inequality, we compute
( wh_xϕ^(h),π_x- w_xϕ_h^⋆, π_x)^2
≤^π_x[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2], (Jensen's inequality)
= ∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π_x_h-1ν(y), (Low-Rank MDP)
≤α^-1max_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x -ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by (<ref>))
≤α^-1∑_π̃∈Ψh∫__h( h(y,π_x(y))^⊤ wh_x - ϕ_h^⋆(y,π_x(y))^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y), (by <ref>)
≤α^-1∑_π̃∈Ψh∑_a∈∫__h( h(y,a)^⊤ wh_x - ϕ_h^⋆(y,a)^⊤ w_x )^2[h](y)^⊤ϕ^⋆,π̃_h-1ν(y),
=A α^-1 d·^(Ψh)[( h(_h,_h)^⊤ wh_x - ϕ_h^⋆(_h,_h)^⊤ w_x )^2],
where the last step follows by the definition of Ψh in <ref> and that Ψh = d. Now, since w_x = ∫__h+1 f(y) (y) ν(y) (see (<ref>)) and f∈ (see (<ref>)); the guarantee for in <ref> together with (<ref>) implies that (conditioned on the event )
| wh_x^(h),π_x- w_xϕ_h^⋆, π_x| ≤√(A dη^2/64 α A^2 d^2)≤η/4.
Applying the guarantee for
Letting π_1,…,π_d be the policies returned by at iteration h of , the guarantee of in <ref> implies that there exist β_1, …, β_d∈[-2,2] such that
*ϕ^(h),π_x-∑_i=1^d β _iϕ^(h),π_i≤ 3 d ε≤η/12 d^3/2,
where the last inequality follows by the fact that ε = η/36 d^5/2. Combining (<ref>) with (<ref>) and using the triangle inequality, we get that
w_x^⊤ϕ_h^⋆, π_x ≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + wh_x·η/12 d^3/2 +η/4,
≤∑_i=1^d β_i w_x^⊤ϕ_h^⋆, π_i + η/4+η/4, (by (<ref>))
≤ 2d max_i∈[d] w_x^⊤ϕ_h^⋆, π_i + η/2.
Combining this with (<ref>) and rearranging implies
w_x^⊤ϕ_h^⋆, π_x≤ 4d·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i.
On the other hand, by definition of w_x, we have
max_i∈[d] w_x^⊤ϕ_h^⋆, π_i = max_i∈[d]θ_x^⊤ϕ_h+1^⋆, π_i∘_h+1π_x,
= 1/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_x[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)],
≤A/*[h+2](x)max_i∈[d]^π_i ∘_h+1π_[[h+2](x)^⊤ϕ_h+1^⋆(_h+1,_h+1)], (see below)
= A/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π,
where the inequality follows from the non-negativity of _h+1(·)_h+1(x,a), for all (x,a)∈_h× (due to <Ref>), and (<ref>) follows from the definition of Ψh+2 in <Ref> of <Ref>. Combining (<ref>) and (<ref>) then implies that
1/*[h+2](x)[h+2](x)^⊤ϕ_h+1^⋆, π_x =θ_x^⊤ϕ_h+1^⋆,π_x= w_x^⊤ϕ_h^⋆, π_x ≤ 4d ·max_i∈[d] w_x^⊤ϕ_h^⋆, π_i,
≤4 A d/*[h+2](x)max_π∈Ψh+2[h+2](x)^⊤ϕ_h+1^⋆, π.
This, together with <ref>, implies that (<ref>) holds. Since this argument holds uniformly for all x∈_h+2, this completes the proof.
§.§ Proof of lem:barycentricspannerknownphi
By definition for x ∈_h+1, we have d^π(x) = ^π[ (x)^⊤[h](_h, _h)]. Let π_x denote the policy maximizing d^π(x) (if no such maximizer exists, we may pass to a maximizing sequence) and let Ψ = {π_1, …, π_d }. Then, we have for some β_1, …, β_d ∈ [-C, C],
d^π_x(x) = (x)^⊤(∑_i = 1^d β_i [π_i]) + (x)^⊤( [π_x] - ∑_i = 1^d β_i[π_i]),
≤ C d ·max_i ∈[d](x)^⊤[π_i] + ε·(x)
, (Cauchy-Schwarz)
≤ C d ·max_i ∈[d](x)^⊤[π_i] + 1/2d^π_x(x),
where the inequality follows by the fact that <ref> holds with ε≤η/2. The result now
follows by rearranging.
§ GENERIC GUARANTEE FOR
In this section, we give a generic guarantee for the algorithm when invoked with oracles and satisfying the following assumption.
[ and as approximate Linear Optimization Oracles]
For some abstract set and a collection of vectors {w^z∈^d | z∈} indexed by elements in , there exists ε'>0 such that for any θ∈^d∖{0} and z∈, the outputs ẑ_θ(θ/θ) and ŵ_z (z) satisfy
sup_z∈θ^⊤ w^z≤θ^⊤ w^ẑ_θ +ε' ·θ, and ŵ_z - w^z≤ε' .
Letting {w^z | z∈} and assuming that ⊆(1), the next theorem bounds the number of iterations of ((·),(·), ·,·) under <ref>, and shows that the output is an approximate barycentric spanner for (<ref>). Our result extends those of <cit.>, in that it only requires an approximate linear optimization oracle, which is potentially of independent interest.
Fix C>1 and ε∈(0,1) and suppose that {w^z | z ∈}⊆(1). If (<Ref>) is run with parameters C, ε>0 and oracles , satisfying <ref> with ε'=ε/2, then it terminates after d + ⌈d/2log_C(100 d/ε^2)⌉ iterations, and requires at most twice that many calls to each of and . Furthermore, the output z_1:d has the property that for all z∈, there exist β_1,…,β_d∈[-C,C], such that
*w^z - ∑_i=1^dβ_i w^z_i≤3Cd ε/2.
The proof follows similar steps to those in <cit.>, with modifications to account for the fact that linear optimization over the set {w^z | z∈} is only performed approximately.
Part I: Bounding the number of iterations
In <Ref>, there are two loops, both of which require two calls to and per iteration. As the first loop has exactly d iterations, it suffices to bound the number of iterations in the second loop.
Let Mi (w_1,…, w_i, e_i+1, …, e_d) be the matrix whose columns are the vectors at the end of the ith iteration of the first loop (<ref>) of <ref>; note that columns i+1 through d are unchanged at this point in the algorithm. For i∈[d], we define ℓ_i(w) (w,Mi_-i) and θ_i((e_j, Mi_-i))_j∈ [d]∈^d, where we recall that for any matrix A, the matrix A_-i is defined as the result of removing the ith column from A. Note that ℓ_i is linear in w, and in particular
ℓ_i(w) w^⊤θ_i.
Let W0 Md = (w_1, …, w_d), and let Wj denote the resulting matrix after j iterations of the second loop (<Ref>) of <ref>. We will show that for any J≥ 1,
(WJ) ≤(W0) ·( 100 d/ε^2)^d/2.
By construction of the loop, we have (Wj) ≥ C ·(Wj-1) for each j ∈[J], and thus (WJ) ≥(W0) · C^J. Combining these two facts will establish the bound on the iteration complexity. We now prove (<ref>).
Let u_i = e^⊤_i(Mi)^-1 (note that u_i is a row vector) and let U denote the matrix whose ith row is u_i. We observe that for all w ∈^d,
u_iw = ℓ_i(w)/ℓ_i(w_i),
where we note that ℓ_i(w_i) ≠ 0 by construction; indeed, the columns of Mi are a basis for ^d because (Mi) ≠ 0, and the equality holds on the columns, so the two linear functions must be equal. Now, since <ref> holds with ε'=ε/2, we have
θ^⊤_iw_i^+≥sup_z ∈θ^⊤_iw^z - ε/2θ_i, and θ^⊤_iw_i^-≤inf_z ∈θ^⊤_iw^z + ε/2θ_i,
where w_i^± = (z_i^±). We will now show that
ℓ_i(w_i) ≥ε/2·θ_i.
There are two cases. First, suppose that θ^⊤_iw_i^+≥ - θ^⊤_iw_i^-, corresponding to the conditional in <Ref> of <ref> being satisfied. Combining this with (<ref>), we have
θ_i^⊤ w_i^+ ≥( sup_z∈θ_i^⊤ w^z -ε/2θ_i) ∨ (-θ_i^⊤ w_i^-),
≥( sup_z∈θ_i^⊤ w^z -ε/2θ_i)∨( sup_z∈ -θ_i^⊤ w^z -ε/2θ_i), (by (<ref>))
= ( sup_z∈θ_i^⊤ w^z )∨( sup_z∈ -θ_i^⊤ w^z ) - ε/2θ_i,
≥ - ε/2θ_i.
Because the conditional is satisfied, w_i = w_i^+ + ε·θ_i/θ_i, and so by plugging this into (<ref>), we have
ℓ_i(w_i) = θ^⊤_iw_i≥ε/2·θ_i.
The case that θ^⊤_iw_i^+≤ - θ^⊤_iw_i^- is essentially identical, establishing (<ref>). Now, recall that { w^z | z ∈} and let ⊕( 3ε/2) { w + b | w ∈ and b ∈( 3ε/2) } denote the Minkowski sum with ( 3ε/2). By Cauchy-Schwarz, it holds that for all w' w + b ∈⊕( 3ε/2),
ℓ_i(w') = θ^⊤_iw' = θ^⊤_iw + θ^⊤_ib≤( 1 + 3 ε/2) ·θ_i,
where we used that ⊆(1) (by assumption). Thus, for any w' ∈⊕( 3ε/2), we have
u_iw' = ℓ_i(w')/ℓ_i(w_i)≤ (1+3 ε/2)/(ε/2) ≤ 5/ε .
We now observe that by construction and the fact that <ref> holds with ε'=ε/2, the kth column w_k' of WJ belongs to ⊕( 3 ε/2), for any k∈[d]. Thus, the (i,k) entry u_iw_k' of U WJ satisfies |u_iw_k'| ≤ 5/ε, and so the columns of U WJ have Euclidean norm at most 10 √(d)/ε. Since the magnitude of the determinant of a matrix is upper bounded by the product of the Euclidean norms of its columns, it holds that (U WJ)≤( 100 d/ε^2)^d/2.
On the other hand, again by construction, we see that the columns w_1,…, w_d of W0 satisfy u_iw_j=0, for j<i, and u_iw_i=1. Thus, U W0 is an upper-triangular matrix with 1s on the diagonal, and hence has determinant 1. Because determinants are multiplicative, this implies that (U) ≠ 0. We now compute:
(WJ) = (U WJ)/(U) = (U WJ)/(U W0)≤( 100 d/ε^2)^d/2.
Thus, the upper bound on (WJ) holds and the claim is proven. Therefore, we have
C^J ≤( 100 d/ε^2)^d/2,
and so J ≤⌈d/2log_C( 100 d/ε^2)⌉.
Part II: Spanner property for the output Having shown that the algorithm terminates, we now show that the result is an approximate barycentric spanner for . Let W (w_1, …, w_d) be the matrix at termination of the algorithm. By definition, if the second loop (<Ref>) has terminated, then for all i∈[d],
max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +ε·θ_i≤ C · |(w_i,W_-i)|,
where θ_i = ((e_j, W_-i))_j∈[d]∈^d. On the other hand, by <ref>, (<ref>) holds, and so
∀ z∈, ∀ i ∈ [d], |(w^z,W_-i)| = |θ_i^⊤ w^z| ≤max(θ_i^⊤ w_i^+, - θ_i^⊤ w_i^-) +ε·θ_i,
≤ C· |(w_i,W_-i)|.
Now, fix z∈. Since (W) ≠ 0, there exist β_1:d∈ such that w^z= ∑_i=1^d β_i w_i. By plugging this into (<ref>) and using the linearity of the determinant, we have
∀ i∈[d], C· |(w_i,W_-i)| ≥ |(w^z,W_-i)| = |∑_j=1^d β_i (w_j,W_-i)| = |β_i| · |(w_i,W_-i)|.
Therefore, |β_i|≤ C, for all i∈[d]. Now, by definition of w_1:d and w_1:d, for all i∈[d], we have that w_i - w_i≤ε. Furthermore, by <ref>, we also have that w_i -w^z_i≤ε/2. Therefore, by the triangle inequality, we have
w^z- ∑_i=1^d β_i w^z_i≤w^z- ∑_i=1^d β_i w_i + ∑_i=1^d|β_i| w_i - w^z_i + ∑_i=1^d|β_i| w_i - w_i ≤ 3d C ε/2.
This completes the proof.
§ PROPERTIES OF REACHABILITY ASSUMPTION
In this section, we compare the η-reachability assumption used by
(<ref>) to different reachability
assumptions used throughout the literature on RL in Low-Rank MDPs. In
<ref>, we demonstrate an exponential separation
between our notion of reachability and notions considered in the so-called latent variable model <cit.>. In <ref>, we consider a number of other reachability assumptions and show that they imply <Ref>.
§.§ Comparison to Latent Variable Model
In this subsection, we show that our reachability assumption is
implied by a reachability assumption used by
<cit.> in the latent
variable/non-negative feature model, and show that our reachability
assumption can hold even when the best possible latent variable
embedding dimension is exponential in the dimension d. We begin by
defining the latent variable model.
Given a transition operator T:
×→Δ(), a latent variable representation consists of a countable latent space and functions ψ:×→Δ() and
q:→Δ(), such that T(·| x,a) = ∑_z∈
q(·| z) ψ(z | x,a). The latent variable
dimension of T, denoted , is the cardinality of the smallest
latent space for which T admits a latent variable
representation.
The interpretation for the latent variable model is as follows:
* Each (x,a) pair
induces a distribution ψ(x,a) ∈Δ()
over z∈.
* The latent variable is sampled as ∼ψ(x,a).
* The next state is sampled as '
∼ q(·|).
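In code, this generative story is just two categorical draws; the snippet below makes it explicit, with dictionaries of (support, probabilities) pairs as an illustrative data structure.

```python
import numpy as np

def sample_next_state(x, a, psi, q, rng=np.random.default_rng()):
    """Draw x' by first sampling z ~ psi(.|x,a), then x' ~ q(.|z)."""
    zs, pz = psi[(x, a)]                 # latent distribution psi(.|x,a)
    z = zs[rng.choice(len(zs), p=pz)]
    xs, px = q[z]                        # emission distribution q(.|z)
    return xs[rng.choice(len(xs), p=px)]
```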
Note that in discrete state spaces, all transition operators admit a trivial latent variable
representation, as we may take ψ(x,a) = T(·| x,a), but
the dimension of such a representation is potentially infinite. A latent
variable representation certifies that there exists a factorization T(x' | x,a) =
ψ(x,a)^⊤ q(x') with embedding dimension ||, and hence gives an upper bound on the rank of the
transition operator. On the other hand, compared with the general Low-Rank factorization,
the latent variable factorization additionally requires that ψ(x,a)
and q(·| z) are probability distributions, and thus
non-negative, for all z∈ and (x,a)∈×,
implying that is equivalent to the non-negative rank <cit.> of the transition operator.
Assuming that a latent variable representation exists, <cit.> consider the following notion of reachability.
There exists η>0 such that
∀ h∈[H-1], ∀ z∈_h+1, sup_π∈^π[_h+1=z]≥η.
We first show that the latent variable reachability condition above implies our more general assumption.
Consider a Low-Rank MDP with rank d≥ 1. Under the
latent variable model in <ref>, if the latent
variable reachability condition in (<ref>) is satisfied for some η>0, then, for all h∈[H], the transition kernel T_h in admits a factorization T_h(·| x,a)=(·)^⊤(x,a), where (·)∈^ and (·,·)∈^, such that ≤ d A^2/η^2 and η^2/A √(d)-reachability (in the sense of <ref>) is satisfied.
Suppose that <ref> (η-reachability) holds. By <cit.>, the non-negative rank of is bounded as ≤ d A^2/η^2.
Letting q and ψ be as in the definition of the latent variable representation in <ref>, we define and as: for all h∈[H-1],
(·) (q(·| z))_z∈∈^, and (·,·) (ψ(z|· , ·))_z∈∈^.
Now, fix h∈[H-1] and x∈_h+1. For z_0∈_z∈_h+1q(x| z), we have
sup_π∈ d^π(x)= sup_π∈^π[_h+1 = x] = sup_π∈∑_z∈_h+1
q(x | z) ·^π[ψ(z |_h,_h)],
≥sup_π∈
q(x | z_0) ·^π[ψ(z_0 |_h,_h)],
= (x)_∞·sup_π∈^π[_h+1=z_0],
≥η·(x)_∞ , (using reachability)
≥η/√()·(x).
We now complement the result above by showing that there
exists low-rank MDPs for which our notion of reachability
(<ref>) is satisfied with η
polynomially small, yet the best possible latent variable
embedding has dimension =2^Ω(d). This contrasts
the results in <cit.>, which
show that latent variable reachability implies a polynomial
bound on the latent variable dimension.
There exists a one-step Low-Rank-MDP of rank d≥1, where η-reachability (<ref>) is satisfied with η=1/2√(d), but where the non-negative rank satisfies =2^Ω(d).
Let n ∈ℕ and d n(n-1)/2 +1. As shown
in the proof of <cit.>, there exists
a horizon-two MDP with the following properties:
* The state spaces _1 and _2 at layers 1 and 2, respectively, are finite.
* The cardinality of is d; i.e. = {a_1,…, a_d}.[Technically, the example in the proof of <cit.> does not explicitly specify the number of actions. Instead, the example assigns a number of state-action pairs to vectors in ^d, without specifying the number of actions. The number of actions in their example is a degree of freedom, which we set to d here without loss of generality.]
* The transition kernel T_1 admits the factorization:
T_1(·| x,a) = [2](·)^⊤ϕ_1^⋆(x,a)∈Δ(_2), ∀ (x,a)∈_1×,
where for all x'∈_2, [2](x')∈_≥ 0^d, and for all (x,a)∈_1 ×, ϕ_1^⋆(x,a)∈_≥0^d.
* The non-negative rank of is =2^Ω(d).
We augment this MDP by adding an extra state , and let
_1_1∪{}. We define
_1^⋆:_1×→_≥0^d be the
extension of ϕ_1^⋆ given by
∀ i∈[d], _1^⋆(, a_i)= e_i, and ∀ x ∈_1, _1^⋆(x, a_i)= ϕ_1^⋆(x,a_i),
where e_i is the ith basis element in ^d. We define the
initial state distribution to have ρ()=1/2 and
ρ(x)=1/(2 |_1|), for all x∈_1.[We note
that <cit.> did not specify the initial
distribution, which is not needed for the conclusion of their
result.] We let =(_1∪_2,,
_1^⋆,([h])_h∈[2],) denote the resulting
MDP. Note that adding an extra state at layer 1 in this fashion only adds d additional rows to the transition matrix T (viewed as a (|_1×|)× |_2| matrix). Therefore, the non-negative rank of is at least that of .
We now show that reachability is satisfied in . Let π_i be the policy that always plays action a_i. With this, we have that for any x'∈_2,
sup_π∈ d^π(x') ≥max_i∈[d] d^π_i(x'),
= max_i∈[d][2](x')^⊤[_1^⋆(_1,a_i)] ,
= max_i∈[d]{[𝕀{_1=}·[2](x')^⊤_1^⋆(_1,a_i)] +[𝕀{_1≠}·[2](x')^⊤_1^⋆(_1,a_i)] },
≥max_i∈[d]ρ() [2](x')^⊤_1^⋆(,a_i).
where the last inequality follows by the fact that, for all (x,a)∈_1×, [2](·)^⊤_1^⋆(x,a)=[2](x')^⊤ϕ_1^⋆(x,a) ≥ 0
(since [2](x')^⊤ϕ_1^⋆(x,a) is a conditional
density). On the other hand, from the construction of _1^⋆ and the fact that [2](x')∈^d_≥ 0, we have
max_i∈[d][2](x')^⊤_1^⋆(,a_i)=[2](x')_∞≥[2](x')/√(d).
Combining this with (<ref>) and using that ρ(x_0)=1/2
implies that reachability with parameter 1/(2√(d)) is satisfied in .
§.§ Relation to Other Reachability Assumptions
In this subsection, we show that <ref> is implied
by a notion of feature coverage used in the context of transfer
learning in Low-Rank MDPs <cit.>, as well as a notion of
explorability used in the context of reward-free RL in linear
MDPs <cit.>.
§.§.§ Feature Coverage
We first consider the coverage condition used by <cit.>, which involves the second moments of the feature map .
We say that the linear MDP with featurization _h satisfies η-feature coverage if for all h ∈ [H],
sup_π∈λ_min(^π[(_h,_h)(_h,_h)^⊤]) ≥η.
We show that η-feature coverage implies
(η/2)^3/2-reachability. Thus, up to polynomial dependence,
η-feature coverage is a special case of <ref>.
Suppose that an MDP satisfies η-feature coverage as in <ref> for some η>0. If (x,a)∈(1) for all x,a, then the MDP satisfies (η/2)^3/2-reachability in the sense of <Ref>.
Let h∈ [H] and x∈_h+1 be given and define
θ(x)/(x).
To keep notation compact, we define _h ϕ_h^⋆(_h,_h). By η-feature coverage, there exists π∈ such that
η≤^π [(θ^⊤_h)^2] = ^π [𝕀{(θ^⊤_h)^2 < η/2}· (θ^⊤_h)^2] + ^π [𝕀{(θ^⊤_h)^2 ≥η/2}· (θ^⊤_h)^2] ,
≤η/2 + ^π [(θ^⊤_h)^2 ≥η/2],
where we have used that θ=1 and ϕ_h^⋆(x,a)≤ 1 for all (x,a)∈_h×. Rearranging (<ref>) and using that θ^⊤_h≥ 0 (it is a scaled conditional density), we have
^π [θ^⊤_h ≥√(η/2)] = ^π [(θ^⊤_h)^2 ≥η/2] ≥η/2.
Now, by Markov's inequality, we have that
θ^⊤ϕ_h^⋆,π= ^π[θ^⊤_h] ≥√(η/2)·^π [θ^⊤_h ≥√(η/2)] ≥ (η/2)^3/2,
where we have once more used that θ^⊤_h≥ 0 almost surely.
§.§.§ Explorability
We now consider the explorability assumption of <cit.>, which involves the first moment of the feature map . This notion is defined as follows.
We say that a linear MDP satisfies η-explorability if for any h∈[H] and any θ∈^d∖{0} it holds that
sup_π∈ |θ^⊤^π[(_h,_h)]| ≥η·θ.
We now show that η-explorability is a special case of η-reachability:
Suppose that the explorability condition in <ref> is satisfied with η>0. Then, η-reachability is satisfied.
Let x∈_h+1 and define θ(x). By explorability, we have that
sup_π∈ d^π(x) = sup_π∈^π[(x)^⊤(_h,_h)],
= sup_π∈ |^π[(x)^⊤(_h,_h)]|, ((·)^⊤(x,a) is a conditional law)
= sup_π∈ |θ^⊤^π[(_h,_h)]|,
≥η·θ , (by explorability)
= η·(x).
This shows that <ref> is satisfied with
parameter η.
|
http://arxiv.org/abs/2307.04388v1 | 20230710075157 | Core localized alpha-channeling via low frequency Alfven mode generation in reversed shear scenarios | [
"Zhiyong Qiu",
"Shizhao Wei",
"Tao Wang",
"Liu Chen",
"Fulvio Zonca"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
Relieving the S_8 Tension: Exploring the Surface-type DBI Model as a Dark Matter Paradigm
Huanyuan Shan
August 12, 2023
=========================================================================================
A novel channel for fuel ion heating in tokamak core plasma is proposed and analyzed using nonlinear gyrokinetic theory. The channel is achieved via spontaneous decay of a reversed shear Alfvén eigenmode (RSAE) into low frequency Alfvén modes (LFAM), which then heat fuel ions via collisionless ion Landau damping. The conditions for RSAE spontaneous decay are investigated, and the saturation level and the consequent fuel ion heating rate are also derived. The channel is expected to be crucial for future reactors operating under reversed shear configurations, where fusion alpha particles are generated in the tokamak core, in which the magnetic shear is typically reversed and there is a dense RSAE spectrum due to the small characteristic dimensionless orbits of alpha particles.
Energetic particle (EP) and fusion alpha particle physics <cit.> are key elements in understanding the performance of future fusion reactors, among which two crucial topics are EP transport loss driven by self-generated collective oscillations such as shear Alfvén wave (SAW) eigenmodes <cit.> and the search for alternative/complementary routes to transfer EP power to fuel ions, i.e., alpha-channeling <cit.>. Both processes are influenced by the saturation level and spectrum of SAWs. In this contribution, a channel for reversed shear Alfvén eigenmode (RSAE) <cit.> nonlinear saturation is proposed and analysed, which is expected to play a significant role in future reactor-scale tokamaks with a rich spectrum of core-localized RSAEs <cit.> due to the reversed shear magnetic configuration and small dimensionless EP orbit size.
In this proposed process, an RSAE spontaneously decays into another RSAE and a low frequency Alfvén mode (LFAM), which can be ion Landau damped, leading to effective heating of thermal ions in the reversed shear region, and consequently, enhanced fusion performance.
We consider for simplicity low-β_i plasma such that the frequency separation between RSAE and LFAM required for resonant mode coupling can be well satisfied. The nonlinear coupling is dominated by thermal plasma contribution, while the RSAEs are excited by EPs, so the thermal plasma nonuniformity can be neglected, which is also consistent with the advanced scenario of reversed shear configuration.
The governing equations describing nonlinear interactions among RSAEs and LFAM with all predominantly SAW polarization can be derived from nonlinear gyrokinetic vorticity equation <cit.>
and quasi-neutrality condition,
with the particle response derived from nonlinear gyrokinetic equation <cit.>.
The general equation for three SAWs nonlinear interaction, with the matching condition being Ω_3(ω_3,𝐤_3)=Ω_1(ω_1,𝐤_1)+Ω_2(ω_2,𝐤_2), can be derived as
b_k_3ℰ_k_3δϕ_k_3 = - i/ω_3Λ^k_3_k_2,k_1[ (b_k_2-b_k_1)(1-k_∥1k_∥2V^2_A/ω_1ω_2) +b_k_3V^2_Ak_∥ 3/ω_3(k_∥ 1/ω_1 - k_∥ 2/ω_2) ]δϕ_k_1δϕ_k_2,
with ℰ_k≡ -k^2_∥ V^2_A/ω^2_k + 1 - ω^2_G/ω^2_k being the SAW dielectric function in the WKB limit, ω_G≡√(7/4+T_e/T_i) v_i/R_0 being the leading order geodesic acoustic mode frequency <cit.>, accounting for SAW continuum upshift and creation of beta-induced continuum gap, and Λ^k_k”,k'≡ (c/B_0)𝐛̂·𝐤”×𝐤' with 𝐛̂ being the unit vector along the equilibrium magnetic field 𝐁_0.
Equation (<ref>) describes the nonlinear evolution of SAWs, with Ω_3 modified by the beating of Ω_1 and Ω_2, the first term on the right hand side due to the competition of Reynolds and Maxwell stresses, and the second term from finite parallel electric field contribution to field line bending. Note that, since (ω_1+ω_2)≃ (k_∥ 1+k_∥ 2)V_A, Ω_3 naturally satisfies the SAW D.R. and can be strongly excited if it is a normal mode of the system, leading to significant spectral transfer of SAW turbulence.
We note that, in the expression of ℰ_k, effects of wave-particle interactions are not included, consistent with the k_∥v_i≪ω_k ordering for bulk non-resonant ions. However, finite Landau damping due to resonance with ions is crucial for alpha-channeling, and will be recovered formally in the later analysis by inclusion of the anti-Hermitian part of ℰ_k <cit.>.
§ PARAMETRIC DECAY OF RSAE
Equation (<ref>) will be applied to the nonlinear decay of a pump RSAE Ω_0(ω_0, 𝐤_0) into a RSAE sideband Ω_1(ω_1, 𝐤_1) and a LFAM Ω_B(ω_B, 𝐤_B), with the frequency/wavenumber matching condition Ω_0=Ω_1+Ω_B assumed without loss of generality.
For RSAE and LFAM being dominated by single-n and single-m mode structures, we take
δϕ_k=A_k(t)Φ_k(x) exp(-iω_k t+inξ-imθ), with A_k(t) being the slowly varying mode amplitude, Φ_k(x) the parallel mode structure localized about q_min with x≡ nq-m, and the normalization condition ∫ |Φ_k|^2 dx=1 is satisfied.
For the effective transfer of alpha particle energy to core ions, ω_B≤ O(v_i/(qR_0)), and thus, |ω_B|≪ |ω_0|, |ω_1| and k_∥ B≃ 0. Thus, the q_min surface also corresponds to the rational surface of Ω_B, i.e., Ω_B is the LFAM in the reversed shear configuration, as investigated theoretically <cit.>. We then have, ω_0≃ω_1 and k_∥0≃ k_∥ 1. Effects of small frequency mismatch on the decay process will be discussed later.
The nonlinear RSAE sideband and LFAM equations can be derived from equation (<ref>) as
b̂_1ℰ̂_1 A_1 = -i/ω_1⟨Λ^k_1_k_0,k_B^*α_1 Φ_1Φ_0Φ_B⟩_x A_0 A_B^*,
b̂_Bℰ̂_B A_B = -i/ω_B⟨Λ^k_B_k_0,k_1^*α_B Φ_BΦ_0Φ_1⟩_x A_0 A_1^*,
with α_1≡ (b_0-b_B)(1- k_∥ Bk_∥0V^2_A/(ω_0ω_B)) + b_1 V^2_A (k_∥ 1/ω_1 ) (k_∥ B/ω_B - k_∥0/ω_0), α_B ≡ (b_0-b_1)(1- k_∥ 1k_∥0V^2_A/(ω_0ω_1)) + b_B V^2_A (k_∥ B/ω_B ) (k_∥ 1/ω_1 - k_∥0/ω_0), ⟨⋯⟩_x≡∫⋯ dx denoting averaging over the fast radial scale, b̂_1ℰ̂_1≡∫Φ_1 b_1 ℰ_1Φ_1 dx being the Ω_1 eigenmode local dispersion function, and b̂_Bℰ̂_B being the local dispersion function for the LFAM eigenmode.
The parametric decay dispersion relation for RSAE decaying into another RSAE and LFAM can then be derived by combining equations (<ref>) and (<ref>)
ℰ̂_1ℰ̂_B^*≃(Λ̂^k_1_k_0,k_B^*)^2α̂_N/b̂_Bb̂_1 ω_Bω_1Ĉ^2 |A_0|^2,
with Ĉ≡⟨Φ_0Φ_BΦ_1⟩_x, Λ̂^k_1_k_0,k_B^*= ⟨Λ^k_1_k_0,k_B^*⟩_x, α̂_N≡α̂_1α̂_B, and Ĉ≃√(2 Δ_B/(√(π)Δ_0Δ_1)), with Δ_0∼Δ_1∼ O(1) and Δ_B∼ O(β^1/2) being the characteristic radial widths of the respective linear parallel mode structures.
Expanding ℰ̂_1≃ i ∂_ω_1ℰ̂_1(∂_t+γ_1)≃ (2 i/ω_1) (γ+γ_1) and ℰ̂_B^*≃ (-2i/ω_B) (γ+γ_B) in the local limit, with γ denoting the slow temporal variation of Ω_1 and Ω_B due to the parametric instability, and γ_1/γ_B being the linear damping rates of RSAE/LFAM accounted for by the anti-Hermitian part of ℰ_1/ℰ_B, one obtains
(γ+γ_1)(γ+γ_B)=(Λ̂^k_1_k_0,k_B^*)^2 α̂_N/4 b̂_B b̂_1Ĉ^2|A_0|^2.
The condition for the pump RSAE spontaneous decay can thus be obtained from equation (<ref>) as
α̂_N>0
and
(Λ̂^k_1_k_0,k_B^*)^2 α̂_N Ĉ^2|A_0|^2/(4 b̂_B b̂_1) >γ_Bγ_1 for the nonlinear drive overcoming the threshold due to Ω_1 and Ω_B Landau damping.
The nonlinear dispersion relation is very complex, and depends on various conditions including the polarization and mode structure of the three modes involved. For further analytical progress, the WKB limit and the strong assumption of k_∥ B→ 0 are adopted, and a parameter regime can be identified for the spontaneous decay process to strongly occur, which corresponds to k_⊥1≫ k_⊥0, such that (b_0-b_1)(b_0-b_B-b_1)>0; and α̂_N>0 can be satisfied with 1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)>0, which generally requires Ω_1 to be excited above the local SAW continuum accumulation point with n_1q_min< m_1.
The threshold condition for the RSAE spontaneous decay, for the proposed parameter region of RSAE “normal cascading" to |k_⊥1|≫ |k_⊥0|, can be estimated as
|δ B_⊥0/B_0|^2 > 4γ_1γ_B/ω_0ω_1k^2_∥0/k^2_⊥11/Ĉ^21/1-k_∥0k_∥ 1V^2_A/(ω_0ω_1)∼𝒪(10^-7),
and is comparable with or slightly higher than typical threshold condition for other dominant nonlinear mode coupling processes, e.g., ZS generation. This threshold amplitude, is also consistent with typical SAW instability intensity observed in experiments. Thus, this channel could be an important process in determining the nonlinear dynamics of RSAE.
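As a rough illustration of where the 𝒪(10^-7) figure comes from, one can evaluate the right hand side of (<ref>) with representative numbers; every value below is an assumption made for the sake of the estimate, not a parameter taken from a specific device.

```python
gamma1_over_w = 1e-2     # RSAE sideband damping rate / frequency (assumed)
gammaB_over_w = 1e-2     # LFAM damping rate / frequency (assumed)
kpar_over_kperp = 1e-2   # k_parallel,0 / k_perp,1 (assumed)
C2 = 0.3                 # |C_hat|^2, parallel mode-structure overlap (assumed)
alfven_factor = 0.5      # 1 - k_par,0 k_par,1 V_A^2/(w_0 w_1) (assumed)

threshold = (4 * gamma1_over_w * gammaB_over_w * kpar_over_kperp**2
             / (C2 * alfven_factor))
print(f"|dB_perp/B_0|^2 threshold ~ {threshold:.1e}")  # ~ 3e-7, i.e. O(1e-7)
```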
§ NONLINEAR SATURATION AND CORE-LOCALIZED ION HEATING
The RSAE saturation level can be estimated by considering the feedback of the two sidebands to the pump RSAE, which can be derived from equation (<ref>) as
b̂_0ℰ̂_0 A_0≃ -i/ω_0Λ̂^k_0_k_1,k_Bα̂_0 Ĉ A_1 A_B,
with α_0= (b_1-b_B) (1- k_∥ Bk_∥ 1V^2_A/(ω_1ω_B)) + b_0 V^2_A(k_∥0/ω_0) (k_∥ B/ω_B- k_∥ 1/ω_1). The saturation level of LFAM, can be estimated from the fixed point solution of equations (<ref>), (<ref>) and (<ref>), and one obtains,
|A_B|^2= γ_0γ_1 b̂_0b̂_1ω_0ω_1∂_ω_1ℰ_1,ℛ∂_ω_0ℰ_0,ℛ/(α̂_0 α̂_1 |Ĉ|^2 (Λ̂^k_0_k_1,k_B)^2), and the ion heating rate due to LFAM Landau damping, can be estimated as
P_i=2γ_B ω_B∂ℰ_B,ℛ/∂ω_Bn_0e^2/T_ib̂_B |A_B|^2 ∼ 10^-3γ_0 n T.
The obtained core ion heating rate due to LFAM collisionless damping can be comparable to Coulomb collisional heating estimated by n T/τ_E, with τ_E being the energy confinement time.
This channel, achieved via the Landau damping of secondary LFAM, noting that k_∥ B≪1, is highly localized around the q_min surface (this conclusion can also be obtained, noting as the “secondary" LFAM structure will be determined by the primary RSAE, with a narrower extent than the primary RSAEs), will deposit fusion alpha particle power locally and heating core ions, leading to direct improvement of fusion performance in the tokamak center. The nonlinear dynamics of RSAE with multiple channels accounted for simultaneously <cit.> is crucial for the understanding of core plasma behaviour and fusion performance of future reactors.
10
AFasoliNF2007
Fasoli A, Gormenzano C, et al, 2007 Nuclear
Fusion 47 S264
LChenRMP2016
Chen L and Zonca F 2016 Review of Modern Physics 88 015008
NFischPRL1992
Fisch N J and Rax J M 1992 Phys. Rev. Lett. 69(4) 612–615
HBerkPRL2001
Berk H L, Borba D N, Breizman B N, Pinches S D and Sharapov S E 2001 Phys.
Rev. Lett. 87(18) 185002
TWangPoP2018
Wang T, Qiu Z, Zonca F, Briguglio S, Fogaccia G, Vlad G and Wang X 2018 Physics of Plasmas 25 062509
LChenJGR1991
Chen L and Hasegawa A 1991 Journal of Geophysical Research: Space
Physics 96 1503 ISSN 2156-2202
EFriemanPoF1982
Frieman E A and Chen L 1982 Physics of Fluids 25 502–508
NWinsorPoF1968
Winsor N, Johnson J L and Dawson J M 1968 Physics of Fluids 11
2448–2450
FZoncaPPCF1996
Zonca F, Chen L and Santoro R A 1996 Plasma Physics and Controlled
Fusion 38 2011
RMaPPCF2022
Ma R, Chen L, Zonca F, Li Y and Qiu Z 2022 Plasma Physics and Controlled
Fusion 64 035019
SWeiJPP2021
Wei S, Wang T, Chen N and Qiu Z 2021 Journal of Plasma Physics 87
905870505
SWeiNF2022 Wei S, Wang T, Chen L, Zonca F and Qiu Z, 2022 Nuclear Fusion 62 126038
|
http://arxiv.org/abs/2307.04894v1 | 20230710203340 | Noise in the direction of motion determines the spatial distribution and proliferation of migrating cell collectives | [
"Jonathan E. Dawson",
"Abdul N. Malmi-Kakkada"
] | physics.bio-ph | [
"physics.bio-ph",
"cond-mat.soft",
"physics.comp-ph"
] |
[Corresponding author:][email protected]
Department of Physics and Biophysics, Augusta University, Augusta, GA 30912, USA
A variety of living and non-living systems exhibit collective motion.
From swarm robotics to bacterial swarms, and tissue wound healing to human crowds, examples of collective motion are highly diverse but all of them share the common necessary ingredient of moving and interacting agents.
While collective motion has been extensively studied in non-proliferating systems, how the proliferation of constituent agents affects their collective behavior is not well understood. Here, we focus on growing active agents as a model for cells and study how the interplay between noise in their direction
of movement and proliferation determines the overall spatial pattern of collective motion.
In this agent-based model, motile cells possess the ability to adhere to each other through cell-cell adhesion, grow in size and divide.
Cell-cell interactions influence not only the direction of cell movement but also cell growth through a force-dependent mechanical feedback process.
We show that noise in the direction of a cell's motion has striking effects on the emergent spatial distribution of cell collectives and proliferation. While higher noise strength leads to a random spatial distribution of cells, we also observe increased cell proliferation. On the other hand, low noise strength leads to a ring-like spatial distribution of cell collectives together with lower proliferation.
Our findings provide insight into how noise in the direction of cell motion determines the local spatial organization of cells with consequent mechanical feedback on cell division impacting cell proliferation due to the formation of cell clusters.
Noise in the direction of motion determines the spatial distribution and proliferation of migrating cell collectives
Jonathan E. Dawson and Abdul N. Malmi-Kakkada
August 12, 2023
====================================================================================================================
§ INTRODUCTION
The importance of the coordination between
cell division and cell migration is recognized in multiple physiological processes, such as tissue regeneration, inflammation, as well as in pathological conditions, such as cancer metastasis <cit.>.
Because cell migratory and proliferation patterns determine how cells organize spatially over time, understanding the underlying biophysical mechanisms is crucial for our ability to direct spatial organization of cells in a customizable manner.
This has important implications
for understanding tissue regeneration and cancer invasion <cit.>.
With the emergence of multiplexed tissue imaging modalities that allow for quantification of cell proliferation at single-cell resolution <cit.>, it is now possible to determine how cell-cell interactions influence cell proliferation <cit.> from spatial maps of single cells, together with higher-order relationships in space.
In a cell collective, crowding by neighboring cells limits the space available to each cell and thus imposes constraints on cell proliferation <cit.>.
Similarly, collective cell migration, a foundational collective behavior in living systems, involves
both the interaction of a cell with its environment as well as its neighbors <cit.>.
Fluctuations in the direction of a cell's motion
affect the spatial coordination of cells in a tissue <cit.>.
Despite the importance of cell-cell interactions, the relation between cell migration driven spatial organization and how it impacts cell proliferation due to physical constraints remains unclear.
Given that cells are active particles that transduce stored energy into mechanical motion, an interesting question that arises is how the coordination between cell migration and proliferation influences the spatial organization of cell collectives.
While cell growth, cell division, and cell migration are highly complex processes, involving a large network of intracellular signaling pathways <cit.>, here we focus on the
biophysical intercellular interactions that are known to play a key role in cell collective migration and proliferation <cit.>.
Mathematical and computational models of cell behaviors have contributed to a quantitative understanding of collective cell migratory behaviors and its underlying mechanisms <cit.>.
Pioneering work by Vicsek and co-workers showed that the collective dynamics of self-driven, or active particles emerge from a form of inter-particle coupling: a simple rule that an individual constituents' direction of motion is aligned with the average direction of motion of its neighbors <cit.>.
Both the number density of agents and noise in the direction of their movement are key parameters that regulate spatial patterns of collective motion.
Distinct from earlier studies, we focus on studying the coupling between noise in the directionality of cell migration and cell division.
The effect of cell division and cell death on collective cell movement has been studied in mean-field dynamical theoretical models <cit.> with recent experiments showing that cell growth and division can influence cell migratory behavior <cit.>.
Our recent work
in the context of freely expanding three-dimensional (3D) cell collectives <cit.> showed that the inter-cellular forces give rise to heterogenous cell motility patterns between the boundary and the interior of the cell collective.
In addition to cell-cell mechanical interactions, we anticipate that the noise in the cell movement direction may generate complex spatial distribution patterns with novel
implications on how cells divide.
To elucidate the role of noise on self-organization and proliferation in a migrating cell collective, we study a system of self-propelled particles with the capacity to proliferate, and whose motion is governed by local alignment rules.
Each cell can grow in size and divide upon reaching a critical size.
Cells in direct contact through cell-cell adhesion exert a force, which when exceeds a threshold inhibits cell growth and prevents cell division.
Such mechanical feedback on cell proliferation is in agreement with recently reported experimental observations <cit.>.
Cell division events in this model scramble the velocity orientation of dividing cells.
By combining mechanical and alignment cell-cell interactions with cell division events, our model is highly relevant to biological systems, such as cells, which possess an inherent capability to proliferate and migrate.
Our work provides insight into the fundamental features of expanding active matter.
Notably, we discover that noise in the direction of a cell's motion not only influences the spatial structure of cell collectives but also determines the ability of cells to proliferate.
§ MODEL DESCRIPTION AND SIMULATION DETAILS
Here we introduce the computational model we implemented to study the growth and migration of cell collectives in two-dimensions (2D).
The off-lattice agent-based model and the simulation scheme is adapted from our previous work on three-dimensional
tumor growth <cit.>.
Such off-lattice simulations are widely used to recapitulate experimentally observed features of individual cell dynamics within cell collectives <cit.>.
Individual cells are modeled as soft disk-like motile particles of radius R, Fig.<ref>A, which grow stochastically in time t, and, upon reaching a critical size, undergo division into two daughter cells.
In addition to its radius, R_i(t), the state of each cell i is characterized
by its position 𝐫_i(t) and direction of motion θ_i(t), Fig.<ref>A (Inset).
The dynamics of the proliferating and migrating cell collective is governed by the following three factors - (a) mechanical forces arising from two body interactions, (b) active processes due to cell growth, division, and death, and (c) active self-propulsion with directional noise together with neighbor interactions that align the direction of cell motion with its neighbors.
The model implementation of these factors is explained in detail below.
(a) Mechanical cell-cell interactions: Individual cells interact with short-ranged forces, consisting of two terms: elastic force (repulsion) and adhesion (attraction).
The elastic force, F_ij^el, between any pair of cells i and j of radii R_i and R_j
discourages spatial overlap between cells (Fig.<ref>B) and is given by <cit.>,
F_ij^el=h_ij^3/2/3/4(1-ν_i^2/E_i+1-ν_j^2/E_j)√(1/R_i(t)+1/R_j(t)),
where ν_i and E_i are the Poisson ratio and elastic modulus of the i^th particle. h_ij defined as max[0,R_i + R_j - |r⃗_i - r⃗_j|] is the virtual overlap distance between the
two cells <cit.>.
Biological cells adhere to their immediate physical neighbors through
cell adhesion molecules, Fig.<ref>(C).
The adhesive force, F_ij^ad, between a pair of interacting cells depends on the contact length between two cells, l_ij (see Supplemental Information SI-I for the analytical calculation of l_ij), and is given by <cit.>,
F_ij^ad=f^adl_ij1/2(c_i^recc_j^lig + c_j^recc_i^lig)
where c_i^rec (c_i^lig) is the receptor (ligand) concentration
(assumed to be normalized with respect to the maximum receptor or ligand concentration so that
0 ≤ c_i^rec, c_i^lig≤ 1).
The coupling constant f^ad allows us to
rescale the adhesion force to account for the variabilities in the maximum densities of the receptor and ligand concentrations.
Both the elastic and the adhesive forces act along the unit vector n_ij, pointing from the center of cell j to the center of cell i.
The net force (F_i) on the i^th cell is the vectorial sum of the elastic and adhesive forces that the neighboring cells exert on it,
F_i=∑_j∈ NN(i) f_ij=∑_j∈ NN(i)(F_ij^el-F_ij^ad)n_ij
here, j is summed over the number of nearest neighbors NN(i) of cell i.
The nearest neighbors of cell i are all the cells that satisfy the criterion h_ij>0.
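For concreteness, the pairwise interaction just described can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' code: the parameter values, the uniform receptor/ligand concentrations c_i^rec = c_i^lig = c, and the circle-circle chord formula used here for l_ij (the paper computes l_ij analytically in its Supplemental Information SI-I) are all assumptions.

import numpy as np

def pair_force(r_i, r_j, R_i, R_j, E=1e-3, nu=0.5, f_ad=1e-4, c=1.0):
    # Net force on cell i from cell j (2D): Hertz repulsion minus adhesion,
    # acting along the unit vector n_ij from the centre of j to the centre of i.
    d_vec = np.asarray(r_i, float) - np.asarray(r_j, float)
    d = np.linalg.norm(d_vec)
    h = max(0.0, R_i + R_j - d)                 # virtual overlap distance h_ij
    if h == 0.0 or d == 0.0:
        return np.zeros(2)                      # cells not in contact
    n_ij = d_vec / d
    # Hertzian elastic repulsion (identical E and nu for both cells assumed)
    F_el = h**1.5 / (0.75 * 2.0 * (1.0 - nu**2) / E
                     * np.sqrt(1.0 / R_i + 1.0 / R_j))
    # contact length l_ij: chord of the circle-circle intersection (assumption)
    a = (R_i**2 - R_j**2 + d**2) / (2.0 * d)
    l_ij = 2.0 * np.sqrt(max(R_i**2 - a**2, 0.0))
    # adhesion with uniform receptor/ligand concentrations c
    F_ad = f_ad * l_ij * c**2
    return (F_el - F_ad) * n_ij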
The net force due to finite area exclusion (elastic term) and cell-cell adhesion is dampened by an effective friction contribution which comes from (i) the interaction of a cell with the extracellular matrix (ECM), and (ii) cell-cell adhesion.
The friction that a cell i experiences is a time (t) dependent quantity given by,
γ_i(t) = γ_i^ECM(t) + γ_i^ad(t) .
The cell-ECM friction coefficient is assumed to be given by the modified Stokes relation,
γ_i^ECM(t)= μ R_i(t),
where, μ is the viscosity due to the ECM.
We consider additional damping of cell
movement due to adhesive forces given by,
γ_i^ad = ζ^max ∑_j ∈ NN(i)( l_ij/2 (1+𝐅_i·𝐧_ij/|𝐅_i|) × 1/2(c_i^rec c_j^lig + c_j^rec c_i^lig) )
where, ζ^max is the adhesive friction coefficient and 𝐅_i is as defined in Eq.(<ref>).
Note that the added friction coefficient γ_i^ad is proportional to the cell-cell contact length l_ij, implying that the damping of cell movement due to this friction term is proportional to the number of cells that cell i is in contact with at time t.
(b) Cell proliferation: In our model, the cell number grows due to the imbalance between cell division and apoptosis.
At any point in time, cells are either in the growth (G) phase, i.e, the phase in which the cell area increases over time, or, in the dormant (D) phase, i.e., the phase in which cell area growth is arrested, Fig.<ref>D.
Whether a cell continues in the growth phase or enters the dormant phase is determined by the total force per unit length, due to the neighboring cells, acting on a cell at any given time point.
The total external force per unit length, p_i, that a cell experiences is calculated using,
p_i(t)=∑_j∈ NN(i)| f_ij· n_ij|/l_ij.
If p_i(t) on a cell i at any given time t is smaller than a threshold value, p_c, the cell grows in size, Fig.<ref>E-i.
However, if p_i(t) > p_c, the cell enters dormancy, Fig.<ref>D.
Hence, depending on the ratio of p_i (t)/p_c, cells can switch between the two states of dormancy and area growth.
A cell grows in size by increasing its radius in a stochastic manner sampled from a Gaussian distribution with the mean rate dR_i/dt= (2π R_i)^-1g_a, where g_a is the cell area growth rate given by,
g_a= π R_m^2/2τ.
Here, τ is the cell cycle time and R_m is the mitotic radius at which a cell divides (see Table I).
We assume that a cell divides upon reaching the mitotic radius R_m=5μm, giving rise to two identical daughter cells, each with radius R_d=R_m/√(2), ensuring area conservation, Fig.<ref>E-ii.
Hence, a key time scale in the simulation is τ - the average time it takes for a cell to divide, set to be ∼ 0.27 hours. This is much faster than the typical cell cycle times of eukaryotic cells but comparable to cell cycle times of bacteria <cit.>.
As daughter cells are assigned completely random active velocity orientations, cell division events tend to scramble the orientational order of the cells.
Death of a cell takes place in the simulation leading to a randomly selected cell being removed from the collective, Fig.<ref>F.
The death rate is set to k_d=10^-20 s^-1. Owing to k_d << 1/τ, we are simulating a rapidly growing system of cells.
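The growth-division-death bookkeeping of this paragraph can be sketched as follows. Only R_m, τ and k_d follow the values quoted above; the threshold P_C, the 10% spread of the stochastic growth increment, and the omission of daughter-cell positions are simplifying assumptions of this illustration.

import numpy as np

R_M, TAU, K_D = 5.0, 1000.0, 1e-20     # um, s, 1/s; tau ~ 0.27 h as in the text
G_A = np.pi * R_M**2 / (2.0 * TAU)     # mean areal growth rate g_a
P_C = 1e-4                             # growth-arrest threshold (placeholder)

def proliferation_step(radii, thetas, pressures, dt, rng):
    # Pressure-gated stochastic growth: only cells with p_i < p_c grow,
    # with mean radial rate g_a / (2 pi R_i); the 10% spread is an assumption.
    mean_dR = G_A * dt / (2.0 * np.pi * radii)
    dR = rng.normal(mean_dR, 0.1 * mean_dR)
    radii = np.where(pressures < P_C, radii + dR, radii)
    # Division: mitotic cells become two daughters of radius R_m / sqrt(2)
    # (area conserving); daughters get fresh random velocity orientations.
    mitotic = radii >= R_M
    radii = np.where(mitotic, R_M / np.sqrt(2.0), radii)
    radii = np.concatenate([radii, np.full(mitotic.sum(), R_M / np.sqrt(2.0))])
    thetas = np.where(mitotic, rng.uniform(0.0, 2 * np.pi, thetas.size), thetas)
    thetas = np.concatenate([thetas, rng.uniform(0.0, 2 * np.pi, mitotic.sum())])
    # Apoptosis: each cell is removed with probability k_d * dt (tiny here)
    alive = rng.random(radii.size) > K_D * dt
    return radii[alive], thetas[alive]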
(c) Neighbor velocity alignment and fluctuation in the direction of motion:
The cell position, 𝐫_i(t), is described through the coordinates (x_i(t), y_i(t)). Cell
self-propulsion velocity is, v_i(t)=v_0s_i(t), where, v_0 is the cell migration speed, and s_i(t)=(cosθ_i(t), sinθ_i(t)) is the unit vector representing the direction of cell migration. The angle that the cell makes with the horizontal axis in the laboratory frame is θ.
Each cell in this model is endowed with motility that propels the cell in a given direction with a fixed speed v_0, Fig.<ref>G-i.
The directional alignment, and thus the overall direction of a cell's motion, is hampered by an angular white noise uniformly distributed in range ξ_i ∈ [-π/2, +π/2] with ⟨ξ_i^t⟩=0 and ⟨ξ_i^t ξ_j^t'⟩∼δ_ijδ_tt' and whose strength is given by η, Fig.<ref>G-ii. As the effective noise is given by ηξ_i,
η=0.2 means random fluctuations occur in the entire range [-π/10, +π/10], Fig.<ref>G-iii, whereas, η=0.01 results in random fluctuations in the range [-π/200, +π/200], Fig.<ref>G-iii.
The noise term represents fluctuations in the direction of a cell's motion. In biological systems, such as cells, there are many sources of such noise in the direction or orientation of cell movement.
Stochasticity intrinsic to cellular movement, such as due to limitations in cellular sensing or active shape remodeling during cell migration <cit.> are some examples.
In addition to the forces due to nearest neighbor mechanical interactions, as described in (a), each cell interacts with its neighbors in a manner that aligns its own velocity with that of its neighbors, Fig.<ref>G-iv.
The nearest neighbors which contribute to the velocity re-alignment of cell i are all those cells in the collective that satisfy the necessary condition |r_i(t)-r_j(t)|< R_a, where, | ...| is the vector magnitude, Fig.<ref>G.
We set R_a=10 μ m which limits
velocity re-alignment to occur with neighbors that are directly in contact with a given cell.
We then obtain the average orientation of the velocities of all the cells that satisfy the nearest neighbor criteria and assign that to the velocity orientation of cell i.
The cell velocity re-alignment with its neighbors influences its direction of motility, such that cells in a cluster tend to move in the same direction, Fig.<ref>G-iv.
Contact-based modulation of cell velocity is known to play a role in the collective migration of electrically stimulated cells <cit.>.
The complex dynamics of each cell in the collective involves active motility, area growth, division, and death. In the low Reynolds number limit, the equation of motion is fully described by the following update rules:
r^x_i(t+Δ t) = r^x_i(t)+v_0 cos(θ_i(t))Δ t+ F^x_i(t)/γ_i(t)Δ t
r^y_i(t+Δ t) = r^y_i(t)+v_0 sin(θ_i(t))Δ t+F^y_i(t)/γ_i(t)Δ t
θ_i(t+Δ t) = arg[∑_j ∈ | r_i(t)- r_j(t)|< R_a s_j(t)+ ∑_j∈ NN(i) f_ij]
+ ηξ_i(t) .
Eq.(<ref>-<ref>) describes the evolution of the x and y coordinates of a cell
i, governed by an active component that propels the cell with a speed v_0 in the direction θ_i(t) at time t and the net force on the cell due to its contacting neighbors.
We assume that the cell exerts a self-propulsion force which propels it with a constant effective active speed v_0. We note that an effective friction term is incorporated into the value of v_0.
Eq.(<ref>) describes the orientation dynamics of a cell i, where θ_i(t+Δ t) is the direction in which the cell moves in the next time step.
The net contribution to the direction of a cell's motility comes from, (i) orientation re-alignment, the first term on the right-hand side of Eq.(<ref>) and, (ii) the interaction forces (discussed in (a)), second term on the right-hand side of Eq.(<ref>).
As discussed in (c), the orientation re-alignment of a cell i's velocity is only due to nearest neighbor cells whose center lies within a distance of R_a (here 10 μ m) from the ith cell.
arg[ c] in the first term in Eq.(<ref>) refers to the angle associated with the vector c, if this is expressed in polar coordinates, and the sum is taken over all cells j within a distance of R_a of cell i (including cell i itself).
The net direction in which cell i moves is given by the angle associated with the net vector, which is obtained by vector addition of the velocity vectors of all neighboring cells which lie within the interaction radius R_a of cell i, and the net force F_i on the ith cell.
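A minimal NumPy sketch of one explicit step of the update rules above is given below; R_a = 10 μm and Δt = 5 s follow the text, while v_0, η and the precomputed force and friction arrays are placeholders to be supplied by the interaction routines of paragraph (a).

import numpy as np

def vicsek_step(positions, thetas, forces, gammas, rng,
                v0=0.5, R_a=10.0, eta=0.05, dt=5.0):
    # Overdamped position update with the current orientations
    s = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    positions = positions + (v0 * s + forces / gammas[:, None]) * dt
    # Orientation update: angle of (sum of neighbour headings + net force),
    # plus uniform angular noise of strength eta
    sep = positions[:, None, :] - positions[None, :, :]
    near = np.linalg.norm(sep, axis=-1) < R_a        # includes the cell itself
    c = near.astype(float) @ s + forces
    xi = rng.uniform(-np.pi / 2.0, np.pi / 2.0, thetas.size)
    thetas = np.arctan2(c[:, 1], c[:, 0]) + eta * xi
    return positions, thetas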
Initial Conditions: We initiated the simulations by generating 200 non-overlapping cells, randomly distributed in a circular region within a 2D spatial domain of size 250 μ m × 250 μ m. For all future time steps, we consider an open boundary condition.
Each cell is assigned an initial orientation of the active velocity, randomly distributed in the domain [0,2π].
Fluctuations around the direction of a cell's motion is captured by a noise term, which is randomly distributed with uniform probability in the range [-π/2, π/2].
The strength of the fluctuations is denoted by η (discussed in the previous section (c)).
In the present study, all the parameters are fixed except the noise
strength of velocity orientation switching η, which we vary from 0.01 to 0.2. The simulated cell aggregate is evolved to ∼ 10τ or
about 10,000s. Relevant parameters are shown in Table I. A fixed timestep of 5 s was used. We performed a numerical consistency check by
ensuring our results are invariant for a smaller timestep of 2.5s (see Supplemental Information SI-II). The
particle coordinates were recorded and used to calculate the dynamical observables relevant to the present study.
§ NOISE IN THE CELL MOTILITY DIRECTION CONTROLS THE SPATIAL DISTRIBUTION OF THE CELL COLLECTIVE
We first sought to understand how noise in the cell motility direction
determines the spatial distribution
of a growing cell collective.
The cell spatial distribution that we obtain at t= 10,000 s shows a strong dependence on the noise strength η, Fig. <ref>A, C.
For low noise strengths (η=0.01) cells are organized into multiple clusters that are spatially distributed in a roughly circular, ring-like pattern
Fig.<ref>A (see Supplemental Information SI-III for simulation movies).
The cells cluster into small groups mostly along the edge of the ring-like domain.
The domain interior is mostly devoid of cells, Fig.<ref>A.
By focusing on a single cluster (blue box in Fig.<ref>A), we observe that the constituent cells display highly coordinated motion, wherein each cell moves in roughly the same direction pointing radially outward, as seen from the blue arrows in Fig.<ref>A(inset), B.
At higher noise strength of η=0.2 the cell spatial distribution
changes from the ring-like structure to a diffuse morphology, characterized by randomized spatial distribution of cells, Fig.<ref>C (see Supplemental Information SI-III for simulation movie).
The cells organize into a large number of clusters of varying sizes scattered throughout the entire spatial domain occupied by the cells, Fig.<ref>C.
Individual cells within each cluster appear to move in a less coordinated manner, as compared to the case of low noise strength, Fig.<ref>C,D.
To better visualize the differences in the cell spatial distribution and
the cluster sizes at varying η, we represented the cell positional information using a density plot.
The entire spatial domain, in both x and y direction, is divided into 50× 50 bins of equal area.
The total number of cells within each bin is color-coded, with dark blue representing low number of cells and dark red representing the highest number of cells.
To generate the cell number density heat map, we combined 3 separate simulation results for each value of the noise strength, η=0.01, 0.05, 0.2, Fig.<ref>E-G.
The density plots show clearly the strong influence of the noise strength on the cell spatial distribution.
For low noise strength, η=0.01, the whole collective is spatially organized into a thin circular ring-like structure, with patches of high cell density visible at the border. The interior of the domain is characterized by low cell number density, Fig. <ref>E.
Cells organize themselves into coherently moving clusters with some of the larger clusters containing about 40-50 cells as seen in Fig. <ref>E.
At higher noise strengths of η=0.05 and 0.2, high cell density patches shift from being confined to the border of the ring-like pattern to its interior.
The number of cells within the high cell density patches decreases in a noise strength dependent manner. While 40-50 cells make up the high-density patches for η=0.01, ∼ 30 cells are visible for η=0.2.
The cell spatial distribution we observe is not a transient feature of the model.
Long-time simulations (upto t=25,000 s), for η=0.01 and η=0.2 (see Supplemental Information SI-IV) confirm that the cell spatial distribution
is preserved even after very long times.
We, therefore, conclude that the noise-dependent pattern of cell collective behavior is a robust feature of expanding cell collectives.
The velocity vector alignment of individual cells within a cluster, seen in
Fig.<ref>B and D, are indicative of collective behavior
seen in non-proliferating
self-propelled particles <cit.>.
To better understand the collective motion of individual cells, we measured the order in the motion of the entire cell collective (Fig.<ref>H).
We calculate the order parameter on the basis of position-dependent polarization of the cell velocity
by defining a vector pointing
from the center of mass of the cell collective to the individual cell position c_i = r_i - R_CM, where R_CM(t)=(1/N)∑_i r_i is the center of mass of the whole collective at time t.
c_i is directed outwards from the center of mass of the entire cell collective to the cell's position.
The angle ϕ_i between a cell's velocity vector, v_i, and its position vector with respect to the center of mass of the cell collective, c_i, can be calculated from
cos(ϕ)_i = c_i · v_i/(| c_i|| v_i|) (see Fig.<ref>I Inset).
The orientation order parameter for the whole cell collective at any given time t is defined as,
Φ(t) = 1/N(t)∑_i cos(ϕ(t))_i
where, N is the total number of cells at time t.
Φ can vary between 1 and 0 with
Φ=1 implying that the velocity orientation v_i of each cell in the whole cell collective is aligned with respect to the position vector c_i.
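Operationally, Φ(t) for one snapshot can be evaluated as in the following sketch (assuming nonzero velocity vectors):

import numpy as np

def order_parameter(positions, velocities):
    # Mean cos(phi_i) between each cell's velocity and its position vector
    # relative to the collective's centre of mass.
    c = positions - positions.mean(axis=0)
    cosphi = np.einsum('ij,ij->i', c, velocities) / (
        np.linalg.norm(c, axis=1) * np.linalg.norm(velocities, axis=1))
    return float(cosphi.mean())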
The time-dependent behavior of Φ(t) shows an initial almost linear increase over time which then saturates at a constant value at later times, Fig.<ref>H.
For very low noise strength of η=0.01, the order parameter saturates at ∼1, indicating a highly ordered outward cell motion.
This is consistent with our observation of
highly coherent and ordered cell movement such that cell velocity orientation s_i is aligned with the vector pointing outward towards the periphery of the cell collective, c_i. With increasing noise strength, the value of the order parameter progressively gets lower, indicating an increasingly disordered velocity direction.
The orientational order parameter at the final time point is shown in Fig.<ref>I, clearly decreasing with higher noise strengths.
Our result, showing the dependence of the order parameter on the noise strength,
also delineates why we obtain markedly distinct spatial distribution of cell collectives. While cells move consistently outwards at low noise
strengths leading to the emergence of a ring-like pattern, higher noise strengths result in randomized cell movement orientations that lead to a more diffuse spatial distribution of cells.
In general, our results map out the emergent spatial distribution of proliferating cell collectives.
§ NOISE IN THE CELL MOTILITY DIRECTION DETERMINES PROLIFERATION AND THE SPREAD OF CELL COLLECTIVE
Having observed angular noise-dependent differences in the spatial
distribution and the orientational order of cell collectives, we next ventured to ask how the noise influences cell division and the growth of the cell collective.
As spatial constraints can regulate cell cycle progression during tissue expansion <cit.>, we anticipate that noise-induced differences
in the cell spatial distribution will have an impact on the ability of cells to divide. Particularly, given that we incorporate mechanical feedback on cell division through the force term, noise-induced differences in local cell spatial arrangements could determine the ability of cells to divide.
To understand how noise in the cell velocity orientation affects the
proliferation of the cell collective, we looked at the temporal behavior of the total cell number and total spread area of the cell collective, for four different values of the noise strengths η=0.01, 0.05, 0.1, 0.2.
We quantified the spatial spread of migrating cell collective by calculating the radius of gyration squared,
R_g^2(t)=⟨1/NΣ_i=1^N [ r_i(t)- R_CM(t)]^2 ⟩.
The bracket ⟨ ... ⟩ denotes the ensemble average over 3 different simulation runs at each value of η.
The average squared distance of all the cells from the center of mass is an indicator of the spatial spread or invasion of a cell collective in two dimensions.
Small R_g^2 values indicate a smaller spatial spread of cells, with cells localized in close proximity to the center of mass.
In contrast, higher values of R_g^2 denote a wider spatial spread due to cells that are located farther away from the center of mass.
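For one snapshot, R_g^2 can be computed as in the short sketch below; the ensemble average over independent runs is taken outside this function.

import numpy as np

def radius_of_gyration_sq(positions):
    # Mean squared distance of all cells from the collective's centre of mass
    c = positions - positions.mean(axis=0)
    return float(np.mean(np.sum(c**2, axis=1)))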
Both the total number of cells, N, and the total spatial spread of cells, R_g^2, steadily increase with time, Fig. <ref>A,B for a given value of noise strength.
In Fig. <ref>C,D, we show the N and R_g^2 at the final time point.
Surprisingly, at late time points N and R_g^2 show opposite trends as a function of the noise strength η, Fig. <ref>C,D.
The total cell number increases as the noise strength increases (see Fig. <ref>C), implying that stronger fluctuations in the direction of cell movement promote cell proliferation. At
t=10,000s, there are ∼3400 cells for η=0.2, while, N∼1900 at the lower noise strength (η=0.01), which is significantly lower compared to the case of η=0.2, Fig. <ref>C.
In contrast to the total number of cells, the total spatial spread of the cell collective showed an inverse dependence on the noise strength η. The spatial spread of the cell collective increases faster over time at lower noise strengths.
R_g^2 is an order of magnitude smaller at η=0.2 as compared to the lower noise strength of η=0.01, suggesting that as the noise strength increases the cell collective exhibit a more compact spatial distribution (see Fig. <ref>D).
The global quantities N and R_g^2 describe the time-dependent behavior of the whole cell collective and how it is influenced by noise in the cell
motion direction. Taken together with the analysis presented in the preceding section, our results show that increasing the noise strength disrupts cell-cell
velocity alignment, as reflected in the lower order parameter, but at the same time promotes cell proliferation, as reflected in the higher number of cells. On the other hand, lower noise strength facilitates cell-cell velocity alignment and
suppresses cell proliferation.
As collective behavior depends strongly on the number density of actively migrating agents <cit.>,
we next sought to understand how cell number density is affected by noise in the direction of cell motility.
Given that N is not fixed and that we impose an open boundary condition, number density is neither fixed nor clearly defined, as in the case of Vicsek model, but evolves over time.
Nevertheless, we can estimate the cell number density or the overall spatial packing of the cells using ρ(t)=N(t)/R_g(t)^2, where ρ is the cell density.
Due to the combined effect of cell proliferation and cell motility, both the total number of cells N(t) and the spatial spread R_g^2, evolve over time.
Consequently, cell number density exhibits a highly dynamic time-dependent behavior.
ρ(t) initially increases sharply for each value of noise strength, η=0.01, 0.05, 0.1 and 0.2, as shown in the time regime before the dashed line in Fig. <ref>E.
Following the initial rise, the temporal profile of the cell number density for noise strengths η=0.01, 0.05, 0.1 is markedly different from that for η=0.2, Fig. <ref>E.
For η=0.01, 0.05, 0.1, the cell number density decreases over time after the initial transient increase.
Whereas for η=0.2, the cell density continues to increase with time, although at a lower rate.
At longer times, cell number density is comparatively low for weaker noise strengths.
By singling out the cell number density at the final time point and plotting it as a function of the noise strength, we show that the final cell density rapidly increases with the noise strength Fig. <ref>F.
This dependence is rather surprising given our earlier results for the total number of cells as a function of noise strength. We expect higher proliferation to correspond to lower density, due to the role of cell contact force-dependent feedback on proliferation (p_i (t)) in our model.
When cells are tightly packed in space, we expect the compressing forces on cells from their neighbors to be higher <cit.>. This would hamper cell area growth, eventually leading to lower cell division events due to the force-dependent mechanical feedback term p_c.
Contrary to our expectations, high noise strength leads to a higher cell density and the cell collective has yet more number of cells (see Fig. <ref>A,C). To investigate this further, we turn to a more detailed quantification of the cell spatial arrangement on the basis of clustering analysis.
§ NOISE INCREASES THE NUMBER OF ISOLATED CELLS AND FACILITATES ENHANCED PROLIFERATION
To understand this rather counter-intuitive result of higher cell proliferation at higher cell number density, we used a spatial clustering algorithm DBSCAN (density-based spatial clustering of applications with noise) <cit.> to map out the structure of cell clusters within the collective.
The idea behind performing cluster analysis is that feedback due to the contact force from overlapping cells inhibits cell growth and hamper cell division. As such, single cells and cells with very few overlapping neighbors will be characterized by the highest proliferative capability.
On the other hand, we expect fewer cell division events when cells are part of a cluster with larger number of overlapping cells.
Therefore, we anticipate that the size of the cell clusters (i.e. the number of cells in a cluster) might hold the key to understanding why cells in a collective with higher global cell number density proliferate at a higher rate.
DBSCAN is a powerful tool for class identification of clusters in large spatial databases with noise.
For cluster identification and classification, DBSCAN requires two input parameters, namely, the maximum cell-cell distance ϵ [μm] to be considered as a cell's neighbor, and the minimum number of neighboring cells, n_min, that qualify as a cluster.
The DBSCAN algorithm initially labels each cell which has at least n_min number of cells within a distance of ϵ [μm] from its center as a core cell.
Any cell that has fewer than n_min number of cells within a distance of ϵ [μm] from its center is labeled as border cell.
All those cells which have no other cell in their neighborhood within a distance of ϵ [μm] from their center are labeled as single cells.
The algorithm then randomly picks a core cell and assigns it a cluster index.
The cluster is expanded sequentially, by adding cells which are in the neighborhood and within the distance of ϵ [μm] of the randomly picked core cell. In an iterative manner, DBSCAN algorithm labels each cell as being part of one of the clusters, with each cluster assigned a unique cluster index.
Since only overlapping cells exert growth inhibiting force on each other, we focused on identifying cell clusters of overlapping cells. Therefore, and since the typical cell radii in our model is 5μm, we chose ϵ=9μm, which means that cell-center-to-cell-center distance between any two cells within a cluster is 9μm or less.
This value of ϵ ensures that only overlapping cells form a cluster.
In order to cover the full range of cluster sizes we also set n_min=2.
Using MATLAB's in-built function for DBSCAN <cit.>, with the aforementioned values for the two input parameters (ϵ and n_min), we identified cell clusters from the spatial coordinates of individual cells at the final simulation time point and for different noise strengths η, Figs. <ref> A,B. Each individual cell cluster in Figs. <ref>A,B is represented in a different color.
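An equivalent census can be obtained in Python with scikit-learn's DBSCAN, which shares the ϵ/minimum-points parametrisation of the MATLAB implementation used here (with the caveat that scikit-learn's min_samples counts the query point itself); this is an illustrative sketch, not the analysis code used for the figures.

import numpy as np
from sklearn.cluster import DBSCAN

def cluster_census(positions):
    # eps = 9 um and min_samples = 2 so that only overlapping cells cluster;
    # label -1 marks cells with no neighbour within eps, i.e. isolated cells.
    labels = DBSCAN(eps=9.0, min_samples=2).fit(positions).labels_
    n_clusters = int(labels.max()) + 1
    n_isolated = int(np.count_nonzero(labels == -1))
    return n_clusters, n_isolated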
DBSCAN is a robust clustering method, allowing for the quantification of additional features of individual cell clusters.
Based on the cluster identity of each cell, we can quantify the center of mass and the radius of gyration of individual cell clusters, as shown using circles of different radii in Fig. <ref>(C).
Our analysis shows that the entire cell collective is spatially organized into cell clusters of different sizes i.e. cell clusters are composed of varying cell numbers. Since the total number of cells varies with the noise strength, in order to perform cluster number comparison across different values of noise strengths, we normalized the total cell cluster number at a given noise strength by the total number of cells at that noise strength.
The number of cell clusters at the final timepoint increases with the noise strength η, Fig.<ref> D.
The slight dip in the cell cluster number at the highest noise strength of η=0.2 is due to a lower total number of clusters at η=0.2 as compared to η=0.1, which indicates that clusters tend to disintegrate into isolated or single cells when the value of η is increased from 0.1 to 0.2.
To understand higher proliferation in cell collective with higher cell number density we turned our attention to isolated cells and cell clusters with less than 3 cells.
We found that the total number of both isolated cells and cell clusters with fewer than 3 cells increases with the noise strength η, Fig.<ref> E-F.
These results are robust with respect to the simulation time, see Supplemental Information SI-V for simulations run for much longer time t=25,000 s.
A higher number of isolated cells implies that more cells can proliferate, without the inhibitory effect of mechanical feedback on cell growth due to cell contact-dependent forces. This scenario is more conducive to cell division, allowing the cell collective to freely grow and divide.
Our DBSCAN-based cell cluster analysis reveals that even though the cell number density is comparatively higher at higher noise strengths, there are large numbers of isolated cells and clusters with fewer cell numbers. This leads to enhanced proliferation of individual cells.
In an expanding cell collective, cells form clusters as a result of either cell-cell adhesion and/or nearest neighbor velocity alignment.
As the noise strength increases, the tendency for these clusters to disintegrate or breakup increases, due to rapid fluctuations in the direction of migration. The isolated or smaller size clusters then proliferate at a higher rate, thereby increasing the total cell number even
though the overall number density of cells is higher at higher noise strengths. Hence, locally, due to the presence of more cells with fewer
neighbors, cells are able to grow and divide relatively unhindered by mechanical feedback. This accounts for the puzzling result where higher
overall cell density corresponds to higher cell proliferation.
§ DISCUSSION
The migratory pattern of motile cells is diverse and depends on factors such as whether it is a collection of isolated single cells moving in a uniform direction or a collection of adhesive cells which are physically in contact with each other <cit.>.
Here, we present an off-lattice agent-based computational modeling framework for an expanding 2D cell collective.
By focusing on the influence of noise in the direction of a cell's motion,
we show that noise strength influences: (i) the migratory pattern and spatial spread or invasion, and (ii) cell density-dependent cell proliferation of cell collectives.
While the seminal work of Vicsek and co-workers
has been in many ways foundational to computational modeling-based studies of cell migration <cit.>, few existing models of cell migration consider cell proliferation. Yet, the ability to grow and divide is a fundamental property of many biological systems.
Our model considers individual cells as active agents that can grow and divide, and whose movement is influenced by their interactions with other cells and stochastic switching in the direction of migration.
We take into account various biologically relevant inter-cellular interactions, such as cell elastic repulsion, and cell adhesion <cit.>. Adhesive interaction between cells, of the type prevalent in confluent tissues, has been taken into account in the past models <cit.>.
The model also includes an additional nearest-neighbor interaction through which cells tend to align the direction of their motion with the average direction of motion of all their neighbors <cit.>.
Given the recent experimental verification that cell proliferation is pressure-dependent <cit.>, mechanical feedback on proliferation is an important component of our model as the cell area growth depends on the net force acting on the cell from its contacting neighbors through the p_c term.
Hence, our model is an important extension of the classical Vicsek model, with self-propelled particles that can undergo growth, birth, and death.
We find that noise strength strongly influences the migratory pattern of cells in the collective.
At low noise strengths η=0.01 and at long times, the cells are sparsely distributed in a ring-like pattern.
Within this ring, the cells form clusters of different sizes.
Cells in each of these clusters move in a highly ordered manner, with the orientation of cell velocity aligned in the direction away from the center.
We quantified this ordered behavior of cell migration in the collective using an order parameter whose value for η=0.01 is close to 1, indicating a highly ordered motion of cells.
Cell division events in our model scramble the local order of the cell collective
as velocity vectors of the daughter cells are assigned random orientations upon division. However, even with these scrambling events present, we notice that the cell collective displays a highly ordered motion at low noise strengths.
At intermediate noise strengths (η=0.05-0.1), the spatial distribution of migrating cells still shows a ring-like pattern. Although higher density of cells is still confined to the outer ring, clusters and individual cells are to be found in the interior of this domain as well.
The orientation order parameter saturates to values much lower than 1 at long times, indicating the onset of a disordered migratory phase.
The lower value of the order parameter is due to the formation of smaller cell clusters that move in random directions.
As the noise strength is further increased to the highest value considered in this study (η=0.2), we observe a clear change in the migratory pattern and spatial arrangement of cells.
In this case, higher cell density is observed in the interior of the spatial domain over which cells are distributed.
The cell collective as a whole is split into multiple smaller clusters, with each cluster moving in random directions.
The order parameter for the cell collective for such high noise strengths approaches 0, indicating an almost total loss of orientational order in cell motion. Our results also show that noise strength not only influences the overall spatial pattern but the spread of
the cell collective as well, which is proportional to the total area covered by the cell collective. The largest spatial spread, compared to the size of the initial distribution of the collective, occurs for very low noise strengths at η=0.01. In this scenario, cells migrate as a propagating front leading to the emergence of a ring-like pattern. As the noise strength is increased, the spatial spread of the collective is strongly restricted.
An unexpected result of our study is that noise strength influences cell
proliferation. Although the total number of cells increases over time for all values
of noise strength, the trend in proliferation is strongly dependent on the noise strength.
The total number of cells is almost double the number of cells at the final time point for high noise strength η=0.2, as compared to η=0.01.
Combined with our results showing the effect of noise strength on the spatial spread of the cell collective, we find that cell number density is a highly dynamic
quantity that increases with noise strength.
Taken together, we show that as the noise strength increases, the density of the cell collective increases, whereas the orientational order decreases.
Given the mechanical feedback that limits proliferation due to cell-cell overlap, the increase of cell number with a higher density is a surprising and counter-intuitive result.
While the overall density indicates that cells should be more tightly packed at higher noise strengths, our DBSCAN-based cluster analysis shows that the local spatial structure is contrary to what is expected.
At higher noise strengths, not only do cells form more clusters, but there is a larger number of isolated cells. Isolated cells are ideal sources of proliferation in a collective, characterized by limited mechanical feedback on proliferation from neighboring cells. At lower noise strengths cell clusters contain a larger number of overlapping cells which thus inhibits cell growth and division. In this scenario, cells are localized to the periphery of a ring-like domain while its interior is mostly devoid of cells, leading to the overall density being lower.
Therefore, even though cell number density is greater at higher noise strengths, there is a larger number of proliferating cells due to the presence of smaller clusters and a greater number of individual cells that are not part of a cluster.
In conclusion, our study demonstrates that angular fluctuations in cell motility
direction can strongly determine the spatial distribution of growing cell collectives.
Our computational model provides a framework for studying the migration of cells in 2D growing cell collectives.
Our model combines cell velocity re-alignment, as introduced in the Vicsek model, with active growth and cell division.
This makes our work highly relevant in studying the migration behavior of biological cell collectives, in which cell migration occurs together with cell proliferation.
Our results imply that there are more, yet unexplained, dynamic behaviors that may emerge from investigating mechanical feedback on proliferation in a system of self-propelled particles undergoing collective motion.
§ ACKNOWLEDGMENTS
A.M.K. acknowledges funding from startup grants.
The authors acknowledge the support of Augusta University High Performance Computing Services (AUHPCS) for providing computational resources contributing to the results presented in this publication.
|
http://arxiv.org/abs/2307.07546v1 | 20230714180000 | The Effect of Thermal Torques on AGN Disc Migration Traps and Gravitational Wave Populations | [
"Evgeni Grishin",
"Shmuel Gilbaum",
"Nicholas C. Stone"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.GA"
] |
|
http://arxiv.org/abs/2307.04478v1 | 20230710110110 | A closed form exact formulation of the spectral representation of a second-order symmetric tensor and of its derivatives | [
"Andrea Panteghini"
] | cs.CE | [
"cs.CE",
"cs.NA",
"math.NA"
] |
A closed form exact formulation of the spectral representation of a second-order symmetric tensor and of its derivatives
Andrea Panteghini
August 12, 2023
========================================================================================================================
The spectral decomposition of a symmetric, second-order tensor is widely adopted in many fields of Computational Mechanics. As an example, in elasto-plasticity under large strain and rotations, given the Cauchy deformation tensor, it is a fundamental step to compute the logarithmic strain tensor.
Recently, this approach has been also adopted in small-strain isotropic plasticity to reconstruct the stress tensor as a function of its eigenvalues, allowing the formulation of predictor-corrector return algorithms in the invariants space. These algorithms not only reduce the number of unknowns at the constitutive level, but also allow the correct handling of stress states in which the plastic normals are undefined, thus ensuring a better convergence with respect to the standard approach.
While the eigenvalues of a symmetric, second-order tensor can be simply computed as a function of the tensor invariants, the computation of its eigenbasis can be more difficult, especially when two or more eigenvalues are coincident.
Moreover, when a Newton-Rhapson algorithm is adopted to solve nonlinear problems in Computational Mechanics, also the tensorial derivatives of the eigenbasis, whose computation is still more complicate, are required to assemble the tangent matrix.
A simple and comprehensive method is presented, which can be adopted to compute a closed-form spectral representation of a second-order tensor, as well as its derivatives with respect to the tensor itself, allowing a simpler implementation of the spectral decomposition of a tensor in Computational Mechanics applications.
§ INTRODUCTION
This paper presents important developments regarding the eigenvalues and eigenvectors of a symmetric second-order tensor and the determination of the associated basis required for its spectral representation.
The results here presented apply to situations involving isotropic scalar-valued functions and isotropic tensor-valued functions of a symmetric second-order tensor.
For instance, the finding of this article are useful for the integration of constitutive laws of isotropic materials and in finite deformations (e.g., to compute the the logarithmic strain tensor from the displacement gradient).
The numerical integration of isotropic elasto-plastic constitutive laws can be more efficiently carried out by formulating the return algorithms in terms of eigenvalues of the elastic strain tensor (e.g. Borja et al. <cit.> and de Souza Neto et al. <cit.>), or in the invariants elastic strain space <cit.>, <cit.>. Differently from the standard approach <cit.>, an invariant-based return algorithm allows the correct handling of stress states in which the plastic normals are undefined.
These two integration algorithms require the spectral representation of the stress, as well as the determination of its derivatives to assemble the stiffness matrix. Unfortunately, their determination using the approach described in the literature is very cumbersome (see e.g., De Souza Neto et al. <cit.>, Borja et al. <cit.>), particularly when two or three eigenvalues coincide. This key aspect certainly makes these invariant-based integration algorithms, however efficient, less attractive with respect to standard return algorithms formulated in terms of full tensorial components.
Regarding applications in large-strain theories, to avoid the complexity of the standard procedure, commercial codes (e.g. SIMULIA Abaqus <cit.>) often employ approximate formulations to numerically integrate the logarithmic strain in finite deformation analyses. Some Authors suggest, for specific isotropic functions, to resort to their numerical approximation based on series expansion (e.g. Ortiz et al. <cit.>, de Souza Neto <cit.>, Hudobivnik et al. <cit.>).
However, it should be noted that these series-based procedures, even if simpler and numerically efficient, can hardly be adopted when the isotropic functions are not known explicitly (i.e., for instance, in the case of the integration of the isotropic elastoplastic materials described above).
The writer has later discovered that Ogden <cit.> incidentally describes, in an exercise contained in his book, a very important result, which, to the best of his knowledge, seems to have been missed by the vast majority of the research community. He suggests a very simple method for retrieving a closed-form expression for the basis of the spectral decomposition of a second-order tensor which does not require the computation of the originating eigenvectors.
This result has later been reported also by Miehe <cit.>, who however states that "the formulation above is restricted to the case of distinct eigenvalues of the tensor". Moreover, the same Author <cit.> points out that such an approach requires the inversion of the second-order tensor, which severely restricts the applicability of the method. De Souza Neto et al. <cit.> describe a very cumbersome method to evaluate both the basis and their spin. They also state that "...a methodology similar to that adopted here was introduced by Miehe (1993, 1998a), where a particularly compact representation for the function derivative is used. However, the compact representation allows only the computation of the derivative at invertible arguments and cannot be used...".
In this paper it is mathematically shown that indeed the basis required for the spectral representation of a symmetric second-order tensor can be derived without the computationally expensive evaluation of the associated eigenvectors. It is also shown that this can also be directly derived from the secular (or characteristic) equation of the tensor, without any assumptions about the invertibility of the second-order tensor. Most importantly it is clarified how the result can be particularized to the case of two and three coinciding eigenvalues, hence removing the strong limitation of the approach described by Miehe <cit.>, <cit.> which de facto prevents the application of this extremely useful result. This paper also provides the tensor derivatives of the basis, i.e. its spin.
Moreover, it is presented a simple and generic approach to compute the spectral representation of isotropic tensor-valued functions, as well as their derivatives with respect to the tensor variable itself. The proposed procedures can be practically adopted in computational mechanics since all limitations of the procedures available in the literature have been removed (the approach of De Souza Neto et al. <cit.> does not have such limitations but is laborious to implement). Finally two applications are presented for isotropic elasto-plasticity and for the evaluation of the logarithmic strain tensor in finite deformations.
§ EIGENVALUES, EIGENVECTORS AND SPECTRAL REPRESENTATION OF A SYMMETRIC, SECOND-ORDER TENSOR
Given the symmetric, second-order tensor T, its (ordered) eigenvalues λ_i and their corresponding eigenvectors n_i are obtained
by solving the eigenvalue-eigenvector problem <cit.>:
( T - λ I ) n = 0 n^T n = 1
being I the second-order identity tensor.
The principal components λ_i can be obtained by solving the third-order scalar equation in λ, namely the secular equation:
λ^3 -I_1 λ^2 + I_2 λ - I_3 = 0
The coefficients
I_1 = tr T
I_2 = 1/2(I_1^2 - T : T^T)
I_3 = det(T)
are the invariants of , since their values do not depend on the reference system in which is expressed. The three ordered solutions of Eq. (<ref>) are the eigenvalues of the problem described in Eq. (<ref>). As explained in <cit.>, they can be computed in closed form as:
λ_I=I_1/3+2/√(3)√(J_2)sin(θ+ 2/3π)
λ_II=I_1/3+2/√(3)√(J_2)sin(θ)
λ_III=I_1/3+2/√(3)√(J_2)sin(θ- 2/3π)
where
J_2 = 1/2 s : s
J_3 = det(s)
are the invariants of the second-order, deviatoric symmetric tensor s = T - I_1/3 I, and the Lode's angle θ is defined as
θ = 1/3arcsin( - √(27)/2J_3/√(J_2^3))
where -π/6 ≤θ≤π/6.
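These closed-form expressions translate directly into code. The following NumPy sketch returns the ordered eigenvalues λ_I ≥ λ_II ≥ λ_III; the tolerance guarding the J_2 = 0 (triple-eigenvalue) case is an illustrative assumption.

import numpy as np

def principal_values(T):
    # Ordered eigenvalues of a symmetric 3x3 tensor from its invariants
    # and the Lode angle, following the closed-form expressions above.
    I1 = np.trace(T)
    s = T - I1 / 3.0 * np.eye(3)                  # deviatoric part
    J2 = 0.5 * np.tensordot(s, s)                 # double contraction s:s / 2
    if J2 < 1e-14:                                # triple eigenvalue, case (iii)
        return np.full(3, I1 / 3.0)
    J3 = np.linalg.det(s)
    arg = np.clip(-np.sqrt(27.0) / 2.0 * J3 / J2**1.5, -1.0, 1.0)
    theta = np.arcsin(arg) / 3.0                  # Lode angle, |theta| <= pi/6
    beta = theta + np.array([2.0, 0.0, -2.0]) * np.pi / 3.0
    return I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(beta)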
It is well known that the second-order symmetric tensor T can be expressed as a function of its eigenvalues λ_i and the corresponding eigenvectors n_i by resorting to the spectral theorem[
Let consider that, unless otherwise specified, it is always intended
∑ f_i = ∑_i=I,II,III f_i
]:
T = ∑λ_i n_i ⊗ n_i = ∑λ_i N_i
where N_i is the eigenbasis of T related to λ_i.
§ CLOSED-FORM EXPRESSION FOR THE EIGENBASIS OF T
We will consider three cases, as a function of the multiplicity of the eigenvalues λ_i:
* λ_I> λ_II >λ_III
* λ_I> λ_II=λ_III or
λ_I= λ_II>λ_III
* λ_I= λ_II=λ_III
Let us observe that the number of non-coincident eigenvalues, i.e., the eigenvalue multiplicity, can be simply determined from the invariants of T. Hence,
case (i) occurs when J_2≠0 and θ≠±π/6, case (ii) implies J_2≠0 and θ=±π/6, and finally case (iii) requires that J_2=0 (while θ is undefined).
§.§.§ A general property of the eigenbasis N_i.
We will initially prove that it results:
∑N_i = I
Let us consider that the i-th eigenvalue and eigenvector of T will satisfy Eq. (<ref>), i.e.
T n_i = λ_i n_i
Since n_i is a unit vector, it results
n_i^T T n_i = T : ( n_i ⊗ n_i ) = λ_i ( n_i^T n_i ) = λ_i
one can compute the first invariant I_1 in the principal coordinate system as
I_1 = tr T = T : I = ∑λ_i = T : ∑N_i
From this equation it must result
T : I = T : ∑N_i
This condition yields
∑N_i = I
§.§.§ Case (i): λ_I> λ_II >λ_III.
One will prove that the spectral theorem
T = ∑λ_i (n_i ⊗ n_i ) = ∑λ_i N_i
can be written as
T = ∑λ_i ∂λ_i/∂T
i.e., we will prove that it simply results[It should be noted that, to the best of the Author's knowledge, this result appears for the first time, without any demonstration or explanation, in Ogden's book <cit.>.
It has been used by Miehe <cit.>, <cit.>, but, as explained in the Introduction, due to the limitations of his approach, it seems it is not commonly adopted in Computational Mechanics.
]:
N_i = ∂λ_i/∂T
By considering the symmetry of T, the derivatives of the invariants I_1, I_2 and I_3, defined by Eq. (<ref>), (<ref>) and (<ref>), with respect to T are:
∂I_1/∂T = I
∂I_2/∂T = I_1 I - T
∂I_3/∂T = I_3 T^-1 = adj T
where adj T denotes the adjugate matrix of T.
By substituting the property (<ref>) and the spectral theorem (<ref>) into Eq. (<ref>) and (<ref>) respectively, one obtains:
∂I_1/∂T = ∑N_i
∂I_2/∂T = I_1 I - ∑λ_i N_i
Finally, by resorting to the spectral theorem (<ref>), one can write (<ref>) as
[
Let us observe that, by multiplying Eq. (<ref>) by adj T = I_3 T^-1, one obtains
I_3 T^-1 T n = λ I_3 T^-1 n
which gives
adj T n = I_3/λ n
Hence, the eigenvectors n of T and adj T are coincident, whilst the i-th eigenvalue μ_i of adj T associated to n_i can be computed from λ_i as:
μ_i = I_3/λ_i = (λ_jλ_k)_i≠j≠k
The spectral representation of adj T is then:
adj T = ∑( λ_jλ_k N_i )_i≠j≠k
]
∂I_3/∂T = ∑(λ_jλ_k N_i )_i≠j≠k
Let us now consider that the values of I_1, I_2 and I_3 are independent of the reference system, hence one can compute them also in terms of principal components. It results:
I_1 = λ_I+λ_II+λ_III
I_2 = λ_Iλ_II+ λ_Iλ_III +λ_IIλ_III
I_3 = λ_Iλ_IIλ_III
The derivatives of the invariants I_1, I_2 and I_3 can also be computed by differentiating these last three expressions, observing that λ_i=λ_i(T). It results:
∂I_1/∂T = ∂λ_I/∂T + ∂λ_II/∂T + ∂λ_III/∂T = ∑ ∂λ_i/∂T
∂I_2/∂T = I_1 ∑ ∂λ_i/∂T - ∑λ_i ∂λ_i/∂T = I_1 I - ∑λ_i ∂λ_i/∂T
∂I_3/∂T = λ_IIλ_III ∂λ_I/∂T
+
λ_Iλ_III ∂λ_II/∂T
+
λ_Iλ_II ∂λ_III/∂T
= ∑(λ_jλ_k ∂λ_i/∂T)_i≠j≠k
One can now compute the eigenbasis N_i as a function of the derivatives of the eigenvalues λ_i with respect to T by solving the linear system of equations obtained by equating Eq. (<ref>), (<ref>) and (<ref>) with Eq. (<ref>), (<ref>) and (<ref>) respectively.
One obtains
{ ∑N_i = ∑ ∂λ_i/∂T
∑λ_i N_i = ∑λ_i ∂λ_i/∂T
∑(λ_jλ_k N_i )_i≠j≠k = ∑(λ_jλ_k ∂λ_i/∂T)_i≠j≠k .
which, under the assumption λ_I > λ_II > λ_III[
Let us observe that the determinant of the matrix of the system (<ref>) reads:
det[ 1 1 1; λ_I λ_II λ_III; λ_IIλ_III λ_Iλ_III λ_Iλ_II ]
= - (λ_I-λ_II)(λ_I-λ_III)(λ_II-λ_III)
It is always nonzero if λ_I > λ_II > λ_III.
] simply gives
N_i = ∂λ_i/∂T
so that the spectral theorem (<ref>) can be re-written as:
T = ∑λ_i (n_i ⊗ n_i ) = ∑λ_i ∂λ_i/∂T
§.§.§ Case (ii): λ_I> λ_II=λ_III or
λ_I= λ_II>λ_III.
If two or more eigenvalues of T are coincident, then the linear system (<ref>) will not admit a unique solution.
Let λ̂ be the non-repeated eigenvalue of T and N̂ the corresponding eigenbasis.
I_1 = λ̂+ 2 λ_II
so that, it results:
λ_II= 1/2(I_1 - λ̂)
Eq. (<ref>) can be rewritten as:
N̂ + 2 N_II = I
hence, it results:
N_II = 1/2(I -N̂)
The spectral theorem can be rewritten as:
=λ̂N̂ + 1/2(I_1-λ̂) (I -N̂)
=
3/2(λ̂- I_1/3) N̂+ 1/2( I_1- λ̂) I
Eq. (<ref>) can be further simplified by computing the deviatoric part l̂ of λ̂ as l̂=λ̂-I_1/3. One obtains
= I_1/3I+ 3/2l̂(
N_j -1/3I)
This last equation clearly shows that, when two eigenvalues are coincident, the deviatoric part of N̂, defined as N̂^d= N̂ -I/3, is simply proportional to the deviatoric part of the tensor , i.e.
N̂^d = 1/λ̂-λ_IIt = ∓1/qt θ=±π/6
where q=√(3 J_2).
This result is a consequence of the multiplicity of the deviatoric principal components.
When two eigenvalues of T coincide, the two coincident deviatoric principal components result to be
minus half of the (only) independent one, since their sum must vanish.
Eq. (<ref>) results to be the sum of two independent terms: the volumetric and the deviatoric parts. The basis of the volumetric part is obviously proportional to the identity tensor I, whilst that of the deviatoric part can only be proportional to the tensor itself.
= I_1/3I+ 3/2l̂N̂^d
It should be noted that, as in Case (i), it is still possible to demonstrate that

N̂ = ∂λ̂/∂T

To prove this result, let us compute the second invariant J_2 of the deviatoric tensor t as a function of the principal component λ̂:

J_2 = (t : t)/2 = [(λ̂ - I_1/3)^2 + 2 (λ_II - I_1/3)^2]/2 = 3 (λ̂ - I_1/3)^2 / 4

By differentiating this expression with respect to T, one obtains

∂J_2/∂T = t = T - I_1/3 I = 3/2 (λ̂ - I_1/3)(∂λ̂/∂T - 1/3 I)

so that, solving for T, one obtains:

T = 3/2 (λ̂ - I_1/3) ∂λ̂/∂T + 1/2 (I_1 - λ̂) I

By equating this last expression with Eq. (<ref>) and solving for N̂[This can be done under the condition λ̂ ≠ I_1/3 which, observing Eq. (<ref>), is equivalent to J_2 ≠ 0],
one obtains:

N̂ = ∂λ̂/∂T
§.§.§ Case (iii): λ_I= λ_II=λ_III.
Finally, let us consider the case of three coincident eigenvalues λ = λ_I = λ_II = λ_III. The tensor T is purely volumetric in any reference system. Observing that it results l_i = 0 ∀i and I_1 = 3λ, Eq. (<ref>) simply becomes

T = λ I

From Eq. (<ref>) it results

N_I = N_II = N_III = 1/3 I
§ COMPUTATION OF THE EIGENBASIS DIRECTLY FROM THE SECULAR EQUATION
Since the three eigenbases are equal to the derivatives of their conjugate principal components with respect to the tensor T, one can determine them by simply differentiating Eqs. (<ref>) with respect to T. Using the chain rule, one obtains:

N_i = ∂λ_i/∂T = 1/3 I + √3/3 ( sinβ_i/√(J_2) ∂J_2/∂T + 2 √(J_2) cosβ_i ∂θ/∂T )

where β_I = θ+2/3 π, β_II = θ, β_III = θ-2/3 π, and[
It should be noted that Eq. (<ref>) requires the computation of t^-1. An expression more suitable for the implementation is

∂θ/∂T = -1/cos 3θ ( √3/(2 √(J_2^3)) ∂J_3/∂t + √3/(6 √(J_2)) I + sin 3θ/(2 J_2) t )

where

∂J_3/∂t = adj(t) =
[ t_yy t_zz - t_yz^2   t_xz t_yz - t_xy t_zz   t_xy t_yz - t_xz t_yy;
  t_xz t_yz - t_xy t_zz   t_xx t_zz - t_xz^2   t_xy t_xz - t_yz t_xx;
  t_xy t_yz - t_xz t_yy   t_xy t_xz - t_yz t_xx   t_xx t_yy - t_xy^2 ]

so that ∂θ/∂T is undefined only for J_2 = 0 or θ = ±π/6
]

∂J_2/∂T = t
∂θ/∂T = 1/cos 3θ ( sin 3θ/3 t^-1 - √3/(6 √(J_2)) I - sin 3θ/(2 J_2) t )
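These closed-form derivatives are easy to check numerically. The following Python sketch (purely illustrative, with our own variable names) compares the expression for ∂θ/∂T above against a central finite difference:

import numpy as np

def lode_invariants(T):
    """Deviator t, invariant J2 and Lode angle theta of a symmetric T."""
    I1 = np.trace(T)
    t = T - I1 / 3.0 * np.eye(3)
    J2 = 0.5 * np.trace(t @ t)
    J3 = np.linalg.det(t)
    s3 = np.clip(-1.5 * np.sqrt(3.0) * J3 / J2**1.5, -1.0, 1.0)  # sin(3 theta)
    return t, J2, np.arcsin(s3) / 3.0

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
T = 0.5 * (A + A.T)
t, J2, theta = lode_invariants(T)
s3, c3 = np.sin(3 * theta), np.cos(3 * theta)

# closed form: (1/cos3t)[(sin3t/3) t^-1 - sqrt3/(6 sqrtJ2) I - (sin3t/(2 J2)) t]
dtheta = (s3 / 3.0 * np.linalg.inv(t)
          - np.sqrt(3.0) / (6.0 * np.sqrt(J2)) * np.eye(3)
          - s3 / (2.0 * J2) * t) / c3

h, fd = 1e-6, np.zeros((3, 3))
for a in range(3):
    for b in range(3):
        dT = np.zeros((3, 3))
        dT[a, b] += 0.5 * h                        # keep T symmetric
        dT[b, a] += 0.5 * h
        fd[a, b] = (lode_invariants(T + dT)[2]
                    - lode_invariants(T - dT)[2]) / (2.0 * h)

print(np.allclose(dtheta, fd, atol=1e-5))          # True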
The computation of the spin of the eigenbasis, i.e. ∂N_i/∂T, is even more cumbersome.
A more elegant and simpler approach can be obtained by working directly on the secular equation (<ref>). Each of the eigenvalues λ_i satisfies Eq. (<ref>), i.e.

f(λ_i) = λ_i^3 - I_1 λ_i^2 + I_2 λ_i - I_3 = 0

Hence, differentiating with respect to T, it must result

∂f/∂T = (3 λ_i^2 - 2 I_1 λ_i + I_2) ∂λ_i/∂T - λ_i^2 ∂I_1/∂T + λ_i ∂I_2/∂T - ∂I_3/∂T = 0

This implies the condition:

(3 λ_i^2 - 2 I_1 λ_i + I_2) ∂λ_i/∂T - λ_i^2 I + (I_1 I - T) λ_i - I_3 T^-1 = 0

The eigenbasis N_i can be obtained by simply solving this last equation for ∂λ_i/∂T. By observing that J_2 = 1/3 I_1^2 - I_2, so that 3 λ_i^2 - 2 I_1 λ_i + I_2 = J_2 (4 sin^2 β_i - 1), after some simple algebraic manipulation one obtains[
A very compact way to write this derivative is I_3 T^-1. However, it should be noted that it is not completely correct from a formal point of view, since it is undefined when I_3 = 0. The derivative of I_3, being defined as det T, is simply the adjugate matrix of T, that is always defined. In simpler words, being I_3 = det T a third degree polynomial in T_ij, its derivative with respect to T is always defined. It results:

∂I_3/∂T = adj(T) =
[ T_yy T_zz - T_yz^2   T_xz T_yz - T_xy T_zz   T_xy T_yz - T_xz T_yy;
  T_xz T_yz - T_xy T_zz   T_xx T_zz - T_xz^2   T_xy T_xz - T_yz T_xx;
  T_xy T_yz - T_xz T_yy   T_xy T_xz - T_yz T_xx   T_xx T_yy - T_xy^2 ]

Eq. (<ref>) becomes

N_i = ∂λ_i/∂T = [λ_i ((λ_i - I_1) I + T) + adj(T)] / (J_2 (4 sin^2 β_i - 1))
]:

N_i = ∂λ_i/∂T = [λ_i ((λ_i - I_1) I + T) + I_3 T^-1] / (J_2 (4 sin^2 β_i - 1))
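This closed form lends itself to a compact implementation. The Python sketch below (our variable names; it assumes the standard Lode-angle representation of the eigenvalues, λ_i = I_1/3 + (2/√3)√(J_2) sin β_i) evaluates it for a random symmetric tensor and checks it against the eigenbases n_i ⊗ n_i returned by a standard eigensolver:

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
T = 0.5 * (A + A.T)

I = np.eye(3)
I1 = np.trace(T)
I2 = 0.5 * (I1**2 - np.trace(T @ T))
I3 = np.linalg.det(T)
J2 = I1**2 / 3.0 - I2
t = T - I1 / 3.0 * I                                 # deviatoric part
theta = np.arcsin(np.clip(-13.5 * np.linalg.det(t) / np.sqrt(3 * J2)**3,
                          -1.0, 1.0)) / 3.0          # Lode angle

lam_ref, n = np.linalg.eigh(T)                       # reference eigenpairs
for beta in (theta + 2*np.pi/3, theta, theta - 2*np.pi/3):
    lam = I1 / 3.0 + 2.0 / np.sqrt(3.0) * np.sqrt(J2) * np.sin(beta)
    N = (lam * ((lam - I1) * I + T) + I3 * np.linalg.inv(T)) \
        / (J2 * (4.0 * np.sin(beta)**2 - 1.0))       # closed-form eigenbasis
    j = np.argmin(np.abs(lam_ref - lam))             # match eigh's ordering
    print(np.allclose(N, np.outer(n[:, j], n[:, j]), atol=1e-6))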
The spin of the eigenbasis can be obtained by differentiating Eq. (<ref>) with respect to the tensor T. One obtains

∂N_i/∂T = ∂^2 λ_i/(∂T ⊗ ∂T)
= 1/(J_2 (4 sin^2 β_i - 1)) [ - 4 √(3 J_2) sinβ_i (N_i ⊗ N_i)
+ (2 λ_i - I_1)(N_i ⊗ I + I ⊗ N_i)
+ (N_i ⊗ T + T ⊗ N_i)
+ λ_i (ℐ - I ⊗ I)
+ ∂^2 I_3/(∂T ⊗ ∂T) ]

where ℐ is the fourth-order identity tensor and, in components,

[∂^2 I_3/(∂T ⊗ ∂T)]_ijkl = I_1 (δ_ij δ_kl - δ_il δ_jk) - δ_ij T_kl - δ_kl T_ij + δ_il T_jk + δ_jk T_il

δ_ij being the Kronecker delta operator.
Let us note that, even in the case of two coincident λ_i, the spin of the basis associated to the non-repeated eigenvalue λ̂ can still be computed using Eq. (<ref>); it is the only spin required to compute the derivative of Eq. (<ref>).
However, by exploiting the proportionality between the deviatoric part of the tensor and the basis itself, it can be obtained more simply by means of Eq. (<ref>).
As explained in the previous section, when all the eigenvalues coincide, the three eigenbases N_i are simply equal to I/3. Their spin is not defined but, as explained in the next section, it is still possible to evaluate the derivative of the spectral representation of the tensor when its invariants are isotropic functions.
§ ISOTROPIC FUNCTIONS
In many mechanical applications it is a priori known that two second-order, symmetric tensors S and T share the same principal directions. Under these conditions, the two tensors are called co-axial.
These applications usually involve isotropic tensor functions, i.e., the invariants of the tensor S are functions of those of the tensor T.
In these applications, once the principal components η_i of the tensor S have been computed as functions of those of T, say λ_i, it is finally required to compute the Cartesian components of S.
Let S be a symmetric, second-order tensor, co-axial with T, and let us assume that the generic eigenvalue η_i(λ_I, λ_II, λ_III) of S can be computed as a function of the eigenvalues λ_i of T.
Since S and T are co-axial, they share the same eigenbases N_i and it results

S = ∑ η_i(λ_I, λ_II, λ_III) N_i

Recalling that ∂λ_j/∂T = N_j and using the chain rule, the derivative of this expression with respect to the tensor T will be

∂S/∂T = ∑_i ( ∑_j ∂η_i/∂λ_j N_i ⊗ N_j + η_i ∂N_i/∂T )
Let us consider the case in which two eigenvalues λ_i of T coincide.
As explained in the section above, under this condition the deviatoric part s of S results to be proportional to the deviatoric part t of T. Hence, one can compute S as

S = I_1S/3 I + q_S/q_T t

where I_1T = tr T is the first invariant of T, q_T = √(3/2 t : t), I_1S = tr S = I_1S(I_1T, q_T), and q_S = √(3/2 s : s) = q_S(I_1T, q_T).
Let us now compute ∂S/∂T. Since s and t are simply proportional, it must result

θ_S = θ_T

and then

∂θ_S/∂θ_T = 1

Moreover, considering that Eq. (<ref>) gives:

q_S(θ_S) = ∛( -27/2 J_3S / sin(3 θ_S) )

it results

∂q_S/∂θ_T = (∂q_S/∂θ_S)(∂θ_S/∂θ_T)
= (3 ∛4 / 2) J_3S cos(3 θ_S) ∛(sin^2(3 θ_S)) / ( ∛(J_3S^2) sin^2(3 θ_S) ) = 0
for θ_S = θ_T = ±π/6

Analogously,

∂I_1S/∂θ_T = (∂I_1S/∂θ_S)(∂θ_S/∂θ_T) = 0   ∀θ_T
Hence, observing that from Eq. (<ref>) it results

∂q_T/∂T = 3/(2 q_T) t = ∓ 3/2 N̂^d   for θ_T = ±π/6

by differentiating Eq. (<ref>) with respect to T one obtains:

∂S/∂T = 1/3 ∂I_1S/∂I_1T I ⊗ I ∓ 1/2 ∂I_1S/∂q_T I ⊗ N̂^d
+ q_S/q_T (ℐ - 1/3 I ⊗ I)
+ 3/2 (∂q_S/∂q_T - q_S/q_T) N̂^d ⊗ N̂^d
∓ ∂q_S/∂I_1T N̂^d ⊗ I   for θ_T = ±π/6

where ℐ is the fourth-order identity tensor.
Finally, when all the eigenvalues coincide, Eq. (<ref>) reduces to:

S = I_1S/3 I

whilst its derivative can be computed by particularizing Eq. (<ref>).
Observe that when t → 0, N_i → I/3, so that the deviatoric part N̂^d → 0.
Moreover, since q_S → 0 when q_T → 0, a Taylor expansion for q_T → 0 gives:

q_S(I_1T, q_T) ≈ ∂q_S/∂q_T q_T

so that q_S/q_T → ∂q_S/∂q_T, and finally:

∂S/∂T = 1/3 ∂I_1S/∂I_1T I ⊗ I + ∂q_S/∂q_T (ℐ - 1/3 I ⊗ I)
§ APPLICATIONS
§.§ Isotropic elastoplastic materials under small-strains and displacements
Let us consider a generic elastoplastic isotropic material, in which the principal directions of the elastic strains and of the stress coincide.
Let s be the deviatoric part of the Cauchy stress tensor σ, and

p = 1/3 tr σ
q = √(3/2 s : s)
θ_σ = 1/3 arcsin( -27/2 det s / q^3 )

the stress invariants, i.e. the hydrostatic pressure, the equivalent von Mises stress, and the stress Lode's angle respectively.
In a general backward Euler integration scheme, let ε^* and Δε^p be the elastic strain predictor and the plastic strain increment respectively.
The plastic strain increment can be computed as a function of an isotropic plastic potential g(p, q, θ_σ) as

Δε^p = ∂g(p, q, θ_σ)/∂σ Δγ

where Δγ is the plastic multiplier. Since g(p, q, θ_σ) is an isotropic function of σ, its derivative with respect to σ will be co-axial with the stress <cit.> <cit.>.
Then, since the elastic strain ε^e is co-axial with σ by the assumption of isotropy, it results that also

ε^* = ε^e + Δε^p

is co-axial with σ. For these reasons, the principal directions of stress are a priori known, being coincident with those of the predictor ε^*.
Let e^* be the deviatoric part of the elastic predictor ε^*, and

ε_v^* = tr ε^*
ε_q^* = √(2/3 e^* : e^*)
θ^*_ε = 1/3 arcsin( -4 det e^* / ε_q^*3 )

its invariants, i.e. the volumetric strain predictor, the equivalent von Mises strain predictor, and the strain predictor Lode's angle.
In general, if a standard return algorithm in the full tensorial space is employed, numerical problems and convergence difficulties can arise when two or more eigenvalues coincide.
Instead, p, q, θ_σ can be more easily computed by formulating a return algorithm in the invariants strain space <cit.>. Once p, q and θ_σ have been obtained as functions of the strain invariants predictor, it is necessary to compute the stress tensor σ.
If ε_q^* ≠ 0 and |θ^*_ε| ≠ π/6, one can compute the stress tensor from its invariants and from the eigenbases N_i^* of the elastic strain predictor ε^* by resorting to the spectral theorem. It results

σ = ∑ [ p(ε_v^*, ε_q^*, θ^*_ε) + 2/3 q(ε_v^*, ε_q^*, θ^*_ε) sinβ_i(ε_v^*, ε_q^*, θ^*_ε) ] N_i^*

where

β_I = θ_σ(ε_v^*, ε_q^*, θ^*_ε) + 2/3 π
β_II = θ_σ(ε_v^*, ε_q^*, θ^*_ε)
β_III = θ_σ(ε_v^*, ε_q^*, θ^*_ε) - 2/3 π

and N_i^* is computed from Eq. (<ref>) as a function of the invariants of ε^* and its principal components.
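As an illustration, assuming p, q and θ_σ have already been returned by the invariant-space algorithm and the eigenbases N_i^* have been evaluated as above, the stress assembly reduces to a few lines (a sketch with our own names, not the paper's code):

import numpy as np

def stress_from_invariants(p, q, theta_sigma, N):
    """Assemble sigma = sum_i [p + (2/3) q sin(beta_i)] N_i*,
    where N is the list of the three eigenbases of the predictor."""
    betas = (theta_sigma + 2.0*np.pi/3.0, theta_sigma,
             theta_sigma - 2.0*np.pi/3.0)
    return sum((p + 2.0/3.0 * q * np.sin(b)) * Ni for b, Ni in zip(betas, N))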
The consistent jacobian matrix[It should be noted that this general approach has been recently adopted by the Author in <cit.>, while in his older work <cit.>, in order to avoid the computation of the spin of the eigenbasis, the spectral representation of the stress was computed as a function of the eigenvectors of the strain predictor, whilst the jacobian matrix was obtained by means of a "simplified" procedure based on the inversion of a 6x6 matrix. Unfortunately, this procedure is model-specific and requires the smoothness in the deviatoric plane of the yield function and of the plastic potential.] can be computed from Eq. (<ref>) as

∂σ/∂ε^* = ∑_i { [ p + 2/3 q sinβ_i ] ∂N_i^*/∂ε^*
+ N_i^* ⊗ ( [ ∂p/∂ε_v^* + 2/3 (∂q/∂ε_v^* sinβ_i + q ∂θ_σ/∂ε_v^* cosβ_i) ] I
+ 2/(3 ε_q^*) [ ∂p/∂ε_q^* + 2/3 (∂q/∂ε_q^* sinβ_i + q ∂θ_σ/∂ε_q^* cosβ_i) ] e^*
+ [ ∂p/∂θ^*_ε + 2/3 (∂q/∂θ^*_ε sinβ_i + q ∂θ_σ/∂θ^*_ε cosβ_i) ] ∂θ^*_ε/∂ε^* ) }

where the eigenbasis spin ∂N_i^*/∂ε^* and ∂θ^*_ε/∂ε^* are computed as functions of the invariants and principal components of ε^* from Eqs. (<ref>) and (<ref>) respectively.
If ε_q^* is not nil, at least two eigenvalues of the strain predictor ε^* are distinct.
Specifically, if θ^*_ε = ±π/6, two eigenvalues of ε^* are coincident. In this case, from Eq. (<ref>) it results that e^* is proportional to the deviatoric part of the eigenbasis associated to its non-repeated eigenvalue. Hence, from Eq. (<ref>) one simply obtains:

σ = p(ε_v^*, ε_q^*, θ^*_ε) I + 2/(3 ε_q^*) q(ε_v^*, ε_q^*, θ^*_ε) e^*

Also the eigenbases of the deviatoric part of the plastic strain increment Δe^p and of the elastic strains coincide with those of e^*, and then it results:

Δε^p = ε^p_v/3 I + ε_q^p/ε_q^* e^*
ε^e = ε^e_v/3 I + ε_q^e/ε_q^* e^*
The jacobian matrix can be obtained by simplifying Eq. (<ref>) using Eq. (<ref>). It yields:

∂σ/∂ε^* = ∂p/∂ε_v^* I ⊗ I + 2/(3 ε_q^*) [ ∂p/∂ε_q^* (I ⊗ e^*)
+ ∂q/∂ε_v^* (e^* ⊗ I)
+ 2/(3 ε_q^*) (∂q/∂ε_q^* - q/ε_q^*) (e^* ⊗ e^*)
+ q (ℐ - 1/3 I ⊗ I) ]
If ε_q^* is nil, the strain predictor ε^* is a volumetric tensor, since its spectral decomposition has the same structure as Eq. (<ref>). Moreover, ε_q^* = 0 implies e^* = 0.
Since the material is isotropic, the eigenbases of σ and ε^* are the same, being proportional to the second-order identity tensor I. Then, from Eq. (<ref>) it results

σ = p(ε_v^*) I

The derivative of the eigenbasis is undefined. However, as explained in the section above, the jacobian matrix can be obtained as a limit case of Eq. (<ref>), i.e., using Eq. (<ref>).
Let us observe that, under purely volumetric conditions, the convexity of the elastic potential requires <cit.>:

∂p/∂ε_q^* = ∂q/∂ε_v^* = 0

It results:

∂σ/∂ε^* = ∂p/∂ε_v^* I ⊗ I + 2/3 ∂q/∂ε_q^* (ℐ - 1/3 I ⊗ I)
§.§ Computation of logarithmic strain tensor from displacement gradient
In the framework of large strains and rotations, let p denote the position of a material point in the reference configuration.
Indicating with u(p) the vector function describing the displacement of each material point, its final position will be (e.g., <cit.>)

x = p + u(p)

The deformation gradient F is defined as

F = ∇_p x = I + ∇_p u(p)

By applying the polar decomposition (e.g., <cit.>) to the deformation gradient F, one obtains:

F = V R

where the orthogonal tensor R describes the local rotation, whilst the symmetric positive definite tensor V is the left stretch tensor, with

V^2 = B = F F^T

B being the left Cauchy-Green tensor.
The logarithmic strain tensor can be computed as:

ε = ln V = 1/2 ln B

i.e.,

ε = 1/2 ∑ ln(λ^B_i) N^B_i
where λ^B_i and N^B_i are the i-th principal component and eigenbasis of the tensor B respectively.
The invariants of B, I_1B, J_2B and θ_B, can be computed using Eqs. (<ref>), (<ref>) and (<ref>), whilst the principal components λ^B_i can be obtained using Eqs. (<ref>).
If the λ^B_i are distinct, i.e., if J_2B ≠ 0 and |θ_B| ≠ π/6, all the eigenbases N^B_i of the left Cauchy-Green tensor can be computed as functions of its invariants and principal components using Eq. (<ref>). The logarithmic strain tensor can then be computed using Eq. (<ref>).
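For a concrete illustration (a sketch under our own naming, with scipy used only for the cross-check), the logarithmic strain can be assembled directly from the spectral decomposition of B and compared against a dense matrix logarithm:

import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)
F = np.eye(3) + 0.3 * rng.normal(size=(3, 3))    # deformation gradient
B = F @ F.T                                      # left Cauchy-Green tensor (SPD)

lamB, n = np.linalg.eigh(B)                      # principal components of B
eps = sum(0.5 * np.log(lamB[i]) * np.outer(n[:, i], n[:, i]) for i in range(3))

print(np.allclose(eps, 0.5 * np.real(logm(B))))  # True: eps = (1/2) ln B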
The jacobian matrix ∂ε/∂B can be computed by using Eq. (<ref>):

∂ε/∂B = 1/2 ∑ [ ln(λ^B_i) ∂N^B_i/∂B + 1/λ^B_i N^B_i ⊗ N^B_i ]

where ∂N^B_i/∂B can be computed using Eq. (<ref>).
When two principal components of B are coincident, i.e. if J_2B ≠ 0 and |θ_B| = π/6, one can compute ε by exploiting the proportionality between the deviatoric part b of B and the deviatoric part e of ε.
Let us start by computing the invariants q_ε = √(3 J_2ε) and I_1ε of ε as functions of q_B = √(3 J_2B) and I_1B. Observe that it results

q_B = ±(λ^B_II - λ̂^B)   for θ_B = ±π/6

By solving this expression for λ^B_II one obtains

λ^B_II = λ̂^B ± q_B   for θ_B = ±π/6

Substituting this result into the definition I_1B = λ̂^B + 2 λ^B_II and solving for λ̂^B gives

λ̂^B = (I_1B ∓ 2 q_B)/3   for θ_B = ±π/6

By substituting this expression into Eq. (<ref>) one obtains

λ^B_II = (I_1B ± q_B)/3   for θ_B = ±π/6

One can now compute the invariants of ε as functions of those of B. It results:

I_1ε = λ̂^ε + 2 λ^ε_II = 1/2 [ ln((I_1B ∓ 2 q_B)/3) + 2 ln((I_1B ± q_B)/3) ]
q_ε = ±(λ^ε_II - λ̂^ε) = ±1/2 (ln λ^B_II - ln λ̂^B) = ±1/2 ln( (I_1B ± q_B)/(I_1B ∓ 2 q_B) )
θ_ε = θ_B = ±π/6

The logarithmic strain tensor can finally be computed using Eq. (<ref>). It results:

ε = I_1ε/3 I + q_ε/q_B b
Its derivative can be obtained by applying Eq. (<ref>). It results:

∂ε/∂B = 1/3 ∂I_1ε/∂I_1B I ⊗ I ∓ 1/2 ∂I_1ε/∂q_B I ⊗ N̂^d_B
+ q_ε/q_B (ℐ - 1/3 I ⊗ I)
+ 3/2 (∂q_ε/∂q_B - q_ε/q_B) N̂^d_B ⊗ N̂^d_B
∓ ∂q_ε/∂I_1B N̂^d_B ⊗ I   for θ_ε = θ_B = ±π/6

where, from Eq. (<ref>):

N̂^d_B = ∓ 1/q_B b   for θ_B = ±π/6

and, by computing the derivatives of Eqs. (<ref>):

∂I_1ε/∂I_1B = 3(± q_B - I_1B)/[(I_1B ± q_B)(± 4 q_B - 2 I_1B)]
∂I_1ε/∂q_B = 3 q_B/[(± 2 q_B - I_1B)(I_1B ± q_B)]
∂q_ε/∂I_1B = 3 q_B/[(I_1B ± q_B)(± 4 q_B - 2 I_1B)]
∂q_ε/∂q_B = -3 I_1B/[(I_1B ± q_B)(± 4 q_B - 2 I_1B)]
for θ_ε = θ_B = ±π/6
Finally, if J_2B = 0, the logarithmic strain is purely volumetric, and it results λ_i^B = λ^B.
Eqs. (<ref>) become:

I_1ε = 3/2 ln(I_1B/3) = 3/2 ln λ^B
q_ε = 0

By applying Eq. (<ref>) it results:

ε = 1/2 ln λ^B I
To compute the derivative of ε with respect to B, let us start by substituting Eqs. (<ref>) into Eqs. (<ref>). It results:

∂I_1ε/∂I_1B = 3/(2 I_1B) = 1/(2 λ^B)
∂I_1ε/∂q_B = ∂q_ε/∂I_1B = 0
∂q_ε/∂q_B = 3/(2 I_1B) = 1/(2 λ^B)

By substituting these expressions into Eq. (<ref>) one obtains:

∂ε/∂B = 1/(2 λ^B) ℐ
§ CONCLUSIONS
The spectral representation of a symmetric, second-order tensor is an important tool in many applications of computational mechanics.
While the computation of the eigenvalues of a symmetric, second-order tensor is a relatively simple task, obtaining a closed-form expression for the eigenbases is more complicated, especially when some eigenvalue is repeated. Moreover, in many computational mechanics applications, the derivative of the spectral representation is also required. The exact closed-form expressions available in the literature for both the eigenbases and their derivatives are quite hard to implement (see, e.g., <cit.>). For this reason, many Authors suggest resorting to series expansions, which however are available only for specific functions (see, e.g., <cit.>, <cit.>) or
require automatic differentiation techniques for a generic function <cit.>.
These approximate techniques are hard to apply when the isotropic tensor-valued functions are not known explicitly, such as, for instance, in the numerical integration of elastoplastic isotropic constitutive laws formulated in the invariants space (<cit.> <cit.> <cit.>).
In this paper, starting from an incidental result reported by Ogden <cit.>, valid only in the case of non-coincident eigenvalues, an exact, simple and clear approach has been developed. Differently from the one described by Miehe <cit.>, <cit.>, no particular requirements about the invertibility of the tensor or the multiplicity of its eigenvalues are necessary.
Two applications have been presented: (i) the computation of the stress tensor and of the stiffness matrix in the numerical integration of an elastoplastic isotropic material in the invariant stress space, and (ii) the calculation of the logarithmic strain tensor from the displacement gradient, as well as of its derivative with respect to the left Cauchy-Green tensor.
|
http://arxiv.org/abs/2307.06239v1 | 20230712152815 | A generalized mass-to-horizon relation: a new global approach to entropic cosmologies and its connection to \texorpdfstring{$Λ$}{Lambda}CDM | [
"Hussain Gohar",
"Vincenzo Salzano"
] | gr-qc | [
"gr-qc",
"astro-ph.CO"
] |
0327
|
http://arxiv.org/abs/2307.04167v1 | 20230709132458 | Dream Content Discovery from Reddit with an Unsupervised Mixed-Method Approach | [
"Anubhab Das",
"Sanja Šćepanović",
"Luca Maria Aiello",
"Remington Mallett",
"Deirdre Barrett",
"Daniele Quercia"
] | cs.CY | [
"cs.CY",
"cs.CL",
"physics.soc-ph",
"H.4.0; K.4.0"
] |
Dream Content Discovery from Reddit with an Unsupervised Mixed-Method Approach
Anubhab Das, Sanja Šćepanović, Luca Maria Aiello, Remington Mallett, Deirdre Barrett, Daniele Quercia
=====================================================================================================
§ INTRODUCTION
Dreaming is a fundamental human experience and a cornerstone of sleep psychology, yet its underlying mechanisms remain elusive. The fascination with the contents and meaning of our dreams dates back to early human civilizations,<cit.> but despite significant progress in dream research, fundamental questions about the physiological and psychological functions of dreaming remain unanswered, leaving us to ponder the question: why do we dream?<cit.> Previous literature on dream analysis suggests that dreaming plays a vital role in learning processes,<cit.> has psychotherapeutic effects,<cit.> and safeguards the brain's neuroplasticity.<cit.> Recent AI-inspired theories have drawn parallels between the brain's activity during sleep and the functioning of artificial neural networks. One theory suggests that dreams act as “garbage collectors” that clear the memory,<cit.> while another posits that they prevent over-fitting of the brain's neuronal network.<cit.>
While the question of why we dream remains open, before we can even begin to answer it, we must first understand the nature of what we dream. This question is important not only for helping us understand the fundamental function of dreams but also as it offers a window into our psyche and what is prominent in people's minds. Dream content is composed of fragments of waking life experiences and events,<cit.> but these are not veridical replays, <cit.> nor do they represent the entirety of dream content.<cit.> One popular approach to investigating dream content systematically is through content analysis.<cit.> This is a family of methods that analyze quantitatively the elements present in dreams to answer specific questions, such as whether depressed individuals experience more rejection in their dreams,<cit.> how dream content changes from teenage years to adulthood,<cit.> or whether in the times of collective health crises there is a shift in the medical symptoms people dream about.<cit.> Although these studies may seem narrow in scope, they provide a critical foundation for addressing the overarching question of why we dream. The importance of content analysis is evidenced by the development of over 130 scales and rating systems for dream content analysis.<cit.> Early scales tended to score based on the raters' subjective interpretation of dream symbolism and rarely reported inter-rater reliability.<cit.> Dream research became more systematic with the development of the Hall and van de Castle method <cit.> of dream content analysis, a quantitative system that scores dream reports based on the frequency and type of characters, interactions, activities, emotions, settings, and objects present in the dream. This method relies solely on the dream reports and does not use any additional information provided by the dreamer. Studies using the Hall and van de Castle scales have revealed consistent patterns and norms in dream content across different groups of people, such as gender, age, culture, and personality. For instance, women tend to dream more about family members, friends, and indoor settings, while men tend to dream more about strangers, violence, and outdoor settings. Moreover, research using these scales has demonstrated that dream content correlates with an individual's waking concerns and interests, such as work, relationships, hobbies, and fears.<cit.>
Limited content scope and representativeness of existing scales. Traditionally, dream researchers had to manually sift through large numbers of dream reports to gain insights into the range of dream topics.<cit.> Even with recent developments in automated dream analysis,<cit.> most studies continued to rely on experimenter-driven content searches, i.e., supervised approaches for content analysis, such as the Hall and van de Castle method. These methods involve the use of predetermined categories that are often biased towards existing knowledge of dreams. For example, dreams are often characterized as bizarre, involving impossible or improbable events that deviate from everyday experiences,<cit.> which may not fit within the predetermined categories established in the literature. As a result, current approaches to dream analysis may miss important aspects of dream content that fall outside of these predetermined categories. In contrast to traditional methods of dream coding, unsupervised theme discovery from dream reports may provide a fresh perspective and a more comprehensive understanding of the categorization of dream content and its relationship with waking life events.
Furthermore, previous studies relied on dream reports collected through retrospective surveys that are susceptible to memory biases, and laboratory studies that may be confounded by the strong impact of the laboratory setting on dream content.<cit.> Therefore, a more ecologically valid approach to studying dream content at the population level is necessary.
Our unsupervised mixed-method approach for dream content discovery. Our study addresses the identified research gap (i.e., limited scope and representativeness of dream content scales) by developing an unsupervised mixed-method approach for dream content discovery, and by applying it to new large-scale data that more closely approximates spontaneous dream recollections than survey studies. To achieve this, we i) leveraged recent advances in AI for Natural Language Processing (NLP) and ii) used a crowd-sourced dataset of dream self-reports from the r/Dreams community on Reddit. Unlike traditional lab studies, dream experiences shared on r/Dreams are reported voluntarily and spontaneously, enabling us to collect a large set of dream reports and conduct an ecological study. We collected over 44K dream reports from more than 34K Reddit users over the past five years, and applied the BERTopic content analysis method <cit.> to automatically discover topics in each dream report. The resulting taxonomy includes 22 themes that can be broken down into 217 more specific topics. Confirming its validity, we found that most of the themes in our taxonomy align with the dream element categories present in the Hall and Van de Castle scale, but the specific topics inside those themes provide a description of dreams that is much more detailed. Going beyond what was possible with existing scales, our method also allowed us to uncover the importance and relationships among specific topics and themes.
To demonstrate the applications enabled by our method, we used the metadata from the r/Dreams community, and classified dreams into four types: nightmares, recurring dreams, vivid dreams, and lucid dreams. Our analysis revealed that each type of dream has distinct characteristics and prominent topics. Notably, nightmares were associated with scary and shadowy imagery, and sexual-assault scenes. Vivid dreams featured rich expressions of feelings and topics inducing extreme emotions such as pregnancy and birth, religious figures, war, and aliens. Lucid dreams were characterized by topics of control, and by an overarching theme of mental reflections and interactions. For recurring dreams, we found that the most salient topics were dating, sex, and cheating, with recurring themes related to school and mentions of parts of the human body. Additionally, we investigated the relationship between dream topics and real-life experiences by studying the evolution of topics over the past five years. Our findings showed that the COVID-19 outbreak coincided with a gradual and collective shift in dream content. People started to dream less about people and relationships, feelings, sight and vision, outdoor locations, and movement and action, and more about the human body, especially teeth and blood, violence and death, religious and spiritual themes, and indoor locations. Similarly, after the war in Ukraine started, the topics about soldiers and nuclear war both peaked.
§ RESULTS
Figure <ref> outlines the framework of our study, consisting of: 1) data preprocessing, which includes collecting dream reports, ideally in an ecological setting, and using NLP methods for cleaning the content; and 2) topic modelling, the core NLP stage. Once the topics and themes in the dream reports collection are discovered, various applications are supported, and we demonstrate three of those: 3) building a dream topics taxonomy, which allows us to uncover the relationships between individual themes and topics, as well as the frequency of each of them in the dream collection; 4) finding topics and themes by dream types, an application that uses a proposed measure of topic or theme importance in a dream and odds-ratio analysis to discover topics that are specific to a dream type (or any other chosen subset of dream reports); and 5) finding topics and themes through time, an application that uses the proposed topic or theme importance measure to quantify the prevalence of dream topics and themes through time.
§.§ Dream reports from Reddit
In the established literature of dream analysis, dream reports are defined as “the recollection of mental activity which has occurred during sleep.”<cit.> To gather a dataset of such reports, we turned to Reddit, a social media platform organized in communities known as subreddits. Using the PushShift API<cit.>, we collected data from r/Dreams, a subreddit where members share their dreams and engage informally in their interpretation — which is common in therapy contexts <cit.> — as well as in providing social and emotional support to other members.
The subreddit was established in September 2008, and as of June 2022, it had accumulated 280K subscribers. We collected over 185K posts published on r/Dreams from March 2016 to September 2022; prior to 2016, the community was almost inactive. Authors on r/Dreams annotate their posts with one or more tags selected from a fixed set of community-specific labels called flairs. Flairs on r/Dreams denote posts that contain dream reports of a given type (Short Dream, Medium Dream, Long Dream, Nightmare, Recurring Dream, and Lucid Dream) or posts that contain discussions about dreams in general (e.g., Dream Help, Dream Art, or Question). To ensure that we considered only posts that contained dream reports, we only kept the 44,213 unique posts tagged with dream-type flairs (Figure <ref>). In our analysis, a dream report was the concatenation of the title and body of each of these posts.
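To make the filtering step concrete, the sketch below shows the flair-based selection (field names follow the standard Reddit submission schema; the posts variable, holding submissions already retrieved via PushShift, is our assumption):

DREAM_FLAIRS = {"Short Dream", "Medium Dream", "Long Dream",
                "Nightmare", "Recurring Dream", "Lucid Dream"}

def extract_dream_reports(posts):
    """Keep only posts tagged with a dream-type flair; a dream report is
    the concatenation of the post title and body."""
    reports = []
    for post in posts:
        if post.get("link_flair_text") in DREAM_FLAIRS:
            reports.append(post.get("title", "") + " " + post.get("selftext", ""))
    return reports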
§.§ Our unsupervised mixed-method approach for dream content discovery
As mentioned above, the core part of our approach shown in Figure <ref> is the unsupervised mixed-method topic modelling, and it involves: 2.1) extracting topics, using an advanced NLP topic model such as BERTopic <cit.>; 2.2) grouping topics into themes, using a clustering method to group the embedding representations of the individual topics discovered in the previous step into themes; and 2.3) filtering non-dream content and adjusting themes, the mixed-method part of the proposed methodology, which requires human — ideally dream-expert — knowledge to filter out topics or themes that do not pertain to the actual dream content, as well as to adjust any topic-to-theme associations, if needed. For more details about any of the steps of our approach, please refer to the Methods section.
§.§ The Reddit taxonomy of dream topics
Using state-of-the-art topic modeling techniques, we have identified 217 semantically-cohesive topics that emerged from the dream reports analyzed (details in Methods). We assigned to each dream report the list of unique topics that our model was able to extract from the report text. The distribution of number of dreams associated with a topic is broad (Figure SI5), with only 38 topics being associated with at least 1000 dream reports. Table <ref> shows the 20 most frequent topics.
To offer a more concise representation of dream topics, we automatically clustered the 217 fine-grained topics into 22 themes (see Methods), which we then manually parsed to assess their conformity to categories from the dream coding system by Hall and Van de Castle (HVdC for brevity). HVdC defines 12 categories and several subcategories of elements empirically relevant for quantitative dream analysis. All themes but one (a miscellaneous theme containing diverse topics) matched some HVdC category or subcategory (Table <ref>).
At a high level of abstraction, HVdC views the dream as a cast of characters (i), interacting with each other (ii), while being immersed in some background setting (iii). These three aspects emerged in the most prominent themes extracted from Reddit dream reports. The largest theme, both in terms of the number of topics it contains (17) and the number of dream reports associated with it (20K), is People and relationships. The main topics included in this theme are family members and relationships, and intimacy and romance (see Table <ref> and Table 1 SI). Characters and interactions are also represented in themes concerning Animals, Supernatural entities, and Religious and Spiritual, which map directly to the two HVdC subcategories of Animals and Imaginary Characters. Interactions respectively of aggressive and friendly type are represented in the themes Violence and Death and Life Events. The second most frequent group of themes involves events or elements that are typical of well-characterized places, such as home or the workplace. Among them, the most prominent theme is Indoor locations, which includes 16 topics such as house, hospital, and mall. These themes map to different subcategories of the HVdC category of Settings.
The remaining themes correspond to the categories of Activities, Objects, Emotions, and Time in the HVdC framework. With respect to Activities, Reddit users more frequently reported their mental activities and perceptions, rather than physical actions, when recounting their dreams. Specifically, visual and auditory sensory experiences were often recounted. The most commonly described categories of Objects included body parts (often with gruesome details) and personal objects, with a particular emphasis on technological devices such as phones or elements from virtual worlds of computer games. Emotions were represented by a single theme, characterized by common formulas for describing emotions of any type, with negative emotions being more prominently featured. Finally, while one theme captured temporal scales, such references were infrequent, appearing in fewer than 1,000 dreams.
Some of the HVdC categories did not map directly to any of the themes in our taxonomy. This is the case with Misfortune/Fortune, for example. We found that these types of events are usually reported as elements in dreams that were predominantly characterised by other themes, such as Life events, People and relationships, or Violence and death, for example.
The emerging themes in the dream reports do not exist in isolation; they often co-occur in the same reports and jointly construct their narratives. Figure <ref> presents the backbone of the network of co-occurrence of these themes, where the connections between themes are weighted proportionally to the number of dreams in which they co-occur. People and relationships and Indoor locations are the most frequent and central themes, occurring often with many other ones. Feelings are mainly associated with actions and relationships rather than objects or settings; conversely, sensory elements are more associated with settings, especially outdoor locations. Some of the central topics, such as relationships and emotions, are highly valued in rating scales by dream researchers. However, other themes, such as indoor/outdoor settings, are frequently omitted in HVdC coding research.
Besides the prominence of both types of settings, their proportions differ somewhat. Our data show a greater predominance of indoor settings, while HVdC norms indicate lower levels of indoor settings: 49% for males and 61% for females <cit.>. This difference could reflect the fact that our data include two years of the COVID pandemic, whereas the comparison HVdC studies do not.
Our findings challenge traditional dream analysis literature that found males mostly dream of male characters. <cit.> The two most recurring topics in our data concern the presence of female characters in dreams, despite the young male dominance in the Reddit user base. <cit.>
A possible reason for this contradictory finding could have to do with HVdC ratings not being exactly equivalent to ours. In our method, the strength of association of a dream with topics concerning male characters and indoor settings is stronger the more mentions of male characters and indoor settings appear in the report. Conversely, in HVdC, a male character who is mentioned only once in the dream gets the same score as one who is referred to in every sentence of the report. Likewise, an indoor setting is scored in the same way regardless of whether it is inferred from one mention in the dream or whether the dream account is largely a description of an indoor setting. In that sense, our method allows for a richer characterisation and quantification of dream content.
§.§ Topics and themes by dream type
Using odds ratios (see Methods), we uncovered that certain topics (Table <ref>) and themes (Figure <ref>) appeared more frequently in dreams of specific types.
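For readers who want the gist of the comparison without the Methods details, the following simplified sketch (our own naming; the full procedure additionally weights topics by their importance in each dream) computes the odds ratio of a topic for a given dream type:

def odds_ratio(k_type, n_type, k_other, n_other):
    """Odds of a topic appearing in dreams of a given type vs. all other
    dreams; k = number of dreams containing the topic, n = group size."""
    odds_type = k_type / max(n_type - k_type, 1)
    odds_other = k_other / max(n_other - k_other, 1)
    return odds_type / odds_other

# e.g., a topic in 120 of 2,000 nightmares vs. 400 of 40,000 other dreams
print(odds_ratio(120, 2000, 400, 40000))  # ~6.3: over-represented in nightmares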
§.§.§ Topics and themes in Nightmares
Results in Table <ref> revealed, among the top topics specific to nightmares, keywords such as shadows, rape, sexual-assault, scary, creepy, violence, demons, 911, and blood.
In terms of themes, Religious and spiritual is the most prominent in nightmares, followed by Feelings, Supernatural entities, and Violence and death, while the least prominent ones were Media and tech and Time, time travel and timelines.
§.§.§ Topics and themes in Vivid dreams
Results presented in Table <ref> reveal, among the topics specific to vivid dreams, keywords such as
felt-real, religious, felt-right, felt-wrong, apocalyptic, felt-pain, baby birth and pregnancy, aliens, mirrors, and nuclear war. This has translated into the most prominent themes being Feelings, followed by Sights and vision and Life events.
§.§.§ Topics and themes in Lucid dreams
Results presented in Table <ref> reveal, among the topics specific to lucid dreams, keywords such as
control, alternate-reality, and reflection and mirrors, felt-real, couldn-speak, falling, heard-voices and demon. When considering themes, Mental reflection and interactions, followed by Movement and action, Sights and vision, and Feelings were the most prominent.
§.§.§ Topics and themes in Recurring dreams
Results in Table <ref> revealed, among the top topics specific to recurring dreams, keywords such as cheating, ex, teeth, school, house and apartment, time-travel, and sex-dreams.
Looking at the themes, we found School, Human body, especially teeth and blood, and Other topics (size, smell, apocalypse, etc.) to be those that characterize recurring dreams.
§.§ Topics and themes through time
Finally, we studied the evolution of topics and themes over time. We focused on the period with at least 300 monthly dream reports, namely from January 2019 to September 2022. We found that soon after COVID-19 started, there was a gradual collective shift in the content of dreams from r/Dreams (Figure <ref>).
At the very beginning of the COVID-19 outbreak (February-March 2020), and even more so after the first peak of recorded deaths (April 2020), people gradually dreamt less of Sight and vision, Outdoor locations, Movement and action, and Mental reflections and interactions, while they dreamt more of Religious and spiritual figures, Indoor locations, and Human body, especially teeth and blood. An example dream from this period talks about “spitting teeth and cornea onto the palm.” We found a sharp decrease in the frequency of Life events topics in February 2020, while in March 2020 we recorded a peak of mentions of Human body, especially teeth and blood, which are predominantly found in nightmares, as our previous analysis showed. These trends continued throughout the time of the COVID-19 second death peak (January 2021), from when we also detected a stark decrease in dreams of other People and relationships, Feelings, and, for a while, of Animals and Work.
Finally, the start of the war in Ukraine (February 2022) is associated with a strong transition in the content of people's dreams towards violent topics. We found a sharp increase in topics from the themes of Violence and death, and Other topics (size, smell, apocalypse, etc.). Example dreams from the former group talk about “being a murderer,” “tooth falling out,” and “getting shot in the head.” Example dreams from the second group talked about “temple spirit monster,” and “being attacked by an opossum.” The changes in response to external events were also evident at the level of individual topics, such as those about soldiers and nuclear war, which both peaked after the war in Ukraine started (Figure <ref>).
§ DISCUSSION
The current study advances the field of dream science by implementing a new methodology to study dreams in a more objective and ecological manner and also showcasing how this method can generate new insights into dreams.
§.§ Reddit as a source of dreams
Prior dream research has relied heavily on traditional laboratory, survey, and diary methods. Laboratory studies benefit from monitoring participants with PSG and waking them directly from REM sleep,<cit.> which is known to increase dream recall drastically.<cit.> However, these dreams are not representative of natural dream content, as they are highly influenced by the laboratory setting.<cit.> Survey studies avoid this contextual bias on dream content, but often ask about a “recent” dream, which might be days or weeks prior to the survey response, thus suffering from memory distortion. Lastly, diary studies track dream content in a participant sample through morning diaries.<cit.> While these studies benefit from having dream content collected from ecological settings and with less retrospective bias, they are often limited by small sample sizes due to the high burden of participation. In the current study, we extracted morning dream reports from social media, thus capturing dreams in an ecological setting and also at a much larger scale. While other studies have investigated dedicated online dream forums,<cit.> Reddit is one of the most popular social media sites and its usage continues to grow at higher rates than specialized forums, suggesting that the current approach might continue to be a source of population dream content for scientific analysis.
§.§ Unsupervised generation of dream themes
In using this ecological data source to generate common dream themes, our results complement previous studies using more traditional survey methods. Though it is widely accepted that dream content varies based on individual personality and cultural differences, previous research suggests there might also be thematic “universals” that appear in a disproportionately high amount of dreams. Universal dream themes are typically quantified using surveys with predetermined thematic content developed by the researcher,<cit.> which are biased towards existing knowledge of dreams. In the current study, our unsupervised approach to developing common dream themes confirms some previously developed themes while also offering more specificity within them. For example, previous survey studies using the Typical Dreams Questionnaire<cit.> or Dream Motif Scale<cit.> have identified common dream themes of failure, paranoia, snakes/insects and animal symbolism, alien life, fighting, and sex. Our results identified similar themes, while offering a finer-grained view with the subtopics that formed each theme. For example, we observed a popular theme of animals, and animal subtopics included what might be positive animal dreams (kitten, birds) and negative animal dreams (spider, maggots; snake, bite). It is difficult to compare the ranking of our dream themes with prior work, since prior work is heavily dependent on the method of data collection.<cit.> A notable advance of our approach is the ability to “map out” the relationships among dream themes. The co-occurrence of common themes has not been studied extensively before, and future work comparing the co-occurrence of waking and dreaming themes might help to uncover what is truly unique about dream content.
§.§ Phenomenology of dream subtypes
Dreams are highly varied experiences, and have long been grouped into subclasses or types of dreams (e.g., nightmares, lucid dreams). However, these dreams are often defined by a single
feature (e.g., nightmares as intensely negative dreams) and the phenomenological variety within each subtype is not well understood. The present results offer new insights into the consistent-yet-variable content of nightmares, lucid dreams, vivid dreams, and recurring dreams, some of which have important clinical implications.
Nightmares are defined as intensely negative dreams, sometimes with a secondary requirement that they result in a direct awakening, <cit.> and have immense clinical relevance in PTSD <cit.> and other psychiatric diagnoses. <cit.> Our observation of keywords relating to sexual assault highlights prior observations of uniquely episodic event replay in PTSD patients, <cit.> including survivors of sexual assault.<cit.> The additional observation of increased themes of Feeling and Violence suggests that the episodic replays are highly emotional recreations of violent events, and the inclusion of many escape-related words highlights the helplessness felt by many recurrent nightmare sufferers. <cit.> For nightmare sufferers, the negative affect during dreams <cit.> or the dream recall during the day might increase negative symptoms of other co-morbid diagnoses (e.g., anxiety). <cit.> Lastly, the heightened presence of supernatural entities in nightmares might relate to the common state of sleep paralysis, an under-studied and cross-cultural phenomenon that occurs during sleep-wake transitions and frequently involves a feeling of helplessness amidst a hallucinated “demon” or otherwise frightening figure.<cit.>
Vivid dreams are highly realistic (or similarly, well-remembered) dreams. Our unsupervised approach suggests that vivid dreams are not only realistic (e.g., keywords felt-real, real-like), but also that these dreams often contain major life events, strong emotions, and supernatural/religious experiences. Vivid dreams included births and pregnancies, missiles and war, apocalyptic events, and alien invasions. This dream subtype overlaps almost directly with a class of dreams referred to as “big dreams.” <cit.> Big dreams occur rarely, but when they do, they are highly meaningful experiences that make a significant and long-lasting impression on waking life. Thus, our analysis of vivid dreams might be representative of these dreams, given that they consisted of major life events and religious experiences that likely influenced future thinking. Interestingly, the presence of an Alien invasion theme in vivid/realistic dreams suggests that prior reports of UFO abductions might result from cases of dream-reality confusion, <cit.> where a dreamt abduction is misinterpreted as a memory from waking life. <cit.>
Lucid dreams are defined as those that include awareness of the dream while still dreaming, <cit.> sometimes with a secondary requirement of having control over the dream. <cit.> The present unsupervised analysis was consistent with these defining features, with common themes and keywords in lucid dreams such as mental reflection and control. Additionally, our results confirm more recent preliminary findings about lucid dream content, such as frequent episodes of flying <cit.> and an overlap with sleep paralysis. <cit.> Though lucid dreams are generally regarded as more positive in valence than non-lucid dreams, <cit.> there are more recent reports of extremely negative lucid dreams, or lucid nightmares. <cit.> Our results offer a cohesive explanation for these differential findings, in that we observed a general heightened realness and emotion in lucid dreams (themes of Feeling and Sights and visions and keywords of felt-real) without attachment to positive or negative valence. Our recent findings, focused on a different subreddit (r/LucidDreaming) suggest that positively-valenced lucid dreams are more likely to occur when dream control is involved, <cit.> and the current results highlight the importance of focusing future clinical applications of lucid dreaming on the dream control rather than simply awareness of the dream (see <cit.> for a review of the clinical efficacy of lucid dreams to treat nightmares).
A recurrent dream is one that is experienced repeatedly, and these have been estimated to occur at least once in roughly 75% of the population.<cit.> It is likely that the many themes and keywords we observed in recurrent dreams are related to waking anxieties or worries. The limited amount of prior work on recurrent dream phenomenology suggests that recurrent dreams are primarily related to waking anxieties <cit.> and other negative content, <cit.> and also increase in frequency during periods of stress. <cit.> Our analysis extends these findings by observing more specific anxieties in recurrent dream content. The top two themes were related to relationships, particularly negative aspects of relationships (i.e., cheating, ex-partners), and other common themes were even still related to relationships (e.g., sex, dating, crush). Other recurrent dream themes were explicitly negative, and at times ultraviolent; themes regarding serial killers, paralysis, sexual assault, and the apocalypse suggest that recurrent dreams are far more negative than positive or neutral. We suspect that many of these recurrent dreams would also be classified as nightmares (see statistics in Figure SI1), and thus our analysis might help future work in the prediction/monitoring of nightmares via the inclusion of recurrent themes.
§.§ Impact of major collective events on dreams
Previous research has revealed that major personal and cultural events might influence dream content. For example, an increase in nightmares was observed after the terrorist attacks of September 11th, 2001 <cit.> and during the COVID-19 pandemic. <cit.> Dreams during the COVID-19 pandemic have also been shown to include pandemic-related content. The present results expand on these prior findings by offering a finer-grained view into the topical and temporal impact of major events on dreams. Rather than a categorical increase in nightmares or pandemic-related content, our analysis allowed us to identify specific sub-themes of pandemic-related topics and how they change over time. During the COVID-19 pandemic, population dreams transitioned from outdoor to indoor locations and decreased in social interactions, as did our waking experiences during the pandemic. Interestingly, these two effects had qualitatively different time courses, where the location change of dreams was longer-lasting and more persistent than social changes. This might reflect the mass adoption of technical communications (e.g., Zoom gatherings) that occurred while people were still mostly indoors. These effects are consistent with prior work showing a continuity between wake and dream content <cit.> and future work might evaluate how subtopics contribute uniquely to this continuity. <cit.> These results also contribute to hypothesized dream functions, such as the drop in social content contradicting the Social Simulation Theory that predicts a compensatory effect of social activity in dreams. <cit.> We also observed more mental reflections in dreams, which might reflect a “cognitive continuity” hypothesis <cit.> if the population was more reflective during the increased loneliness during the pandemic.<cit.> With the state of mental reflections during waking being less established than location and social changes, our finding suggests that it might be possible in the future to infer waking cognitive changes based on those observed in dream content. Alternatively, the rise in mental reflection during dreams might be representative of the increase in lucid dream frequency during the pandemic.<cit.>
While dreams during the COVID-19 pandemic have been investigated at great length <cit.>, much less work has been dedicated to observing the influence of the Russo-Ukrainian war on sleep and dream patterns. We observed a population increase in negative war-related topics (e.g., violence, death) after the start of the war, which is notable because our social media sample is expected to be primarily American users and thus not those who were directly exposed to the war. The association between the war and population dreams could be a result of American media exposure (see also <cit.>, highlighting a cognitive continuity between dreams and wake, where it is not daily activities per se, but the thought processes and internal imagery that predicts dream content). The negative content of population dreams during the war has important implications, given recent findings that negative dreams, including specifically dreams of death, are predictive of next-day negative affect <cit.> and nightmares have extensive mental health implications.<cit.>
§.§ Limitations
The first limitation of our work has to do with potential biases in the dream reports that we analysed. The first bias is that Reddit users report dreams that they have recalled spontaneously. However, according to the salience hypothesis, dream content that is vivid, new, intense, or strange, is more easily remembered.<cit.> Moreover, personality traits also affect dream recall, so that people with higher openness to experience, and who are prone to imagination and fantasy, more often recall their dreams.<cit.> Finally, the users of Reddit are not a representative sample of the general population; they are known to be predominantly male, young, educated, and urban.<cit.>
The second limitation concerns elements of the dream reports that do not reflect the dream content itself. Given the free-form social media format, the users would not always describe only their dream report, but sometimes they would include some contextual information, for example recounting how they felt when they were woken up from the dream, or how the people they dreamt of are related to them in real life. For example, a Reddit user might drop in their dream report a sentence like “It was about 5 am when I woke up from this dream...” Such contextual information is not a direct description of the dream content, yet it often helps to qualify it and it is therefore useful to our analysis. Dream researchers analysing a small number of dreams could read through each report and manually remove such instances to focus on dream content only. Given the automatic topic extraction method that we employed, such a data cleaning step was not possible. Some common categories of contextual information that were unrelated to the dream experience (e.g., the author explaining when they last met the person they dreamt of, or what time they woke up from the dream) emerged as independent topics in our topic analysis and we removed them (see Methods, Clustering Topics).
The third limitation of our work is shared with other content analysis methods.<cit.> We inevitably lose some dream information that could not be captured by the topics. This also means that our approach cannot represent subtle aspects of the individuality of each dreamer.
§.§ Implications
Our work has three main implications:
Developing an unsupervised dream content analysis method. The development of dream scoring systems has historically favored certain aspects of dream reports over others, based on assumptions about what are the most emotionally and socially important dimensions in dream analysis, rather than considering all potential topics or types of human experiences. Previous attempts to apply machine learning to dream content analysis have either replicated existing scoring systems<cit.> or focused on specific types of dream content (e.g., symptoms<cit.>). In contrast, our study used a deep learning approach that prioritizes no particular topic. By analyzing dream content from an exclusively linguistic standpoint, we were able to discover and quantify themes that have not been previously considered.
Uncovering the first ecological taxonomy of dream topics. Our results demonstrate that many of the themes we discovered align with those captured by existing scoring systems, particularly the Hall and Van de Castle scale, as it includes topics such as interpersonal relationships, emotions, and friendly and violent interactions. However, our approach also revealed differences in the frequency of certain topics, such as a significant category of weather-related topics that have not been previously considered in any dream scoring system.
The significance of weather as a conversational topic varies, being mentioned as a safe subject for small talk or as the subject of jokes regarding dull conversations. Nevertheless, considering the evolutionary perspective, weather played a crucial role during the era of limited shelters and the absence of temperature controls, which shaped human instincts. Hence, weather likely holds a deep-rooted importance for our well-being and survival. To sum up, our findings support the continued use of traditional scoring systems in clinical psychology research and psychotherapy, where there is a strong rationale for focusing on the more emotional and social content of dreams. At the same time, our results suggest that AI tools can provide a more detailed and nuanced understanding of dream content, and may be mature enough to support dream analysts in their work.
Collaborating with AI in scientific discovery. The AI's ability to categorize dream content in ways unknown to human researchers is reminiscent of the scenario when chess and go-playing programs began surpassing human players. These programs didn't merely excel at the strategies employed by humans, instead, they developed unique strategies that had never been observed by humans before. It was assumed that humans approached both games based on evolutionary instincts developed for social interactions, while the AI adopted a more objective perspective, solely focusing on the game rules without any assumptions influenced by human endeavors. Similarly, AI comprehends texts based solely on their intrinsic content, without filtering them through instinctual categories primarily designed for interactions during waking states.
§ METHODS
§.§ Data pre-processing
For the purpose of topical analysis, we employed a model from Spacy (spacy.io) to segment dream reports into individual sentences. The corpus consisted of 44,213 dream reports, for a total of 761,619 sentences. On average, each dream report contains 17 sentences and 290 words. Notably, recurring dreams tended to be shorter, with 14 sentences and 253 words, while lucid dreams were longer, with 27 sentences and 465 words (Table <ref>). It is worth mentioning that the distribution of dream reports was not concentrated among a few individuals; the majority of users shared a single dream report, whereas only a small group of frequent users contributed more than 30 dream reports (Figure <ref>).
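For reproducibility, the segmentation step can be sketched in a few lines of Python; the specific pipeline name (en_core_web_sm) is our assumption, as the paper does not state which Spacy model was used.

```python
import spacy

# Load an English pipeline; the exact model used in the paper is not
# specified, so "en_core_web_sm" is an assumption here.
nlp = spacy.load("en_core_web_sm")

def split_into_sentences(report: str) -> list[str]:
    """Segment one dream report into individual sentences."""
    return [sent.text.strip() for sent in nlp(report).sents]

print(split_into_sentences("I was flying over my old school. Then I woke up."))
```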
We discarded the 10,000 most frequent sentences among those with the fewest characters. These sentences typically contained non-dream-related content, such as greetings to the reader (hello, hi, etc.), sentences consisting solely of special characters, and similar instances. Removing this large set of sentences that were not relevant to the dream report itself ensured that our subsequent labeling process for non-dream topics was sufficient to preserve predominantly dream topics.
§.§ Topic Modelling
Our topic modelling procedure consisted of the following five steps: i) extracting topics, ii) grouping topics into themes, iii) building dream topics taxonomy, iv) finding topics and themes by dream type, and v) finding topics and themes through time.
§.§.§ Extracting topics
BERTopic <cit.> is largely based on a neural model that is designed to identify latent topics within document collections. Unlike conventional topic modeling techniques such as Latent Dirichlet Allocation, BERTopic leverages semantic information by utilizing embeddings as an initial step to cluster documents into semantically cohesive topics. Each discovered topic in BERTopic is described by a list of 10 topic words, which are the most distinctive words associated with that particular topic. The topics are numbered based on their frequency rank, indicating their prevalence within the corpus.
In its default configuration, BERTopic assigns a single topic to each document. However, our manual inspection of the dream reports revealed that most dreams cannot be adequately characterized by a single topic. Instead, they often encompass multiple topics, such as the dream's location, the people involved, and the emotions experienced by the dreamer. Additionally, dreams are known for combining various elements from waking experiences, resulting in a sense of bizarreness. To address this, our initial alternative was to modify BERTopic to associate a distribution of topics with each dream report. We tested this approach by allowing up to ten topics per report, and subsequently applied a threshold to the probabilities associated with each topic to identify the relevant topics for each report. However, we encountered two issues with this method. Firstly, due to the substantial variability in the length of dream reports, we often missed relevant topics in longer dreams. Secondly, even when varying the threshold, over 55% of dreams ended up being associated with no topic, resulting in a considerable loss of data. To mitigate this issue, we opted to consider individual dream sentences as input documents for BERTopic; applying BERTopic at the sentence level is a practice recommended by the BERTopic authors on the official website of the tool. This solution enabled us to associate over 88% of dream reports with at least one topic.
To establish robust topic representations, we configured the hyperparameters of BERTopic. We set the minimum word frequency threshold to 10, ensuring that a word appears in at least 10 sentences before it is considered for inclusion in the topic representation. This criterion helped to ensure that topics were formed from words with a reasonable level of occurrence within the dream reports. Additionally, we employed the Maximal Marginal Relevance (MMR) algorithm with a diversity parameter set to 0.4. The MMR algorithm was utilized to enhance the diversity of the topic words, preventing dreams from being described solely by near-synonyms. By incorporating this diversity measure, the resulting topics encompassed a wider range of relevant terms, capturing distinct aspects and avoiding redundant or similar descriptions within the topic representation. A configuration along these lines is sketched below.
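The sketch below shows one plausible implementation of this setup. The paper does not name the exact frequency parameter, so mapping the "appears in at least 10 sentences" threshold to CountVectorizer's min_df is our assumption; the embedding model is the all-mpnet-base-v2 sentence transformer named later in Methods.

```python
from bertopic import BERTopic
from bertopic.representation import MaximalMarginalRelevance
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer

embedder = SentenceTransformer("all-mpnet-base-v2")

topic_model = BERTopic(
    embedding_model=embedder,
    # MMR with diversity 0.4 keeps near-synonyms out of the topic words.
    representation_model=MaximalMarginalRelevance(diversity=0.4),
    # Assumption: the "appears in at least 10 sentences" threshold is min_df.
    vectorizer_model=CountVectorizer(min_df=10),
)

# dream_sentences: the list of individual dream sentences prepared above.
topics, probs = topic_model.fit_transform(dream_sentences)
```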
Our model successfully extracted a total of 288 topics which included both dreaming and non-dreaming subjects. An exhaustive manual inspection of the topic words demonstrated a significant degree of semantic coherence across the majority of topics: for most topics, the top four topic words provided sufficient information to understand the essence of the topic. To ensure topic specificity, we excluded all instances of the terms “dream” and “dreams” from the list of topic words associated with each topic. Furthermore, we aggregated topics related to multiple sentences within the same dream, consolidating them into a comprehensive list. This approach allowed us to capture the overarching themes and content associated with individual dreams more effectively.
§.§.§ Grouping topics into themes
To provide a concise overview of the 288 identified topics, we used clustering techniques to group them into broader, yet semantically-coherent themes. To achieve this, we used the Sentence Transformer model all-mpnet-base-v2 <cit.> to project each topic word into a 768-dimensional embedding space. For each topic t, every word w^t within that topic was assigned a probability (p_w^t) by BERTopic, indicating its contribution to the overall representation of the topic. To compute an embedding for a topic (t), we calculated the weighted sum of the embeddings of its topic words, where each word's embedding was weighted by its normalized probability (p_w^t):
t = ∑_w^t emb(w^t) · p_w^t ,
where emb is the sentence transformer model that maps the topic words into embeddings, and w^t are all the words associated with topic t.
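In code, this weighted sum is a short routine over the word embeddings; a minimal sketch (the re-normalization of the probabilities mirrors the step described in the Supplementary Information):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

emb = SentenceTransformer("all-mpnet-base-v2")

def topic_embedding(topic_words: list[str], word_probs: list[float]) -> np.ndarray:
    """Weighted sum of topic-word embeddings, per the equation above."""
    p = np.asarray(word_probs, dtype=float)
    p = p / p.sum()                    # re-normalize the word probabilities
    vecs = emb.encode(topic_words)     # shape (n_words, 768)
    return (vecs * p[:, None]).sum(axis=0)
```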
To facilitate semantic clustering of the topics, we standardized their embeddings and reduced their dimensionality from 768 to 10 using UMAP. We then explored two clustering algorithms, K-Means and DBSCAN, applied to the reduced topic embeddings. Through manual inspection, we observed that the K-Means clustering method effectively generated semantically coherent clusters, with most topics within each cluster sharing a cohesive theme. We also observed that K-Means with 20 clusters produced the result with the best quality. Further details on the analysis can be found in Section SI2.1.
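The reduction and clustering steps can be sketched as follows; the random seeds are our choice, and any hyperparameters not mentioned in the text are left at their library defaults.

```python
import umap
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# topic_embeddings: array of shape (288, 768) built with topic_embedding().
X = StandardScaler().fit_transform(topic_embeddings)              # standardize
X10 = umap.UMAP(n_components=10, random_state=0).fit_transform(X)  # 768 -> 10
theme_labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X10)
```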
§.§.§ Filtering non-dream content and adjusting themes
K-Means gave us 20 clusters that contained dreaming and non-dreaming topic clusters, as well as mixtures of both. Despite the potential inclusion of non-relevant elements in dream reports, BERTopic's remarkable semantic capabilities proved highly effective in distinguishing the actual recollections from other content. Leveraging this capability, we employed a filtering process to exclude non-dream themes and topics from our analysis.
To achieve this, we manually classified each theme into one of five categories:
* Dream-related content – encompassing elements such as dream locations, animals, and people.
* Dream types – including nightmares and recurring dreams.
* Dreaming-waking interface topics – covering experiences like waking up crying or feeling confused, as well as being awakened by an alarm.
* Waking phenomena – addressing aspects such as mental health issues and the date and time of the dream.
* Social media artifacts – encompassing elements like expressions of gratitude to the reader or requests for dream interpretation.
To assign these categories, we relied on the 10 topic words associated with each topic within a theme. In cases where necessary, we also consulted three representative dream sentences generated by BERTopic, along with 20 randomly sampled dream sentences. In most instances, the topics naturally fell into one of the predefined categories, allowing us to filter out entire themes comprising non-dream content.
We found three non-dream clusters: Dream types, Dreaming-waking interface topics, and Social media artifacts, as discussed above. Notably, Waking phenomena was not present as a coherent cluster; we had to manually inspect all 288 topics and their top corresponding dream sentences, which allowed us to separate out this category.
For example, from the theme Time, Time Travel, and Timelines, we excluded topics such as `5am currently clock checked phone' and `date 2018 June.' Similarly, in the theme Mental Reflection and Interactions, we omitted topics such as `interpret, does really mean' and `don remember details.'
After this filtering step, we ended up with the final 217 dream topics.
In addition, some composite clusters contained divergent dream themes (i.e., semantically dissimilar overarching themes). For instance, one cluster about the outdoor environment and space could be deconstructed into three themes: Outdoor locations, Space, and Weather. We manually split up such clusters and also carried out a minimal re-adjustment of topics into appropriate clusters. For example, our largest topic (0 lady, face, looked, head) was incorrectly present in a cluster that we later termed Other topics, which serves as a catch-all for topics that did not fit into any semantically coherent cluster; we moved this topic into People and relationships.
After carrying out this whole process, we ended up with 217 topics clustered into 22 dream themes.
§.§ Building dream topics taxonomy (co-occurrence network)
To create a taxonomy of dream topics, we employed a network-based approach that explores the interplay between dream themes and their constituent topics within dream reports. To facilitate this analysis, we conducted an initial assessment of the frequency distributions of both topics and themes across the entire corpus of dream reports.
For topics, we linked each topic to a dream report (/dreamer) if the topic was found at least once in the report (/in any of that dreamer's reports), and counted the number of dreams (/dreamers) associated with each topic. In Table <ref>, we present the topic name, the top 4 topic words, the number of dreams, and the number of dreamers for the top 20 most frequent topics. Full statistics (e.g., the 10 topic words and the number of sentences associated with each topic) for these and the rest of the topics can be found in the Supplementary Information File (see Data Availability section).
Similarly, we linked a dream report (/dreamer) to a theme if any of the theme's constituent topics was found at least once in the report (/in any of that dreamer's reports). We also computed the number of topics in each theme associated with the corresponding dream reports (dreamers). In Table <ref>, we present the theme name, the top 3 topics associated with it, the total number of topics in it, the number of dreams, and the number of dreamers associated with it. Additionally, we linked each theme to a corresponding HVdC category whenever such a link could be established. The full list of topics belonging to each theme can be found in Table SI1.
Having these frequencies at hand, we built the co-occurrence network of dream themes as follows. Each node in the network represents a theme, and pairs of themes were connected by an edge if they co-occurred in a dream report. The edges were weighted by the number of dream reports in which such a co-occurrence was found. This undirected network had a single connected component with 22 nodes and edge weights ranging from 13 to 7643 (Mean = 762.31 ± 928.54 and median = 482.0).
For the purpose of visualization, we used backboning to sparsify the network by preserving the most important edges. We used noise-corrected backboning <cit.>—a technique that relies on a statistical null-model to identify and prune non-salient edges—with a backboning threshold of 3.8 (which reduced the network from 231 to 46 edges). We used Gephi <cit.> to visualize this network (see Figure <ref>). We scaled the size of the nodes according to the number of dreams associated with each theme.
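The construction of the weighted co-occurrence network is straightforward with networkx; the sketch below substitutes a naive absolute-weight cutoff for the noise-corrected backboning actually used (the cutoff value here is arbitrary and purely illustrative, not the paper's null-model threshold of 3.8).

```python
import itertools
import networkx as nx

G = nx.Graph()
for themes in themes_per_dream:  # one set of theme names per dream report
    for a, b in itertools.combinations(sorted(set(themes)), 2):
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Illustration only: keep edges above an arbitrary weight cutoff. The paper
# instead applied noise-corrected backboning (Coscia & Neffke) at 3.8.
backbone = nx.Graph(
    (u, v, d) for u, v, d in G.edges(data=True) if d["weight"] >= 500
)
```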
§.§ Finding topics and themes by dream type (odds-ratio analysis)
In addition to discovering common topics across all dreams, we studied whether specific types of dreams (i.e., nightmares, lucid, vivid, and recurring dreams) are characterised by particular topics and themes.
The odds-ratio metric allowed us to do so, as it compares the odds of a topic occurring in a specific type of dream (e.g., recurring) to the odds of the same topic occurring in the rest of the dreams. We first assigned the dream reports to the corresponding dream types by searching for relevant keywords (‘nightmar’, ‘recurring’ or ‘re-occurring’, ‘lucid’, ‘vivid’) in the Reddit post title and body, or by the post's flair if the dream type had a corresponding one (flairs were present for nightmares and recurring dreams only). If either of these conditions was satisfied, we assigned the dream to the experimental subset; otherwise, to the control subset.
First, we defined a topic or theme t's importance in a dream d as:
I_t,d = # sentences mentioning topic t in d/# total sentences in d
We then computed the odds ratio for all topics and themes across the 4 dream types as follows:
Odds Ratio(DT, t) = Odds of DT association with t / Odds of the rest of dreams association with t
= [ ∑_d ∈ DT I_t,d / #(dreams in DT that do not contain t) ] / [ ∑_d ∉ DT I_t,d / #(dreams not in DT that do not contain t) ] ,
where DT is a dream type, t is a topic or theme.
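A direct translation of this definition into Python might look as follows; the container names are illustrative.

```python
def odds_ratio(importance, in_type, contains_topic):
    """Odds ratio of topic/theme t for dream type DT, per the equation above.

    importance[d]     : I_{t,d} for dream id d
    in_type[d]        : True if dream d belongs to the dream type DT
    contains_topic[d] : True if t occurs at least once in dream d
    """
    dt_imp = sum(v for d, v in importance.items() if in_type[d])
    dt_without = sum(1 for d in importance if in_type[d] and not contains_topic[d])
    rest_imp = sum(v for d, v in importance.items() if not in_type[d])
    rest_without = sum(1 for d in importance if not in_type[d] and not contains_topic[d])
    return (dt_imp / dt_without) / (rest_imp / rest_without)
```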
§.§ Finding topics and themes through time
Analyzing the temporal dynamics of dream topics and themes is particularly significant in light of major events such as the COVID-19 pandemic, which is known to have had a substantial impact on collective dreaming patterns <cit.>.
For our analysis, we focused on a monthly timescale, leveraging data spanning from January 2019 to August 2022, encompassing a total of 44 months. The inclusion criterion for each month required a minimum of 300 dream reports, ensuring robust statistical representation. Throughout this period, February 2019 recorded the lowest number of dreams (n = 366), whereas January 2021 exhibited the highest dream count (n = 1573). The average number of dreams per month was 925 ± 332, with a median of 935 dreams per month.
We used the topic/theme importance in a dream (I_t,d) introduced in Equation <ref> to calculate the topic/theme importance at a given point in time m (i.e., month) as follows:
I_t,m = ∑_(d posted at time m) I_t,d/# dreams posted at time m
We tracked the z-scores of the topic/theme importances I_t,m through time, to understand the relative change of a topic/theme with respect to itself:
z_I_t,m = I_t,m - μ_I_t,m/σ_I_t,m
To further improve the quality of the signals, we smoothed the temporal plots with a centered moving average of window length 5:
z̄_I_t,m = (1/5) ∑_m'=m-2^m+2 z_I_t,m' .
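In pandas, both steps amount to two lines; a sketch for one topic or theme:

```python
import pandas as pd

# monthly: pd.Series of I_{t,m} values indexed by month for one topic/theme t.
z = (monthly - monthly.mean()) / monthly.std()          # z-score over time
z_smooth = z.rolling(window=5, center=True).mean()      # centered 5-month average
```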
§ REFERENCES

Mota-Rolim, S. A. et al. The dream of god: how do religion and science see lucid dreaming and other conscious states during sleep? Frontiers in Psychology 11, 555731 (2020).
Schredl, M. Characteristics and contents of dreams. International Review of Neurobiology 92, 135–154 (2010).
Hobson, J. A. & McCarley, R. W. The brain as a dream state generator: an activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry (1977).
Hartmann, E. Making connections in a safe place: Is dreaming psychotherapy? Dreaming 5, 213 (1995).
Eagleman, D. M. & Vaughn, D. A. The defensive activation theory: REM sleep as a mechanism to prevent takeover of the visual cortex. Frontiers in Neuroscience 15, 632853 (2021).
Crick, F. & Mitchison, G. The function of dream sleep. Nature 304, 111–114 (1983).
Hoel, E. The overfitted brain: Dreams evolved to assist generalization. Patterns 2, 100244 (2021).
Schredl, M. Dream Content Analysis: Basic Principles (Universitätsbibliothek der Universität Heidelberg, 2010).
Payne, J. D. Memory consolidation, the diurnal rhythm of cortisol, and the nature of dreams: a new hypothesis. International Review of Neurobiology 92, 101–134 (2010).
Wamsley, E. J., Tucker, M., Payne, J. D., Benavides, J. A. & Stickgold, R. Dreaming of a learning task is associated with enhanced sleep-dependent memory consolidation. Current Biology 20, 850–855 (2010).
Revonsuo, A. & Salmivalli, C. A content analysis of bizarre elements in dreams. Dreaming 5, 169 (1995).
Beck, A. T. & Ward, C. H. Dreams of depressed patients: Characteristic themes in manifest content. Archives of General Psychiatry 5, 462–467 (1961).
Fogli, A., Maria Aiello, L. & Quercia, D. Our dreams, our selves: automatic analysis of dream reports. Royal Society Open Science 7, 192080 (2020).
Šćepanović, S., Aiello, L. M., Barrett, D. & Quercia, D. Epidemic dreams: dreaming about health during the covid-19 pandemic. Royal Society Open Science 9, 211080 (2022).
Winget, C. & Kramer, M. Dimensions of Dreams (University of Florida, 1979).
Hall, C. S. & Van de Castle, R. L. The Content Analysis of Dreams (Appleton-Century-Crofts, 1966).
Elce, V., Handjaras, G. & Bernardi, G. The language of dreams: application of linguistics-based approaches for the automated analysis of dream experiences. Clocks & Sleep 3, 495–514 (2021).
Colace, C. Dream bizarreness reconsidered. Sleep and Hypnosis 5, 105–128 (2003).
Picard-Deland, C., Nielsen, T. & Carr, M. Dreaming of the sleep lab. PLoS ONE 16, e0257738 (2021).
Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
Baumgartner, J., Zannettou, S., Keegan, B., Squire, M. & Blackburn, J. The Pushshift Reddit dataset. In Proceedings of the International AAAI Conference on Web and Social Media, vol. 14, 830–839 (2020).
Keller, J. W. et al. Use of dreams in therapy: A survey of clinicians in private practice. Psychological Reports 76, 1288–1290 (1995).
Grootendorst, M. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794 (2022).
Domhoff, G. W. The Scientific Study of Dreams: Neural Networks, Cognitive Development, and Content Analysis (American Psychological Association, 2003).
Hargittai, E. Potential biases in big data: Omitted voices on social media. Social Science Computer Review 38, 10–24 (2020).
Siclari, F., LaRocque, J. J., Postle, B. R. & Tononi, G. Assessing sleep consciousness within subjects using a serial awakening paradigm. Frontiers in Psychology 4, 542 (2013).
Nemeth, G. The route to recall a dream: Theoretical considerations and methodological implications. Psychological Research, 1–24 (2022).
Schredl, M. Questionnaires and diaries as research instruments in dream research: Methodological issues. Dreaming 12, 17–26 (2002).
Sanz, C., Zamberlan, F., Erowid, E., Erowid, F. & Tagliazucchi, E. The experience elicited by hallucinogens presents the highest similarity to dreaming within a large database of psychoactive substance reports. Frontiers in Neuroscience 12, 7 (2018).
Nielsen, T. A. et al. The typical dreams of Canadian university students. Dreaming 13, 211 (2003).
Yu, C. K.-C. Pornography consumption and sexual behaviors as correlates of erotic dreams and nocturnal emissions. Dreaming (2012).
Mathes, J., Schredl, M. & Göritz, A. S. Frequency of typical dream themes in most recent dreams: An online study. Dreaming 24, 57 (2014).
Gieselmann, A. et al. Aetiology and treatment of nightmare disorder: State of the art and future perspectives. Journal of Sleep Research 28, e12820 (2019).
Campbell, R. L. & Germain, A. Nightmares and posttraumatic stress disorder (PTSD). Current Sleep Medicine Reports 2, 74–80 (2016).
Sheaves, B., Rek, S. & Freeman, D. Nightmares and psychiatric symptoms: A systematic review of longitudinal, experimental, and clinical trial studies. Clinical Psychology Review, 102241 (2022).
Phelps, A. J., Forbes, D. & Creamer, M. Understanding posttraumatic nightmares: An empirical and conceptual review. Clinical Psychology Review 28, 338–355 (2008).
Krakow, B. et al. Nightmare frequency in sexual assault survivors with PTSD. Journal of Anxiety Disorders 16, 175–190 (2002).
Rousseau, A. & Belleville, G. The mechanisms of action underlying the efficacy of psychological nightmare treatments: A systematic review and thematic analysis of discussed hypotheses. Sleep Medicine Reviews 39, 122–133 (2018).
Mallett, R. et al. The relationship between dreams and subsequent morning mood using self-reports and text analysis. Affective Science, 1–6 (2021).
Blagrove, M., Farmer, L. & Williams, E. The relationship of nightmare frequency and nightmare distress to well-being. Journal of Sleep Research 13, 129–136 (2004).
Denis, D., French, C. C. & Gregory, A. M. A systematic review of variables associated with sleep paralysis. Sleep Medicine Reviews 38, 141–157 (2018).
Bulkeley, K. & Hartmann, E. Big dreams: An analysis using central image intensity, content analysis, and word searches. Dreaming 21, 157 (2011).
Wamsley, E., Donjacour, C. E., Scammell, T. E., Lammers, G. J. & Stickgold, R. Delusional confusion of dreaming and reality in narcolepsy. Sleep 37, 419–422 (2014).
Holden, K. J. & French, C. C. Alien abduction experiences: Some clues from neuropsychology and neuropsychiatry. Cognitive Neuropsychiatry 7, 163–178 (2002).
Mallett, R. et al. Exploring the range of reported dream lucidity. Philosophy and the Mind Sciences 2, 1–23 (2021).
Windt, J. M. & Voss, U. Spontaneous thought, insight, and control in lucid dreams. In The Oxford Handbook of Spontaneous Thought: Mind-Wandering, Creativity, and Dreaming, 385–410 (2018).
Picard-Deland, C., Pastor, M., Solomonova, E., Paquette, T. & Nielsen, T. Flying dreams stimulated by an immersive virtual reality task. Consciousness and Cognition 83, 102958 (2020).
Mainieri, G. et al. Are sleep paralysis and false awakenings different from REM sleep and from lucid REM sleep? A spectral EEG analysis. Journal of Clinical Sleep Medicine 17, 719–727 (2021).
Mallett, R. Partial memory reinstatement while (lucid) dreaming to change the dream environment. Consciousness and Cognition 83, 102974 (2020).
Schredl, M., Fuchs, C. & Mallett, R. Differences between lucid and nonlucid dream reports: A within-subjects design. Dreaming (2022).
Voss, U., Schermelleh-Engel, K., Windt, J., Frenzel, C. & Hobson, A. Measuring consciousness in dreams: the lucidity and consciousness in dreams scale. Consciousness and Cognition 22, 8–21 (2013).
Mallett, R., Sowin, L., Raider, R., Konkoly, K. R. & Paller, K. A. Benefits and concerns of seeking and experiencing lucid dreams: benefits are tied to successful induction and dream control. Sleep Advances 3, zpac027 (2022).
Schredl, M. & Bulkeley, K. Lucid nightmares: An exploratory online study. International Journal of Dream Research, 215–219 (2020).
Stumbrys, T. Lucid nightmares: A survey of their frequency, features, and factors in lucid dreamers. Dreaming 28, 193 (2018).
Ouchene, R., El Habchi, N., Demina, A., Petit, B. & Trojak, B. The effectiveness of lucid dreaming therapy in patients with nightmares: A systematic review. L'Encéphale (2023).
Vallat, R., Eskinazi, M., Nicolas, A. & Ruby, P. Sleep and dream habits in a sample of French college students who report no sleep disorders. Journal of Sleep Research 27, e12659 (2018).
Zadra, A. Recurrent dreams and their relation to life events and well-being. In Trauma and Dreams, 231–247 (1996).
Robbins, P. R. & Tanck, R. H. A comparison of recurrent dreams reported from childhood and recent recurrent dreams. Imagination, Cognition and Personality 11, 259–262 (1992).
Weinstein, N., Campbell, R. & Vansteenkiste, M. Linking psychological need experiences to daily and recurring dreams. Motivation and Emotion 42, 50–63 (2018).
Duke, T. & Davidson, J. Ordinary and recurrent dream recall of active, past and non-recurrent dreamers during and after academic stress. Dreaming 12, 185–197 (2002).
Propper, R. E., Stickgold, R., Keeley, R. & Christman, S. D. Is television traumatic? Dreams, stress, and media exposure in the aftermath of September 11, 2001. Psychological Science 18, 334–340 (2007).
Gorgoni, M., Scarpelli, S., Alfonsi, V. & De Gennaro, L. Dreaming during the covid-19 pandemic: A narrative review. Neuroscience & Biobehavioral Reviews, 104710 (2022).
Margherita, G. & Caffieri, A. An observatory on changes in dreaming during a pandemic: a living systematic review (part 1). Journal of Sleep Research, e13742 (2022).
Schredl, M. et al. Continuity between waking and dreaming: A proposal for a mathematical model. Sleep and Hypnosis 5, 38–52 (2003).
Tuominen, J., Olkoniemi, H., Revonsuo, A. & Valli, K. ‘No man is an island’: Effects of social seclusion on social dream content and REM sleep. British Journal of Psychology 113, 84–104 (2022).
Domhoff, G. W. The invasion of the concept snatchers: The origins, distortions, and future of the continuity hypothesis. Dreaming 27, 14 (2017).
Kauhanen, L. et al. A systematic review of the mental health changes of children and young people before and during the covid-19 pandemic. European Child & Adolescent Psychiatry, 1–19 (2022).
Kelly, P. et al. Lucid dreaming increased during the covid-19 pandemic: An online survey. PLoS ONE 17, e0273281 (2022).
Solomonova, E. et al. Stuck in a lockdown: Dreams, bad dreams, nightmares, and their relationship to stress, depression and anxiety during the covid-19 pandemic. PLoS ONE 16, e0259040 (2021).
Watson, D. To dream, perchance to remember: Individual differences in dream recall. Personality and Individual Differences 34, 1271–1286 (2003).
Coscia, M. & Neffke, F. M. Network backboning with noisy data. In 2017 IEEE 33rd International Conference on Data Engineering (ICDE), 425–436 (IEEE, 2017).
Bastian, M., Heymann, S. & Jacomy, M. Gephi: an open source software for exploring and manipulating networks. In Proceedings of the International AAAI Conference on Web and Social Media, vol. 3, 361–362 (2009).
Barrett, D. Dreams about covid-19 versus normative dreams: Trends by gender. Dreaming 30, 216 (2020).
Schredl, M. & Bulkeley, K. Dreaming and the covid-19 pandemic: A survey in a US sample. Dreaming 30, 189 (2020).
§ ACKNOWLEDGEMENTS
L.M.A acknowledges the support from the Carlsberg Foundation through the COCOONS project (CF21-0432). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
§ AUTHOR CONTRIBUTIONS STATEMENT
A.D., S.Š, and L.M.A. conceived and designed the experiment(s), A.D. conducted the experiment(s), A.D., S.Š, L.M.A., R.M., D.B., and D.Q. analysed the results. All authors wrote and reviewed the manuscript.
§ DATA AVAILABILITY STATEMENT
The data generated in this study is publicly available and can be accessed at https://doi.org/10.6084/m9.figshare.23618064.v2https://doi.org/10.6084/m9.figshare.23618064.v2.
§ SUPPLEMENTARY INFORMATION
§ DATA STATISTICS (EXTENDED)
Figs. <ref>, <ref>, <ref> & <ref> provide additional insights into the dream reports. We were able to geolocate 4.04% (n=1784) of all dream reports, distributed amongst 3.22% (n=1423) of users, at the US state level. Fig. <ref> demonstrates the representative coverage of our data in all 50 US states, using the aforementioned data subset.
§ METHODS & IMPLEMENTATION DETAILS (EXTENDED)
§.§ Details of Topic Modelling
BERTopic embeds all the documents into vectors, projects them onto a lower-dimensional space using UMAP, clusters the reduced embeddings using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise), and finally generates the topic names using a class-based variation of TF-IDF, namely c-TF-IDF.
In the application of BERTopic (with the all-mpnet-base-v2 sentence transformer as the embedding model), we set the hyperparameters such that the minimum number of dream sentences required to form a topic is 100 (min_topic_size = 100) and similar topics are automatically merged together via HDBSCAN (nr_topics = `auto').
After fitting the model, we used its functionality to update the topic representation, i.e., to modify the topic words so as to remove English stop words and to take into account both unigrams and bigrams as topic words/phrases. This is particularly important for two-word phrases like sleep paralysis, recurring dreams, and serial killers, which we found quite frequently mentioned in the extracted dataset of topics.
The probabilities of all topic words for each topic sum to 1; however, since we removed the “dream” and “dreams” topic words, we had to re-normalize the topic word probabilities to ensure they summed to 1.
Fig. <ref> captures the distribution of the number of dream reports, dreamers, and dream sentences linked to a topic. There were 1982 (4.48%) dreams, distributed amongst 1277 dreamers, which were not associated with any topic at all, while 4978 (11.26%) dreams amongst 3416 dreamers were not associated with any dreaming topics or themes.
Fig. <ref> shows the plots used to select the number of clusters after applying K-Means to the topic embeddings.
§.§ Details of building dream topics taxonomy (co-occurrence network)
Fig. <ref> shows the plot used to determine the backboning threshold for visualizing the theme co-occurrence network.
§.§ Details of finding topics and themes through time
We did not include dreams from September 2022 (the last month in our dataset) in the temporal analyses, as we only collected data up until 7 September 2022. Manual inspection revealed that, during 2018, the temporal curves for the raw z-scores were too noisy to accurately infer anything; hence we further restricted our analyses to January 2019 through August 2022. For July and August 2022, since we did not have data from the subsequent months for smoothing, we used min_periods = 0 for the rolling function in Pandas, which computes the average while excluding the missing subsequent months. For calculating the smoothed z-scores of January and February 2019, we leveraged the values from November and December 2018 prior to discarding them, as discussed above.
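The edge handling described here corresponds to the following pandas call (a sketch; z is the monthly z-score series from Methods):

```python
# Centered 5-month moving average; min_periods=0 lets the mean be computed at
# the series edges from however many months are actually available.
z_smooth = z.rolling(window=5, center=True, min_periods=0).mean()
```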
§ RESULTS (EXTENDED)
Table <ref> provides an exhaustive list of dream topics and themes they belong to; as discovered by our proposed methodology.
|
http://arxiv.org/abs/2307.07431v1 | 20230714155338 | Quasinormal modes of rotating black holes in higher-derivative gravity | [
"Pablo A. Cano",
"Kwinten Fransen",
"Thomas Hertog",
"Simon Maenaut"
] | gr-qc | [
"gr-qc",
"astro-ph.HE",
"hep-th"
] |
=1
|
http://arxiv.org/abs/2307.04272v1 | 20230709215943 | Nuclear-spin-dependent corrections to the transition polarizability in cesium | [
"D. Xiao",
"H. B. Tran Tan",
"A. Derevianko"
] | physics.atom-ph | [
"physics.atom-ph"
] |
The Stark-interference technique is commonly used to amplify the feeble parity-violating signal in atomic experiments. As a result, interpretation of these experiments in terms of electroweak observables requires knowledge of the Stark-induced E1 transition amplitudes or, equivalently, transition polarizabilities.
While the literature assumes that these transition polarizabilities do not depend on the nuclear spin, here we prove the contrary. The nuclear spin dependence arises due to hyperfine mixing of atomic states and requires a third-order perturbation theory (one hyperfine interaction and two electric-dipole interactions) treatment.
We demonstrate that the so far neglected tensor contribution appears in the transition polarizability and present numerical results for the nuclear-spin-dependent corrections to the 6S_1/2→7S_1/2 transition polarizability in ^133Cs. We investigate the effect of these corrections to transition polarizabilities on the extraction of the ^133Cs anapole moment from the Boulder experiment [Science 275, 1759 (1997)]. We also consider their effect on the extraction of the ratio between the scalar and vector transition polarizabilities from the measurements [Phys. Rev. A 55, 2 (1997)]. While the corrections are minor at the current level of experimental accuracy, our analysis provides a framework for future experiments.
D. Xiao, H. B. Tran Tan, and A. Derevianko
Department of Physics, University of Nevada, Reno, 89557, USA
(corresponding author: [email protected])

Nuclear-spin-dependent corrections to the transition polarizability in cesium

August 12, 2023
§ INTRODUCTION
In 1988, an experiment performed by the Boulder group <cit.> provided the first evidence of the nuclear-spin-dependent parity-non-conserving (PNC) interactions in ^133Cs atom, which later led to the discovery of the ^133Cs nuclear anapole moment <cit.>.
However, the extracted nuclear anapole moment <cit.> has been found to disagree with the nuclear-physics determination <cit.>.
In nuclear physics, to bridge different manifestations of PNC, theorists operate in terms of the weak meson-nucleon couplings <cit.>.
The weak meson-nucleon couplings propagate through
the nuclear structure evaluation of anapole moments <cit.> and other nuclear processes. Linear combinations of these couplings can be constrained by comparison with available experimental data and theoretical
estimates within the Standard Model framework.
In particular, the nuclear physics constraints come from the scattering of polarized protons on unpolarized
protons and ^4He targets as well as the emission of circularly polarized photons from ^18F and ^19F nuclei. These constraints form nuclear experimental bands whose intersection yields the nuclear physics determinations of the couplings. However, the bounds derived from the measured Cs anapole moment lie outside this nuclear physics favored region. Our paper is motivated in part by this tension between the nuclear and atomic physics determinations of the weak meson-nucleon couplings.
The anapole moment is extracted from the difference between the
two measured PNC amplitudes E1_PNC connecting different hyperfine components of the ground 6S_1/2 and the excited 7S_1/2 states in ^133Cs. The Boulder results read <cit.>
Im(E1_PNC)/β=
-1.6349(80) mV/cm
for 6S_1/2, F_i=4→7S_1/2, F_f=3 ,
-1.5576(77) mV/cm
for 6S_1/2, F_i=3→7S_1/2, F_f=4 .
Here, F is the grand total angular momentum in Cs formed by adding the nuclear spin I=7/2 and the total electronic angular momentum J, and
β is the vector transition polarizability.
A weighted average of the two values in Eq. (<ref>) yields the nuclear-spin-independent electroweak observable (the weak charge), while their difference yields the nuclear-spin-dependent quantity (the nuclear anapole moment).
Notice the appearance of the vector transition polarizability β in the results (<ref>), as the Boulder group used the Stark-interference technique <cit.>. This technique amplifies the feeble PNC effect by the means of an externally applied DC electric field which opens an additional Stark-induced excitation pathway for the nominally E1-forbidden
6S_1/2→ 7S_1/2 transition. Then the transition rate acquires a cross-term between the Stark-induced and PNC amplitudes. This interference term flips sign under parity reversals enabling its experimental extraction.
One of the assumptions made in the Boulder analysis is that β does not depend on the nuclear spin. Contrary to this assumption, here we identify nuclear spin-dependent corrections to the Stark-induced transition amplitudes or, equivalently, to the transition polarizabilities (β in particular).
While the effects of our newly-introduced corrections turn out to be negligible at the Boulder experiment's level of accuracy, our analysis provides a framework for future experimental efforts.
The paper is organized as follows. In Sec. <ref>, we review the Stark-interference technique and derive the second-order transition polarizabilities. The hyperfine-mediated corrections to the transition polarizabilities are derived in Sec. <ref> and numerically evaluated in Sec. <ref>. Our reanalysis of the Boulder APV experiment <cit.> is given in Sec. <ref>. We also compute correction to the experimentally extracted ratio of the vector and scalar transition polarizabilities in Sec. <ref>. While we keep the discussion sufficiently general, all our numerical work refers to the 6S_1/2→ 7S_1/2 transition in ^133Cs.
Unless stated otherwise, atomic units are used throughout.
§ GENERALIZATION OF STARK-INDUCED TRANSITION POLARIZABILITY
We are interested in driving an electric-dipole transition from an initial state i to a final state f. We assume that these states are of the same parity, precluding E1 transitions. To open the otherwise forbidden E1 pathway, we apply a DC electric field which admixes intermediate states of opposite parity into i and f <cit.>.
The relevant amplitude for the resulting E1 transition between such mixed states can be derived in the second order of perturbation theory (see Ref. <cit.> for a detailed derivation). The two perturbations are the electric dipole interactions with the applied DC and driving laser fields.
The Stark-induced transition amplitude A_i→f is conventionally expressed in terms of the transition polarizability a_i→f as A_i→f= a_i→fℰ_s ℰ_L,
which factors out ℰ_s and ℰ_L, the static and laser field amplitudes, respectively. The transition polarizability for the transitions between two S_1/2 states is conventionally parameterized as <cit.>
a_i→f = α (ε̂·ê) δ_F_iF_f δ_M_iM_f + i β (ê×ε̂)·⟨f|σ|i⟩ .
Here, the two atomic-structure-dependent quantities α and β are the scalar and the vector transition polarizabilities. The unit vectors ε̂ and ê characterize the polarizations of the laser and static electric fields, respectively. The states i and f are hyperfine basis states; e.g., |i⟩ = |n_i(IJ_i)F_iM_i⟩ is a state of grand total angular momentum F_i obtained by the conventional coupling of the total electron angular momentum J_i and the nuclear spin I, with M_i and n_i being the magnetic and principal quantum numbers. The matrix element of the Pauli matrices σ is understood as involving the angular parts of the wavefunctions.
Qualitatively, Eq. (<ref>) is obtained <cit.> in the second order of perturbation theory by recoupling the product of two dipole couplings (D·ε̂)(D·ê) into a sum over the irreducible tensor operators (ITO) containing scalar products of compound tensors[A scalar product of two rank-k ITOs is understood as
P^(k)·Q^(k) = ∑_q=-k^k (-1)^q P^(k)_q Q^(k)_-q ,
and a compound ITO of rank Q is defined as
{P^(k_1)⊗R^(k_2)}_q^(Q) = ∑_q_1q_2 C^Qq_k_1q_1k_2q_2 P^(k_1)_q_1 R^(k_2)_q_2 ,
where q_1 and q_2 label the spherical-basis components of the ITOs, with C^Qq_k_1q_1k_2q_2 being the conventional Clebsch-Gordan coefficients.]
(ε̂⊗ê)^(Q)·(D⊗D)^(Q). Here, D is the electron electric dipole moment operator.
Based on the angular selection rules, the rank Q can take the values 0, 1, and 2, corresponding to the scalar, vector, and tensor contributions. Heretofore, analyses of the 6S_1/2→ 7S_1/2 transition polarizability in Cs have neglected the tensor (Q=2) contribution. The reason for this is that the dipole operators involve only electronic degrees of freedom, and the matrix element of a rank-2 tensor between S_1/2 states vanishes due to the angular selection rules. However, once we account for the hyperfine interaction (HFI), the states involved must be characterized by the grand-total angular momentum F, and the tensor contribution no longer vanishes, since F=3 or 4 for the hyperfine manifolds attached to the S_1/2 electronic states in ^133Cs. Notice that the inclusion of the HFI requires a third-order perturbation-theory treatment and therefore leads to the tensor contribution being suppressed compared to the scalar and vector contributions.
The tensor contribution to Eq. (<ref>) can be parameterized as
a_i→f = ... + γ ⟨f|{I⊗I}^(2)|i⟩·(ε̂⊗ê)^(2) ,
where our newly-introduced tensor transition polarizability γ depends on both the nuclear and the electronic structure. We have introduced an auxiliary rank-2 tensor {I⊗I}^(2) in front of the tensor polarizability to factor out the dependence on magnetic quantum numbers. Combined with this tensor term,
Eq. (<ref>) is the most general parametrization of the transition polarizability as long as we only keep interactions linear in the static and laser fields. It is worth noting that in the second order, due to a particular selection of prefactors in Eq. (<ref>), α and β do not depend on the hyperfine components of the initial and final states. We will demonstrate that the HFI-mediated corrections would introduce the F_i- and F_f-dependence to the scalar and vector polarizabilities.
Based on these arguments, and taking into account the fact that the HFI is a scalar (see the discussion in Sec. <ref>),
we rewrite Eq. (<ref>) in the following generalized form that now includes the tensor contribution (<ref>), as well as the F-dependence of the scalar and vector polarizabilities
a_i→f = -√(3(2F_f+1)) w_0(ε̂,ê) α^F_i→F_f δ_F_iF_f δ_M_iM_f
- √2 ⟨f||σ||i⟩ w_1(ε̂,ê) β^F_i→F_f
+ ⟨f||{I⊗I}^(2)||i⟩ w_2(ε̂,ê) γ^F_i→F_f ,
where we have used the Wigner-Eckart theorem and introduced the multipolar polarization weights <cit.>
w_Q(ε̂,ê) = (-1)^Q ∑_M_Q (-1)^M_Q+F_f-M_f ( F_f Q F_i ; -M_f -M_Q M_i ) (ε̂⊗ê)^(Q)_M_Q ,
where the array in parentheses is a Wigner 3j symbol, and M_f, M_Q, and M_i are the magnetic quantum numbers. Note that the selection rules fix the value of M_Q = M_i - M_f.
The compound tensors of rank Q for the two unit vectors ε̂ and ê are understood as
(ε̂⊗ê)^(Q)_M_Q = ∑_μν C^Q M_Q_1μ1ν ε̂_μ ê_ν ,
where C^Q M_Q_1μ1ν are Clebsch-Gordan coefficients, and the A_μ components of a vector A in the spherical (or helicity) basis are expressed in terms of its Cartesian components as <cit.>
A_0 = A_z , A_+1 = -(A_x + i A_y)/√(2) , A_-1 = (A_x - i A_y)/√(2) .
In particular, the combinations of polarization vectors are (ε̂⊗ê)^(0) = -(ε̂·ê)/√(3) and (ε̂⊗ê)^(1) = i(ε̂×ê)/√(2), in agreement with Eq. (<ref>). We will consider the relevant components of the rank-2 tensor (ε̂⊗ê)^(2) in Sec. <ref>.
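As a numerical cross-check of these conventions, the compound components can be evaluated with sympy's Clebsch-Gordan coefficients; this sketch simply verifies the Q=0 and Q=1 identities quoted above.

```python
import numpy as np
from sympy.physics.quantum.cg import CG

def spherical(v):
    """Cartesian components (vx, vy, vz) -> spherical components {mu: A_mu}."""
    vx, vy, vz = v
    return {-1: (vx - 1j*vy)/np.sqrt(2), 0: vz + 0j, +1: -(vx + 1j*vy)/np.sqrt(2)}

def compound(eps, e, Q, MQ):
    """(eps (x) e)^(Q)_MQ = sum over mu, nu of C^{Q MQ}_{1 mu 1 nu} eps_mu e_nu."""
    a, b = spherical(eps), spherical(e)
    total = 0j
    for mu in (-1, 0, 1):
        nu = MQ - mu
        if abs(nu) <= 1:
            total += float(CG(1, mu, 1, nu, Q, MQ).doit()) * a[mu] * b[nu]
    return total

# (z (x) z)^(0)_0 = -1/sqrt(3);  (x (x) y)^(1)_0 = i/sqrt(2)
print(compound((0, 0, 1), (0, 0, 1), 0, 0))   # ~ -0.5774
print(compound((1, 0, 0), (0, 1, 0), 1, 0))   # ~ 0.7071j
```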
Here, we note that the reduced matrix element of the auxiliary rank-2 tensor {I⊗I}^(2) present in Eq. (<ref>) is given by
⟨f||{I⊗I}^(2)||i⟩ = (-1)^2F_i-F_f+I-J_f √(5) [F_f,F_i]^1/2 I(I+1)[I] {1 1 2; I I I} δ_J_iJ_f ,
where the curly array {1 1 2; I I I} is a Wigner 6j symbol and [J_1, J_2, ... J_n] ≡ (2J_1+1)(2J_2+1)…(2J_n+1).
For our target 6S_1/2→ 7S_1/2 transition in ^133Cs, the above expression evaluates to
⟨F_f||{I⊗I}^(2)||F_i⟩ = (-1)^F_f 6√(35) [F_f,F_i]^1/2 ,
which, for the special case where F_f,i = 3, 4, gives
⟨3||{I⊗I}^(2)||3⟩ = -42 √(35) ,
⟨4||{I⊗I}^(2)||4⟩ = 54 √(35) ,
⟨3||{I⊗I}^(2)||4⟩ = -126 √(5) ,
⟨4||{I⊗I}^(2)||3⟩ = 126 √(5) .
We will need these values in our analysis for ^133Cs.
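These reduced matrix elements are easy to verify symbolically; a minimal sketch using sympy's Wigner 6j routine, assuming I = 7/2 and J_i = J_f = 1/2 as in the text:

```python
from sympy import Rational, sqrt
from sympy.physics.wigner import wigner_6j

I = Rational(7, 2)   # 133Cs nuclear spin
J = Rational(1, 2)   # J_i = J_f for the S_1/2 -> S_1/2 transition

def II2_reduced(Ff, Fi):
    """<F_f || {I (x) I}^(2) || F_i> from the formula above."""
    phase = (-1) ** (2*Fi - Ff + I - J)
    return (phase * sqrt(5) * sqrt((2*Ff + 1) * (2*Fi + 1))
            * I * (I + 1) * (2*I + 1) * wigner_6j(1, 1, 2, I, I, I))

for Ff, Fi in [(3, 3), (4, 4), (3, 4), (4, 3)]:
    print(Ff, Fi, II2_reduced(Ff, Fi))
# Prints -42*sqrt(35), 54*sqrt(35), -126*sqrt(5), 126*sqrt(5), matching the values above.
```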
To reiterate, due to the HFI, the scalar, vector, and tensor transition polarizabilities entering Eq. (<ref>) have an F-dependence of the form (X=α,β,γ)
X^F_i → F_f = X^[2] + δ X^F_i → F_f ,
where the second-order term X^[2] is F-independent. For the S_1/2→ S_1/2 transitions, γ^[2]≡ 0, thereby
γ^F_i → F_f =δγ^F_i → F_f. Expressions for the second-order scalar and vector transition polarizabilities, α^[2] and β^[2], can be found, e.g., in Ref. <cit.>. Substantial attention <cit.> has been paid over the years to determining their accurate values since they are required for interpreting the results of APV experiments. As the reference values for the second-order polarizabilities for the 6S_1/2→ 7S_1/2 transition in ^133Cs, we use values computed recently by our group <cit.>
α^[2] = -266.31(23) ,
β^[2] = 26.912(30) ,
γ^[2] = 0 .
These values are in atomic units, a_0^3, where a_0 is the Bohr radius.
We now proceed to the derivation of the hyperfine corrections δα^F_i → F_f, δβ^F_i → F_f, and δγ^F_i → F_f to transition polarizabilities.
§ HYPERFINE CORRECTIONS TO TRANSITION POLARIZABILITIES
To evaluate the hyperfine-mediated corrections to the transition polarizability, we follow the third-order formalism developed in Refs. <cit.>. References <cit.> computed the static differential polarizabilities for transitions between levels of the hyperfine manifold attached to the S_1/2 ground state. Reference <cit.>
generalized that formalism to the evaluation of dynamic (AC) polarizabilities.
These papers focused on the characterization of clock shifts, which formally map into the evaluation of the diagonal matrix elements of the transition amplitude. Here we further generalize our earlier formalism and consider off-diagonal matrix elements of the transition amplitude. In the context of APV, Ref. <cit.> has considered transition polarizabilities (including tensor contribution) for transitions between hyperfine components attached to the Cs ground state.
The four relevant diagrams representing third-order contributions to the i→ f transition amplitude, top (T), center (C), bottom (B), and residual (R), are shown in Fig. <ref>, with each diagram involving one hyperfine interaction and two E1 interactions (one with the laser, and another one with the static field). These diagrams are named after the position of the hyperfine interaction in the string of three operators. Explicitly, these terms read
T_i→f = ∑_ab V_fa^HFI (ε̂·D_ab)(ê·D_bi) / (ΔE_fa ΔE_ib)
+ ∑_ab V_fa^HFI (ê·D_ab)(ε̂·D_bi) / (ΔE_fa ΔE_fb) ,
B_i→f = ∑_ab (ε̂·D_fa)(ê·D_ab) V_bi^HFI / (ΔE_ia ΔE_ib)
+ ∑_ab (ê·D_fa)(ε̂·D_ab) V_bi^HFI / (ΔE_fa ΔE_ib) ,
C_i→f = ∑_ab (ε̂·D_fa) V_ab^HFI (ê·D_bi) / (ΔE_ia ΔE_ib)
+ ∑_ab (ê·D_fa) V_ab^HFI (ε̂·D_bi) / (ΔE_fa ΔE_fb) ,
R_i→f = -V_ii^HFI ∑_a (ε̂·D_fa)(ê·D_ai) / (ΔE_ia)^2
- V_ff^HFI ∑_a (ê·D_fa)(ε̂·D_ai) / (ΔE_fa)^2 ,
where Δ E_ij≡ E_i-E_j.
Note that the two terms inside each expression differ by the swap of the two polarization vectors ε̂ and ê; otherwise, the structure of the terms is similar. Further, the bottom and top diagrams are related as B_i→f = (T_f→i)^*.
Before carrying out the angular reduction of the expressions above, we briefly review the hyperfine interaction present in Eqs. (<ref>).
Following notation of Ref. <cit.>,
the interaction of electrons with nuclear multipolar moments may be expressed as
V^HFI=∑_N𝒯^(N)·𝒩^(N) ,
where the rank-N tensors 𝒯^(N) act in the electron space, and 𝒩^(N) act in the nuclear space. Note that V^HFI is a scalar ITO.
The nuclear reduced matrix elements ⟨γI||𝒩^(N)||γI⟩ are expressed in terms of the conventional nuclear magnetic-dipole (M1) moment μ and electric-quadrupole (E2) moment Q as
⟨γI||𝒩^(1)||γI⟩ = √((2I+1)(I+1)/I) μ ,
⟨γI||𝒩^(2)||γI⟩ = √((2I+1)(I+1)(2I+3) / (4I(2I-1))) Q .
Here the magnetic-dipole moment μ≡ g_I Iμ_N with μ_N being the nuclear magneton and g_I being the gyromagnetic ratio. For ^133Cs, g_I=0.73714.
As for the nuclear electric-quadrupole moment Q, the measured hyperfine constant B can be used to extract its value using theoretical values of the hyperfine electronic matrix elements. However, different measurements of B yield different determinations. For instance, the measured <cit.> hyperfine constant B of the ^133Cs 6P_3/2 state is -0.4934(17) MHz, which differs from a more recent result <cit.>, -0.5266(57) MHz, by about 7%.
Because the uncertainty in B of Ref. <cit.> is smaller, we simply adopt the value Q = -3.55(4) mbarn therefrom.
Moreover, we find that the nuclear quadrupole contributions to the transition polarizabilities are suppressed compared to those due to the magnetic-dipole hyperfine interaction.
For the same reason, we neglect even higher-rank nuclear multipoles, such as the poorly known magnetic octupole moment <cit.>, due to their diminishing role as compared to the magnetic-dipole contribution.
To flesh out the tensorial structure of the transition polarizability resulting from the diagrams (<ref>), we use the same re-coupling angular momentum algebra technique as in our derivation of the second-order expressions <cit.>.
Since the HFI is a scalar ITO, the resulting tensorial structure of the transition polarizability is indeed given by Eq. (<ref>).
The hyperfine corrections to transition polarizabilities are therefore given by
δα^F_i→F_f = -⟨f||T^(0)+B^(0)+C^(0)+R^(0)||i⟩ / √(3(2F_f+1)) ,
δβ^F_i→F_f = -⟨f||T^(1)+B^(1)+C^(1)+R^(1)||i⟩ / (√2 ⟨f||σ||i⟩) ,
δγ^F_i→F_f = ⟨f||T^(2)+B^(2)+C^(2)+R^(2)||i⟩ / ⟨f||{I⊗I}^(2)||i⟩ .
We remind the reader that the various transition polarizabilities entering Eq. (<ref>) are assembled as
X^F_i → F_f = X^[2] + δ X^F_i → F_f ,
where the second-order term X^[2] is F-independent. We listed our recommended values <cit.> for the second-order transition polarizabilities in Eqs. (<ref>).
The reduced matrix elements of individual diagrams entering Eqs. (<ref>) are given by
⟨f||T^(Q)||i⟩ = ∑_NJ_aJ_b (-1)^F_f-F_i+J_a+J_i [F_f,F_i,Q]^1/2
× {F_f I J_a; N J_f I} {Q J_i J_a; I F_f F_i} {Q J_i J_a; J_b 1 1}
× { S_T^(J_aJ_bN)[fi] + (-1)^Q S_T^(J_aJ_bN)[ff] } ,
⟨f||B^(Q)||i⟩ = ∑_NJ_aJ_b (-1)^J_i+J_b [F_f,F_i,Q]^1/2
× {F_i I J_b; N J_i I} {Q J_f J_b; I F_i F_f} {Q J_b J_f; J_a 1 1}
× { S_B^(J_aJ_bN)[ii] + (-1)^Q S_B^(J_aJ_bN)[fi] } ,
⟨f||C^(Q)||i⟩ = ∑_NJ_aJ_b (-1)^J_a-J_i+F_i-F_f+N+1 [F_f,F_i,Q]^1/2
× ∑_j [j] {J_f J_i j; F_f F_i Q; I I N} {J_f J_i j; 1 1 Q; J_a J_b N}
× { S_C^(J_aJ_bN)[ii] + (-1)^Q S_C^(J_aJ_bN)[ff] } ,
⟨f||R^(Q)||i⟩ = (-1)^2F_f-I+F_i+J_i+1 [F_f,F_i,Q]^1/2
× {Q J_f J_i; I F_i F_f} ∑_J_a {Q J_i J_f; J_a 1 1}
× { V[i] S_R^J_a[f] + (-1)^Q V[f] S_R^J_a[i] } ,
where the curly arrays with two rows (separated by semicolons) denote Wigner 6j symbols and those with three rows denote 9j symbols. These reduced matrix elements are expressed in terms of the reduced sums
S_T^(J_aJ_bN)[αβ] = ∑_n_an_b ⟨I||𝒩^(N)||I⟩ ⟨n_fJ_f||𝒯^(N)||n_aJ_a⟩ ⟨n_aJ_a||D||n_bJ_b⟩ ⟨n_bJ_b||D||n_iJ_i⟩ / (ΔE_αa ΔE_βb) ,
S_B^(J_aJ_bN)[αβ] = ∑_n_an_b ⟨n_fJ_f||D||n_aJ_a⟩ ⟨n_aJ_a||D||n_bJ_b⟩ ⟨I||𝒩^(N)||I⟩ ⟨n_bJ_b||𝒯^(N)||n_iJ_i⟩ / (ΔE_αa ΔE_βb) ,
S_C^(J_aJ_bN)[αβ] = ∑_n_an_b ⟨n_fJ_f||D||n_aJ_a⟩ ⟨I||𝒩^(N)||I⟩ ⟨n_aJ_a||𝒯^(N)||n_bJ_b⟩ ⟨n_bJ_b||D||n_iJ_i⟩ / (ΔE_αa ΔE_βb) ,
S_R^(J_a)[α] = ∑_n_a ⟨n_fJ_f||D||n_aJ_a⟩ ⟨n_aJ_a||D||n_iJ_i⟩ / (ΔE_αa)^2 ,
and the HFI diagonal matrix elements
V[α] = (-1)^I+J_α+F_α ∑_N {F_α J_α I; N I J_α} ⟨n_αJ_α||𝒯^(N)||n_αJ_α⟩ ⟨I||𝒩^(N)||I⟩ .
§ NUMERICAL RESULTS FOR HYPERFINE CORRECTIONS
In Sec. <ref>, we presented the formulation for the hyperfine corrections to the scalar, vector, and tensor transition polarizabilities. In this section, we present our numerical results, which are compiled in Table <ref>.
To arrive at these values, we employed relativistic many-body methods for computing atomic structure. A detailed discussion of these methods and their numerical implementation can be found in Refs. <cit.> and references therein.
Simply put, we used the frozen-core V^N-1 Dirac-Hartree-Fock (DHF), Brueckner orbitals (BO), and random phase approximation (RPA). Among these approximations the RPA(BO) approach is the most complete as it incorporates the core polarization and the core screening effects. The RPA(BO) results are listed in Table <ref>. In our calculations, we use a dual-kinetic balance B-spline DHF basis set <cit.> containing N=60 basis functions of order k=9 per partial wave generated in a cavity of radius R_max=250 a.u., the same as in Refs. <cit.>.
To improve the accuracy of our calculations, we also employ a semi-empirical approach. To this end, we point out that there are three atomic properties entering the reduced sums: the energies, the E1 matrix elements, and the HFI matrix elements.
We therefore replace a certain subset of ab initio RPA(BO) quantities with the experimental or other high-accuracy values. Determining this subset, however, requires some care. Indeed, although the low-n orbitals from the finite basis set closely resemble those obtained with the conventional finite-difference technique computed with practically infinite cavity radius, as n increases, the mapping of the basis states to physical states deteriorates. In our basis set, we find the boundary for the transition from physical to non-physical orbitals to be at the radial quantum number n_r=12, without loss of numerical accuracy for matrix elements and energies. Because of this, while evaluating the reduced sums, we use
the NIST recommended <cit.> energies for the physical states, n_a, bP_J with n_a, b=6-12 and n_a, bD_J with n_a, b=5-11. For the same reasons, we replace the RPA(BO) E1 matrix elements for the 6S_1/2→ n_a, bP_J and 7S_1/2→ n_a, bP_J channels with their experimental values tabulated in Ref. <cit.> for n_a, b=6, 7 and with their high-accuracy relativistic coupled-cluster counterparts <cit.> for n_a, b=8-12.
The semi-empirical matrix elements of the hyperfine interaction involve “physical” states with principal quantum numbers 6≤n_a, b≤12. We evaluate them as follows.
The diagonal hyperfine matrix elements are extracted from the experimental values <cit.> of hyperfine constants A from the relation,
\begin{equation}
A = \frac{\langle\mathcal{T}^{(1)}\rangle_J\,\langle\mathcal{N}^{(1)}\rangle_I}{I\,J} ,
\end{equation}
where ⟨𝒯^(1)⟩_J and ⟨𝒩^(1)⟩_I
are the so-called stretched matrix elements expressed in terms of the reduced matrix elements
\begin{equation}
\langle\mathcal{O}^{(N)}\rangle_J = \begin{pmatrix} J & N & J \\ -J & 0 & J \end{pmatrix}\langle\gamma J\|\mathcal{O}^{(N)}\|\gamma J\rangle .
\end{equation}
The off-diagonal HFI matrix elements between the S_1/2 states
were evaluated as the geometric mean of the diagonal matrix elements <cit.>
\begin{equation}
\langle n'S_{1/2}|V^{\rm HFI}|nS_{1/2}\rangle = \left[\langle n'S_{1/2}|V^{\rm HFI}|n'S_{1/2}\rangle\,\langle nS_{1/2}|V^{\rm HFI}|nS_{1/2}\rangle\right]^{1/2} ,
\end{equation}
where the diagonal matrix elements come from the experimental values of the hyperfine constant A. The high accuracy of this approximation has been confirmed in Ref. <cit.>. The remaining off-diagonal magnetic-dipole HFS matrix elements between the “physical” states were determined using the relativistic coupled-cluster method, with the code described in <cit.>. As for the nuclear quadrupole HFI contributions, we found them to be suppressed. Thereby, we kept their RPA(BO) matrix elements.
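To make the semi-empirical extraction concrete, the sketch below obtains a reduced magnetic-dipole HFI matrix element from a measured hyperfine constant $A$ via the stretched-matrix-element relations above, and forms the geometric-mean off-diagonal element. The function names, and the assumption that both diagonal elements are positive, are ours.

import numpy as np
from sympy.physics.wigner import wigner_3j

def reduced_T1_from_A(A, I, J, N1_stretched):
    # A = <T^(1)>_J <N^(1)>_I / (I J), with <O^(N)>_J the stretched matrix element
    T1_stretched = A * I * J / N1_stretched
    # <O^(N)>_J = 3j(J, N, J; -J, 0, J) * <gamma J||O^(N)||gamma J>
    return T1_stretched / float(wigner_3j(J, 1, J, -J, 0, J))

def offdiag_hfi(V_nn, V_mm):
    # geometric-mean approximation for <n'S_1/2|V^HFI|nS_1/2>
    return np.sqrt(V_nn * V_mm)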
Our numerical results for the hyperfine corrections to transition polarizabilities are listed in Table <ref>.
Overall, the corrections to the polarizabilities are below the 10^-2 a.u. level.
The δα corrections are identically zero for the F_i ≠ F_f transitions due to the scalar nature of the underlying ITO. Otherwise,
|δα^F_i→ F_f| ∼ 5 × 10^-3 is about 5 orders of magnitude smaller than |α^[2]| ≈ 3 × 10^2. As to the vector transition polarizability, |δβ^F_i→ F_f| is about 4-5 orders of magnitude smaller than |β^[2]| ≈ 3 × 10^1. The |δβ^F_i→ F_f| corrections to the F_i=F_f transitions are an order of magnitude larger than those for the F_i≠ F_f transitions. We observe that the tensor transition polarizability, γ^F_i → F_f =δγ^F_i → F_f, is of the order of 10^-5 a.u. The relative smallness of the numerical values of the tensor transition polarizabilities as compared to their scalar and vector counterparts is due, in part, to the large values, ∼ 3× 10^2, of the prefactors $\langle F_f\|\{I\otimes I\}^{(2)}\|F_i\rangle$ in Eq. (<ref>).
Further, γ^3 → 4 = γ^4 → 3 as can be proven by a direct examination of our analytical expressions.
Finally, the difference between our RPA(BO) and semi-empirical estimates does not exceed 10%, which we take as the uncertainty of our results.
§ DISCUSSION
We have presented the theoretical formulation and numerical estimate for the hyperfine corrections to the transition polarizabilities. In this section, we investigate the impact of neglecting the hyperfine-mediated tensor polarizability γ and the hyperfine-state dependence of the scalar α, and vector β polarizabilities on the extraction of electroweak observables from APV experiments. In particular, we reanalyze two Boulder experiments <cit.> and
compute corrections to their extracted value of the ^133Cs anapole moment and the ratio α/β of the scalar and vector transition polarizabilities.
§.§ Reinterpretation of the Boulder parity violation measurement
We start by reviewing the Boulder APV experiment <cit.> and the assumptions that went into its analysis.
The experiment utilized the Stark interference technique to extract the ratio of the PNC amplitude to the vector transition polarizability, Im(E1_PNC)/β. Notice the use of β without specifying hyperfine components, as the hyperfine corrections were neglected. It is our goal to introduce F-dependent corrections to β here.
The Boulder experiment used a spin-polarized ^133Cs beam subjected to a uniform and static electric field, with a laser driving the nominally E1-forbidden transition between various hyperfine components of the ground 6S_1/2 and the excited 7S_1/2 state. The DC electric field opens up an E1 transition channel between these states by mixing the S and P states. The total transition rate R is determined by a combination of the Stark-induced, parity-violating (PNC), and M1 transition amplitudes
\begin{equation}
R = \left|A^{\rm Stark}_{i\to f} + A^{\rm PNC}_{i\to f} + A^{M1}_{i\to f}\right|^{2} ,
\end{equation}
where <cit.>
\begin{align}
A^{\rm Stark}_{i\to f} &= \alpha\,\mathbf{E}_L\cdot\mathbf{E}_S\,\delta_{F_fF_i}\delta_{M_fM_i}
 + i\beta\,(\mathbf{E}_S\times\mathbf{E}_L)\cdot\langle f|\boldsymbol{\sigma}|i\rangle ,\\
A^{\rm PNC}_{i\to f} &= i\,\mathrm{Im}(E1_{\rm PNC})\,\mathbf{E}_S\cdot\langle f|\boldsymbol{\sigma}|i\rangle ,\\
A^{M1}_{i\to f} &= (M1)_{\rm rad}\,(\hat{\mathbf{k}}_L\times\mathbf{E}_S)\cdot\langle f|\boldsymbol{\sigma}|i\rangle .
\end{align}
Here we have changed the notation of Ref. <cit.> to be consistent with that of the previous sections.
In Eqs. (<ref>), $\mathbf{E}_L = \mathcal{E}_L\hat{\boldsymbol{\varepsilon}}$ is the laser field driving the transition, with $\hat{\mathbf{k}}_L$ a unit vector in its propagation direction, $\mathbf{E}_S=\mathcal{E}_S\hat{\mathbf{e}}$ is the DC electric field, and α and β are the scalar and vector transition polarizabilities, introduced in earlier sections. To set the stage, for now, as in Ref. <cit.>, we neglect the F-dependence of α and β and omit the tensor (γ) contribution. The PNC amplitude E1_PNC includes both the nuclear spin-dependent and spin-independent effects and (M1)_rad stands for the radial integral of the 6S_1/2-7S_1/2 M1 matrix element <cit.>. We will neglect the M1 amplitude $A^{M1}_{i\to f}$ for reasons discussed in Ref. <cit.>.
The Stark interference technique amplifies the feeble PNC amplitude A^PNC_i→f with the help of the much stronger A^Stark_i→f amplitude: the interference between A^PNC_i→f and A^Stark_i→f manifests itself as a cross term when expanding the square in the rate expression, Eq. (<ref>). To access this Stark-PNC interference term,
the experiment <cit.> involved measuring the change in the transition rate R, Eq. (<ref>), under various parity reversals, which included flipping
the direction of the applied DC electric field, flipping the sign of the relevant component of the laser polarization, or changing the sign of the magnetic quantum numbers <cit.>. The PNC amplitude was extracted from two transition rates, R^+ and R^- measured under opposite parities.
A parity reversal results in a sign flip of the PNC amplitude A^PNC_i→f, while leaving the sign of the Stark-induced amplitude A^Stark_i→f unaffected.
The Stark-induced amplitude A^Stark_i→f in Eq. (<ref>) generally depends on both the scalar and vector polarizabilities. However, in the Boulder experiment, the transitions were driven between the states of different values of F (F_i≠ F_f), and thereby, only the vector polarizability contribution remained in Eq. (<ref>). Therefore, it is the vector polarizability β that enters the interference term with E1_PNC.
Explicitly, the PNC amplitude was extracted from the normalized difference in the two transition rates,
\begin{equation}
\frac{R^{+}-R^{-}}{R^{+}+R^{-}} \propto \frac{\mathrm{Im}(E1_{\rm PNC})}{\beta} .
\end{equation}
Next, we specify the geometry of the Boulder experiment <cit.>. In the setup of the Boulder experiment, a ^133Cs atomic beam travels along the z-axis and an externally-applied magnetic field is aligned along the beam propagation direction, defining the quantization axis. Before entering the excitation-laser interaction region, the Cs atoms are optically pumped into the “stretched” hyperfine sub-levels of the 6S_1/2 ground states, either F_i=3, M_i=±3 or F_i=4, M_i=±4. The transitions to the 7S_1/2 hyperfine manifold are driven by a standing wave laser with the cavity axis aligned along the y-axis. The excitation laser field is elliptically polarized, $\mathbf{E}_L=\mathcal{E}_L^z\hat{\mathbf{z}}+i\mathcal{E}_L^I\hat{\mathbf{x}}$. Finally, a static and uniform electric field $\mathbf{E}_S=\mathcal{E}_S^x\hat{\mathbf{x}}$ is aligned along the x-axis.
Having reviewed the Boulder experiment, now we examine the effect of our newly-introduced tensor transition polarizability γ, as well as the nuclear-spin-dependent corrections to α and β, and assess whether they affect the extraction of the PNC amplitude E1_PNC. To this end, we rewrite Eq. (<ref>) as
\begin{align}
A^{\rm Stark}_{i\to f} &= \alpha^{F_i\to F_f}\,\mathbf{E}_L\cdot\mathbf{E}_S\,\delta_{F_fF_i}\delta_{M_fM_i}
 + i\beta^{F_i\to F_f}\,(\mathbf{E}_L\times\mathbf{E}_S)\cdot\langle f|\boldsymbol{\sigma}|i\rangle \nonumber\\
&\quad + \gamma^{F_i\to F_f}\,w_2(\hat{\boldsymbol{\varepsilon}},\hat{\mathbf{e}})\,\mathcal{E}_L\mathcal{E}_S\,\langle f\|\{I\otimes I\}^{(2)}\|i\rangle ,
\end{align}
where we have again used A_i→f^Stark = ℰ_Lℰ_S a_i→f.
The reduced matrix element $\langle f\|\{I\otimes I\}^{(2)}\|i\rangle$ is again given by Eq. (<ref>) and the polarization and state-dependent factor is, explicitly (c.f. Eq. (<ref>))
\begin{equation}
w_2(\hat{\boldsymbol{\varepsilon}},\hat{\mathbf{e}}) = \sum_{M_Q=-2}^{2}(-1)^{M_Q+F_f-M_f}
\begin{pmatrix} F_f & 2 & F_i \\ -M_f & -M_Q & M_i \end{pmatrix}
(\hat{\boldsymbol{\varepsilon}}\otimes\hat{\mathbf{e}})^{(2)}_{M_Q} .
\end{equation}
The components of the rank-two compound tensor of electric field polarizations are
\begin{equation}
(\hat{\boldsymbol{\varepsilon}}\otimes\hat{\mathbf{e}})^{(2)}_{M_Q} = \sum_{\mu,\lambda=-1}^{1} C^{2M_Q}_{1\mu 1\lambda}\,\hat{\varepsilon}_{\mu}\hat{e}_{\lambda} .
\end{equation}
Note that the selection
rules for the 3j-symbol fix M_Q = M_i - M_f in Eq. (<ref>). Moreover, since we are interested in transitions between stretched hyperfine states |F,M_F=± F⟩ with F_i = F_f±1, only terms with M_Q = ± 1 survive in Eq. (<ref>).
For the Boulder experiment where $\hat{\boldsymbol{\varepsilon}} = \varepsilon_L^z\hat{\mathbf{z}} + i\varepsilon_L^I\hat{\mathbf{x}}$ and $\hat{\mathbf{e}} = \hat{\mathbf{x}}$, we find the needed components of the second-rank tensor to be $(\hat{\boldsymbol{\varepsilon}}\otimes\hat{\mathbf{e}})^{(2)}_{\pm 1} = \mp\frac{1}{2}\varepsilon_L^z$.
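This result is easy to verify symbolically. The short sympy sketch below builds the rank-two compound tensor from spherical vector components for this geometry; the helper names are our own.

from sympy import I, sqrt, symbols, simplify
from sympy.physics.quantum.cg import CG

def spherical(v):
    # spherical components {-1, 0, +1} of a Cartesian vector v = (x, y, z)
    x, y, z = v
    return {-1: (x - I*y)/sqrt(2), 0: z, +1: -(x + I*y)/sqrt(2)}

def compound_rank2(a_hat, b_hat, MQ):
    # (a ⊗ b)^(2)_{MQ} = sum_{mu,lam} C^{2 MQ}_{1 mu 1 lam} a_mu b_lam
    a, b = spherical(a_hat), spherical(b_hat)
    total = 0
    for mu in (-1, 0, 1):
        lam = MQ - mu
        if abs(lam) <= 1:
            total += CG(1, mu, 1, lam, 2, MQ).doit() * a[mu] * b[lam]
    return simplify(total)

eps_z, eps_I = symbols("eps_z eps_I", real=True)
eps_hat = (I*eps_I, 0, eps_z)  # laser polarization: eps_z z-hat + i eps_I x-hat
e_hat = (1, 0, 0)              # DC field direction: x-hat
print(compound_rank2(eps_hat, e_hat, +1))  # -eps_z/2
print(compound_rank2(eps_hat, e_hat, -1))  # +eps_z/2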
Then the Stark-induced amplitude for transitions between stretched states with F_i =F_f±1 can be simplified to
\begin{equation}
A^{\rm Stark}_{i\to f} = \beta^{F_i\to F_f}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}\,C^{F_fM_f}_{F_iM_f\pm1}
 \pm \frac{1}{2}\,\gamma^{F_i\to F_f}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}\,U^{F_fM_f}_{F_iM_f\pm1} ,
\end{equation}
where ℰ_L^z and ℰ_S^x are the components of the laser and the applied DC electric fields, respectively. The coefficients C^F_fM_f_F_iM_f±1 are defined as
\begin{equation}
C^{F_fM_f}_{F_iM_f\pm1} = (-1)^{I+S+F_i+1/2}\,\sqrt{3}\,[F_f,F_i]^{1/2}
\begin{Bmatrix} \tfrac{1}{2} & F_f & I \\ F_i & \tfrac{1}{2} & 1 \end{Bmatrix}
\begin{pmatrix} F_f & 1 & F_i \\ -M_f & \mp1 & M_f\pm1 \end{pmatrix} ,
\end{equation}
and are tabulated in Ref. <cit.>. Here we introduced a similar coefficient,
\begin{equation}
U^{F_fM_f}_{F_iM_f\pm1} = (-1)^{F_f-M_f}
\begin{pmatrix} F_f & 2 & F_i \\ -M_f & \mp1 & M_f\pm1 \end{pmatrix}
\langle F_f\|\{I\otimes I\}^{(2)}\|F_i\rangle ,
\end{equation}
which specifies the dependence of the tensor contribution on the magnetic quantum numbers.
The “±” signs appearing in the C^F_fM_f_F_iM_f±1 and U^F_f M_f_F_iM_f±1 factors indicate the values of the magnetic quantum numbers for the initial state, given a fixed final state value of M_f. The “±” sign preceding the γ term originates from the rank-two compound tensor of the electric fields (ε̂̌̂⊗ě̂)^(2)_M_Q when the value of M_Q is changed from +1 to -1.
The values of the angular factors C^F_fM_f_F_iM_f±1 and U^F_fM_f_F_iM_f±1 relevant to our computation are, explicitly
\begin{align}
C^{4,-4}_{3,-3} &= C^{4,4}_{3,3} = -C^{3,3}_{4,4} = -C^{3,-3}_{4,-4} = \sqrt{7/8} ,\\
U^{4,-4}_{3,-3} &= -U^{4,4}_{3,3} = U^{3,3}_{4,4} = -U^{3,-3}_{4,-4} = -42\sqrt{3} .
\end{align}
It is clear that these factors satisfy the following identities
\begin{align}
C^{F_fM_f}_{F_iM_f\pm1} &= C^{F_f-M_f}_{F_i-M_f\mp1} ,\\
U^{F_fM_f}_{F_iM_f\pm1} &= -U^{F_f-M_f}_{F_i-M_f\mp1} .
\end{align}
The measured quantities <cit.> are the transition rates R^±, Eqs. (<ref>), whose computation involves squaring out the sum of the Stark and PNC transition amplitudes.
Our generalized Stark-induced amplitude is given by Eq. (<ref>). The
simplified PNC (<ref>) amplitude reads <cit.>
\begin{equation}
A^{\rm PNC}_{i\to f} = \mp\,\mathrm{Im}(E1_{\rm PNC})\,\mathcal{E}_L^{I}\,C^{F_fM_f}_{F_iM_f\pm1}\,\delta_{M_i,\,M_f\pm1} .
\end{equation}
Note that while A^Stark_i→f depends on the z component of the laser field, A^PNC_i→f depends on ℰ_L^I=|ℰ_L^x|.
Then the generalized rates R^+ and R^- for the two transitions of opposite handedness are given by
\begin{align}
R^{+} &\equiv R(F_i, M_f-1 \to F_f, M_f) \nonumber\\
&= \beta^{2}(\mathcal{E}_S^{x}\mathcal{E}_L^{z})^{2}\,(C^{F_fM_f}_{F_iM_f-1})^{2}
 - \beta\gamma\,(\mathcal{E}_S^{x}\mathcal{E}_L^{z})^{2}\,C^{F_fM_f}_{F_iM_f-1}U^{F_fM_f}_{F_iM_f-1} \nonumber\\
&\quad + 2\beta\,\mathrm{Im}(E1_{\rm PNC})\,\mathcal{E}_S^{x}\mathcal{E}_L^{z}\mathcal{E}_L^{I}\,(C^{F_fM_f}_{F_iM_f-1})^{2}
 - \gamma\,\mathrm{Im}(E1_{\rm PNC})\,\mathcal{E}_S^{x}\mathcal{E}_L^{z}\mathcal{E}_L^{I}\,C^{F_fM_f}_{F_iM_f-1}U^{F_fM_f}_{F_iM_f-1} ,\\
R^{-} &\equiv R(F_i, M_f+1 \to F_f, M_f) \nonumber\\
&= \beta^{2}(\mathcal{E}_S^{x}\mathcal{E}_L^{z})^{2}\,(C^{F_fM_f}_{F_iM_f+1})^{2}
 + \beta\gamma\,(\mathcal{E}_S^{x}\mathcal{E}_L^{z})^{2}\,C^{F_fM_f}_{F_iM_f+1}U^{F_fM_f}_{F_iM_f+1} \nonumber\\
&\quad - 2\beta\,\mathrm{Im}(E1_{\rm PNC})\,\mathcal{E}_S^{x}\mathcal{E}_L^{z}\mathcal{E}_L^{I}\,(C^{F_fM_f}_{F_iM_f+1})^{2}
 - \gamma\,\mathrm{Im}(E1_{\rm PNC})\,\mathcal{E}_S^{x}\mathcal{E}_L^{z}\mathcal{E}_L^{I}\,C^{F_fM_f}_{F_iM_f+1}U^{F_fM_f}_{F_iM_f+1} .
\end{align}
In the above expressions, F_i and F_f remain fixed in R^±, while the sign of M_f flips when going from Eq. (<ref>) to Eq. (<ref>). We remind the reader that we focus on the transitions between stretched hyperfine states. For example, for the |3, 3⟩→|4, 4⟩ transition[Here we used the abbreviation |F_i, M_i⟩→|F_f, M_f⟩, suppressing the electronic term parts of the wave-functions.]
one would use the R^+ expression,
while the matching transition of opposite handedness would be
|3, -3⟩→|4, -4⟩ with the R^- expression to be used.
We will distinguish between four transition rates R^+_3→4, R^-_3→4, R^+_4→3, and R^-_4→3 referring to the transitions |3, 3⟩→|4, 4⟩, |3, -3⟩→|4, -4⟩, |4, -4⟩→|3, -3⟩, and |4, 4⟩→|3, 3⟩, respectively. For the sake of clarity, we have also suppressed the F_i → F_f superscripts in various polarizabilities.
Following Ref. <cit.>, we are interested in the rate ratio
\begin{equation}
r_{F_i\to F_f} \equiv \left(\frac{R^{+}-R^{-}}{R^{+}+R^{-}}\right)_{F_i\to F_f}
\end{equation}
as it separates out the PNC amplitude. With the help of Eq. (<ref>) and the identities (<ref>), the rate ratio generalizes to
\begin{equation}
r_{F_i\to F_f} =
\frac{1+\dfrac{\gamma^{F_i\to F_f}}{2\beta^{F_i\to F_f}}\dfrac{U^{F_fM_f}_{F_iM_f+1}}{C^{F_fM_f}_{F_iM_f+1}}}
     {1+\dfrac{\gamma^{F_i\to F_f}}{\beta^{F_i\to F_f}}\dfrac{U^{F_fM_f}_{F_iM_f+1}}{C^{F_fM_f}_{F_iM_f+1}}}\,
\frac{2\,\mathrm{Im}(E1_{\rm PNC}^{F_i\to F_f})\,\mathcal{E}_L^{I}}{\beta^{F_i\to F_f}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}} ,
\end{equation}
where we have emphasized the nuclear-spin dependence of the PNC amplitude by reintroducing the F_i → F_f superscript into β and γ.
In the limit of vanishing tensor polarizabilities γ^F_i → F_f
and β being independent of F_i and F_f, Eq. (<ref>) reproduces the Boulder experiment's expression <cit.>
\begin{equation}
r_{F_i\to F_f}^{\rm Boulder} = \frac{2\,\mathrm{Im}(E1_{\rm PNC}^{F_i\to F_f})\,\mathcal{E}_L^{I}}{\beta^{[2]}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}} .
\end{equation}
From Eq. (<ref>), the ratios r_F_i → F_f for the F_i=3→F_f=4 and F_i=4→F_f=3 transitions are, explicitly
\begin{align}
r_{3\to4} &= \frac{2-12\sqrt{42}\,\gamma^{3\to4}/\beta^{3\to4}}{1-12\sqrt{42}\,\gamma^{3\to4}/\beta^{3\to4}}\,
\frac{\mathrm{Im}(E1_{\rm PNC}^{3\to4})\,\mathcal{E}_L^{I}}{\beta^{3\to4}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}} ,\\
r_{4\to3} &= \frac{2+12\sqrt{42}\,\gamma^{4\to3}/\beta^{4\to3}}{1+12\sqrt{42}\,\gamma^{4\to3}/\beta^{4\to3}}\,
\frac{\mathrm{Im}(E1_{\rm PNC}^{4\to3})\,\mathcal{E}_L^{I}}{\beta^{4\to3}\,\mathcal{E}_L^{z}\mathcal{E}_S^{x}} ,
\end{align}
which can be further simplified to
\begin{align}
r_{3\to4} &\approx r_{3\to4}^{\rm Boulder}\left(1 - \frac{\delta\beta^{3\to4}}{\beta^{[2]}} + 6\sqrt{42}\,\frac{\gamma^{3\to4}}{\beta^{[2]}}\right) ,\\
r_{4\to3} &\approx r_{4\to3}^{\rm Boulder}\left(1 - \frac{\delta\beta^{4\to3}}{\beta^{[2]}} - 6\sqrt{42}\,\frac{\gamma^{4\to3}}{\beta^{[2]}}\right) ,
\end{align}
where the last two terms in the brackets are the F-dependent corrections to the Boulder expressions.
With our results from Table <ref>, the corrections evaluate to -5× 10^-5
and 3× 10^-5 for the 3→4 and the 4→3 transitions, respectively. These are smaller than the experimental uncertainties in the Im(E1_PNC^F_i→F_f) determination.
The PNC amplitudes E1_PNC include both the
nuclear-spin-independent and nuclear-spin-dependent contributions. The largest impact of our analysis is on the extraction of the nuclear-spin-dependent part (which includes the anapole moment contribution). If we neglect the hyperfine corrections to the transition polarizabilities,
the anapole moment contribution is extracted as <cit.>
\begin{equation}
\frac{\mathrm{Im}(E1_{\rm PNC}^{\rm anapole})^{\rm Boulder}}{\beta^{[2]}} =
\left(r_{3\to4}^{\rm Boulder} - r_{4\to3}^{\rm Boulder}\right)\frac{\mathcal{E}_S^{x}\mathcal{E}_L^{z}}{2\,\mathcal{E}_L^{I}} ,
\end{equation}
where the authors of Ref. <cit.> associated the measured rates with r_F_i→ F_f^Boulder. The measured rates r_F_i → F_f are, however, more accurately given as in Eq. (<ref>).
To account for the nuclear-spin dependent effects on transition polarizabilities, we thus reexpress r^ Boulder_3→4 and r^ Boulder_4→3 in terms of r_3→4 and r_4→3 using Eqs. (<ref>) and use these “adjusted” Boulder rates in Eq. (<ref>). With our semi-empirical values from Table <ref>, we find r^ Boulder_3→4=1.00005 r_3→ 4 and r^ Boulder_4→3=0.99997 r_4→ 3, respectively, which cause the extracted value of Im(E1_PNC^4→3)/β^[2] to decrease by 3×10^-5 while Im(E1_PNC^3→4)/β^[2] to increase by 5×10^-5. Because both Im(E1_PNC^4→3)/β^[2], and Im(E1_PNC^3→4)/β^[2] were reported <cit.> at about 1.6 mV/cm, this means that the anapole contribution in our evaluation is slightly smaller, by about ∼1×10^-4 mV/cm. The reported <cit.> value of the anapole moment is 0.077(11) mV/cm so our correction of 1×10^-4 mV/cm is below the uncertainty. This suggests that the impact due to the spin-dependent effects on polarizabilities is negligible at the current level of experimental uncertainty.
§.§ The effect of hyperfine-mediated polarizabilities on the ratio analysis
We now turn our attention to the another Boulder experiment <cit.> which used the Stark-interference technique to determine the ratio of scalar and vector polarizabilities β/α in Cs[We note in passing that the authors of Ref. <cit.> refer to β as a “tensor” polarizability, while we call it “vector” to be consistent with the literature and to distinguish it from the true tensor γ contribution.].
This measured ratio is important in deducing the value of β through the more computationally reliable determination of α (see, e.g., Ref. <cit.> and the references therein). An accurate value of β is required for extracting the PNC amplitude from the APV measurement as described in Sec. <ref>.
In the β/α experiment <cit.>, the ^133Cs atoms were spin-polarized by an external magnetic field aligned along the y-axis. This magnetic field defined the quantization axis that is different from that of the APV experiment described in Sec. <ref>.
To simplify our analysis, we thus define a new coordinate system (x', y', z') obtained by a rotation from the (x, y, z) laboratory frame defined in Sec. <ref>. The unit vectors in this new system are related to those in the frame in Sec. <ref> as follows: $\hat{\mathbf{z}}'=\hat{\mathbf{y}}$, $\hat{\mathbf{y}}'=\hat{\mathbf{x}}$, and $\hat{\mathbf{x}}'=\hat{\mathbf{z}}$. This transformation aligns the quantization axis with $\hat{\mathbf{z}}'$ while preserving the handedness of the coordinate system. As a result, the electric fields in this new reference frame are given by $\mathbf{E}_S'=\mathcal{E}_S^x\hat{\mathbf{y}}'$ and $\mathbf{E}_L'=\mathcal{E}_L^z\hat{\mathbf{x}}'+i\mathcal{E}_L^I\hat{\mathbf{y}}'$.
The ^133Cs atoms in the α/β experiment <cit.> underwent transitions from the initial 6S_1/2, F_i=3, M_i=3 state to the final 7S_1/2, F_f=3, M_f=3 state. This particular choice of states guarantees a nonvanishing contribution of the scalar polarizability to the Stark-induced amplitude, Eq. (<ref>). Then for the described experimental geometry, one has
\begin{equation}
A^{\rm Stark}_{i\to f} = i\alpha^{3\to3}\,\mathcal{E}_L^{I}\mathcal{E}_S^{x}
 + i\beta^{3\to3}\,\mathcal{E}_S^{x}\mathcal{E}_L^{z}\,C^{F_fF_i}_{M_fM_i}
 + iK\gamma^{3\to3}\,\mathcal{E}_S^{x}\mathcal{E}_L^{I} ,
\end{equation}
where $K \equiv -i\,w_2(\hat{\boldsymbol{\varepsilon}},\hat{\mathbf{e}})\,\langle F_f{=}3\|\{I\otimes I\}^{(2)}\|F_i{=}3\rangle$. Explicitly, since $\langle F_f{=}3\|\{I\otimes I\}^{(2)}\|F_i{=}3\rangle = -42\sqrt{35}$ (see Eq. (<ref>)) and $w_2(\hat{\boldsymbol{\varepsilon}},\hat{\mathbf{e}}) = -(i/6)\sqrt{5/14}$, $K = 35/\sqrt{2}$.
Here, the angular coefficient C^F_f F_i_M_f M_i is defined as C^F_fF_i_M_fM_i=g_F⟨M_F⟩ with the gyromagnetic ratio g_F=-1/4 and ⟨M_F⟩ being a population average over all the possible magnetic quantum numbers <cit.>.
In contrast to the APV experiment of Sec. <ref>, the parity reversal in the α/β experiment was effected by switching the laser polarization from the left to right
elliptical polarization, which is equivalent to reversing the sign of the ℰ^I_L in Eq. (<ref>).
This reversal flips the sign
of the scalar and tensor contributions, while preserving the sign of the vector term in Eq. (<ref>). It is clear that the interference term extracted in the experiment contains the combination
(α^3→3 +K γ^3→3) β^3→3. This means that we need to interpret
α/β→(α/β)_eff = α^3→3 +K γ^3→3/β^3→3 ,
as being the ratio measured by Ref. <cit.>.
To prove Eq. (<ref>), we recall that the experiment <cit.> employed a complementary modulation of the DC electric field strength synchronous
with the elliptical polarization reversals. Two Stark-induced rates were measured,
\begin{align}
R^{+} &= \left|\alpha^{3\to3}\mathcal{E}_L^{I} + \beta^{3\to3}\mathcal{E}_L^{z}C^{F_fF_i}_{M_fM_i} + K\gamma^{3\to3}\mathcal{E}_L^{I}\right|^{2}(\mathcal{E}_{S,1}^{x})^{2} ,\\
R^{-} &= \left|\alpha^{3\to3}\mathcal{E}_L^{I} - \beta^{3\to3}\mathcal{E}_L^{z}C^{F_fF_i}_{M_fM_i} + K\gamma^{3\to3}\mathcal{E}_L^{I}\right|^{2}(\mathcal{E}_{S,2}^{x})^{2} ,
\end{align}
where ℰ_S,1^x and ℰ_S,2^x stand for the magnitudes of the two DC electric fields, whereas ℰ_L^z and ℰ_L^I≡Im(ℰ_L^x) are the magnitudes of the two components of the laser field driving the transition. The fields ℰ_S,1^x and ℰ_S,2^x were adjusted until there was no modulation of the rate signal under reversals of the laser field's polarization. This amounts to equating the two rates in Eqs. (<ref>), thus leading to
\begin{equation}
\frac{\mathcal{E}_{S,2}^{x}-\mathcal{E}_{S,1}^{x}}{\mathcal{E}_{S,2}^{x}+\mathcal{E}_{S,1}^{x}} =
\frac{\beta^{3\to3}}{\alpha^{3\to3}+K\gamma^{3\to3}}\,\frac{\mathcal{E}_L^{z}}{\mathcal{E}_L^{I}}\,C^{F_fF_i}_{M_fM_i} .
\end{equation}
The inverse of the first factor on the r.h.s. of Eq. (<ref>),
was extracted using the above equation and identified as α/β in Ref. <cit.>. As mentioned above, the measured quantity is in fact (α/β)_eff=(α^3→3 +K γ^3→3) / β^3→3.
To the best of our knowledge, all the previous literature has identified the measured <cit.> α/β ratio with α^[2]/β^[2], neglecting the hyperfine corrections to transition polarizabilities. We extract this ratio from the measured value <cit.>
(α/β)_eff = -9.905(11) as
\begin{equation}
\frac{\alpha^{[2]}}{\beta^{[2]}} \approx \left(\frac{\alpha}{\beta}\right)_{\rm eff}
\left(1 - \frac{\delta\alpha^{3\to3}}{\alpha^{[2]}} - \frac{K\gamma^{3\to3}}{\alpha^{[2]}} + \frac{\delta\beta^{3\to3}}{\beta^{[2]}}\right) .
\end{equation}
With the recommended values of α^[2] and β^[2] as in Eqs. (<ref>) and the hyperfine corrections from Table <ref>,
the corrective factor on the r.h.s of Eq. (<ref>) evaluates to (1+1.3 × 10^-4), equivalent to a ∼ 0.01% fractional correction to the value of α^[2]/β^[2].
The inclusion of the hyperfine correction thus
modifies the last significant digit of the reported result, leading to
α^[2]/β^[2] = -9.906(11) ,
but is below the 0.1% accuracy of the experiment <cit.>.
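The shift itself follows from a one-line computation using only the numbers quoted above:

ratio_eff = -9.905          # measured (alpha/beta)_eff
correction = 1 + 1.3e-4     # corrective factor from the equation above
print(round(ratio_eff * correction, 3))   # -9.906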
§ CONCLUSION
In summary, we have introduced and evaluated the hyperfine corrections to the polarizabilities, which include the non-vanishing tensor transition polarizability γ. These HFI-mediated effects lead to a slightly smaller anapole moment extracted from the measurements of atomic parity violation by the Boulder group <cit.>. However, our computed correction is insufficient to resolve the tension with the nuclear physics interpretation and data. We also showed that the effects of the tensor transition polarizability γ and hyperfine corrections to the scalar, α, and vector, β, transition polarizabilities are minor but not negligible for the determination of the α/β ratio from the measurements <cit.>. As the accuracy of experiments improves, our analysis should prove useful for interpretation of future measurements.
§ ACKNOWLEDGEMENTS
This work was supported in part by the U.S. National Science Foundation grants PHY-1912465 and PHY-2207546, by the Sara Louise Hartman endowed professorship in Physics, and by the Center for Fundamental Physics at Northwestern University.
|
http://arxiv.org/abs/2307.04123v1 | 20230709083214 | Towards cross-language prosody transfer for dialog | [
"Jonathan E. Avila",
"Nigel G. Ward"
] | cs.CL | [
"cs.CL"
] |
Speech-to-speech translation systems today do not adequately support use for dialog
purposes. In particular, nuances of speaker intent and stance can be lost due to
improper prosody transfer. We present an exploration of what needs to be done to
overcome this. First, we developed a data collection protocol in which bilingual
speakers re-enact utterances from an earlier conversation in their other language, and
used this to collect an English-Spanish corpus, so far comprising 1871 matched
utterance pairs. Second, we developed a simple prosodic dissimilarity metric based on
Euclidean distance over a broad set of prosodic features. We then used these to
investigate cross-language prosodic differences, measure the likely utility of three
simple baseline models, and identify phenomena which will require more powerful
modeling. Our findings should inform future research on cross-language prosody and the
design of speech-to-speech translation systems capable of effective prosody transfer.
Index Terms: speech-to-speech translation, corpus, prosodic dissimilarity metric, English, Spanish
§ INTRODUCTION
Speech-to-speech translation systems are valuable tools for enabling cross-language
communication. While very useful today for short, transactional interactions, they are
less so for long-form conversation <cit.>. One reason is that, without
proper prosody transfer, translation systems are unable to reliably convey many intents
and stances, impeding users' ability to deepen their interpersonal relationships and
social inclusion. In dialog, prosody conveys pragmatic functions such as in
turn-taking, expressions of attitudes, and negotiating agreement. Regarding prosody,
current translation systems generally aim only to produce prosody that sounds natural,
but this is not always sufficient.
In traditional models, translation is done by a cascade of subsystems — for automatic
speech recognition, machine translation, and speech synthesis — and the intermediate
representations are just text, with all prosodic information lost. The prospect instead
of transferring the additional information provided by the source-language prosody was
a motivation for the development of unified, end-to-end models <cit.>. Despite
rapid recent
advances <cit.>,
the ability of such models to perform prosody transfer seems not to have been examined.
Rather, current approaches to prosody transfer handle it with specific
modules <cit.>. To date, these target only
specific functions of prosody, notably its roles in conveying paralinguistic/emotional
state, emphasis, and syntactic structure, and target only a few prosodic features,
notably F_0, pausing, and word duration. Very recent work has shown that this can
significantly improve perceived translation quality <cit.>, but also that
these techniques so far only close less than half of the perceived gap between default
prosody and the human reference. Clearly, something is still missing. This paper
investigates what that might be.
While one might hope that the answer could be found in the linguistics literature,
published knowledge of how prosody differs across languages focuses mostly on
syllable-level, lexical, and syntactic prosody. In particular, there is relatively
little work on differences in how prosody conveys pragmatic functions. Even for English
and Spanish, a well-studied pair, our knowledge is sparse beyond a few topics such as
turn-taking <cit.>, questions and
declaratives <cit.>, and expression of
certainty <cit.>. However, these certainly do not exhaust the
prosodic meanings important for dialog. Further, these studies have been mostly limited
to differences in intonation and duration, leaving out most prosodic features.
Accordingly, this paper takes a fresh look, using a corpus-based approach.
§ PROTOCOL AND CORPUS
To investigate prosodic differences in dialog, we need a suitable cross-language
corpus. However, corpora for speech-to-speech translation today primarily comprise
monologues, derived from readings <cit.>,
political discussions <cit.>, or informative
talks <cit.>. Those comprising dialogs were derived
from television show dubs <cit.>, lectures and press
conferences <cit.>, or speech synthesis <cit.>. Speech
collected in these settings lacks interactivity, spontaneity, and most of the prosodic
variation found in real dialog.
We accordingly developed the Dialogs Re-enacted Across Languages (DRAL) protocol. This
involves pairs of nonprofessional, bilingual participants. They first have a ten-minute
conversation, which we record. These conversations are unscripted, although we
sometimes suggest topics, which allows for pragmatic diversity and spontaneous
interactions. Depending on their relationship, the participants mostly get to know each
other, catch up on recent happenings, and/or share personal experiences. Subsequently,
under the direction of a producer, they select an utterance or exchange and closely
re-enact it in their other language, which may take several attempts to get right. They
then re-enact another utterance. The yield is typically a few dozen matched pairs per
one-hour session, with overall good pragmatic diversity, as suggested by
Table <ref>. Our design choices and the DRAL corpus are discussed
further in our technical report <cit.>.
Following this protocol we have so far collected 1871 matched EN-ES
utterance pairs, from a total of 42 speakers. The latest release, including source
recordings and metadata, is available at <https://cs.utep.edu/nigel/dral/>. In the
following explorations, we use the first 1139 matched “short” utterances, which each
feature a single interlocutor. The average duration is 2.5 seconds.
§ UTTERANCE PROSODY REPRESENTATION
As our aim here is exploratory, we chose to work with simple, explicit, interpretable
representations of prosody. We use the Midlevel Prosodic Features
Toolkit[<https://github.com/nigelgward/midlevel>], as its features were
designed to be robust for dialog data, generally perceptually relevant, and normalized
per speaker. From the available features, we selected ten based on previous utility for
many tasks for several languages <cit.>, specifically: intensity, lengthening,
creakiness, speaking rate, pitch highness, pitch lowness, pitch wideness, pitch
narrowness, peak disalignment (mostly late peak), and cepstral peak prominence smoothed
(CPPS), the latter an inverse proxy for breathy voice. This rich set of prosodic
features supports more comprehensive analyses than most prosody research efforts.
To characterize the prosody of an utterance, each base feature is computed over ten
non-overlapping windows, together spanning the whole utterance. Thus, each utterance is
represented by 100 features. The window sizes are proportional to an utterance's
duration and span fixed percentages of its duration: 0–5%, 5–10%, 10–20%,
20–30%, 30–50%, 50–70%, 70–80%, 80–90%, 90–95%, 95–100%, as seen in
Figure <ref>. This representation is thus not aligned to
either syllables or words, but is appropriate for representing the sorts of overall
levels and contours that are most often associated with pragmatic functions.
Normalization occurs at two steps in the feature computation. The low-level
(frame-level) features — pitch, energy, and CPPS — are normalized per track to
mitigate individual differences. Subsequently, the mid-level features (peak
disalignment, lengthening, etc.) are computed over each specified span for every
utterance, and after being computed for all utterances in a track, each is
z-normalized.
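A minimal sketch of the pooling step, assuming the ten per-frame mid-level features have already been computed and speaker-normalized; the names, and the use of mean-pooling within each span, are our own assumptions rather than the toolkit's exact procedure:

import numpy as np

# Fixed percentage spans of utterance duration (non-overlapping, covering 0-100%)
SPANS = [(0, 5), (5, 10), (10, 20), (20, 30), (30, 50),
         (50, 70), (70, 80), (80, 90), (90, 95), (95, 100)]

def utterance_features(track_features):
    # track_features: (n_frames, 10) array of mid-level features for one utterance.
    # Returns a flat 100-dim vector (10 base features x 10 spans).
    n = len(track_features)
    pooled = []
    for lo, hi in SPANS:
        window = track_features[int(n * lo / 100): int(n * hi / 100)]
        pooled.append(window.mean(axis=0))   # average each feature over the span
    return np.concatenate(pooled)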
§ CROSS-LANGUAGE FEATURE CORRELATIONS
For our first glimpse at the EN-ES prosody mapping, we examined the Spearman
correlations between the 100 EN prosodic features and the 100 ES prosodic features,
across all matched pairs. (We computed Spearman correlations as well within each
language for comparison.) Were EN and ES prosodically identical, we would expect each
EN feature to correlate perfectly with its ES counterpart. In fact, the correlations
were far more modest but always positive and often substantial: more than half the
features sharing the base feature and span have correlation ρ≥0.3. Thus,
overall, EN and ES prosody is quite similar, and pitch highness is generally the most
similar, especially towards the middle of utterances (e.g. 30–50%, ρ=0.59).
While some features, such as pitch highness, have much stronger span-for-span
correlations, other features, notably speaking rate, lengthening, and CPPS, have
correlations that are strong throughout the utterances. For example, speaking rate at
every span in an EN utterance correlates with speaking rate at every span in the
corresponding ES utterance. These findings are compatible with the idea that English
and Spanish prosody is overall roughly similar, but that the locations of local
prosodic events can vary, likely due to differences in word order and lexical accents.
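For reference, the correlation analysis reduces to the following sketch, assuming the matched pairs are stacked into two (n_pairs, 100) arrays (variable names are ours):

import numpy as np
from scipy.stats import spearmanr

def cross_language_correlations(en_feats, es_feats):
    # rho[i, j] = Spearman correlation of EN feature i with ES feature j,
    # computed across all matched utterance pairs.
    rho = np.empty((100, 100))
    for i in range(100):
        for j in range(100):
            rho[i, j], _ = spearmanr(en_feats[:, i], es_feats[:, j])
    return rho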
However, some correlations were much weaker. The lowest cross-language correlations for
the same features were for creakiness and peak disalignment, suggesting that these are
likely to have different functions in the two languages. There were also many
off-diagonal correlations. Most of these were unsurprising, such as the
anticorrelations between the speaking rate and lengthening features, but not all. For
example, intensity at the end of an EN utterance correlates with CPPS throughout an ES
utterance (EN 90–95% vs. ES 5–20%, 30–70%, and 80–100%, ρ≥0.3), while
no such relationship was found within either language. Examination of the ten pairs
that most closely reflect this pattern (EN high near final intensity and ES high CPPS),
showed that in half the speaker is preparing a follow-up explanation. Thus, we have
identified a pragmatic function that seems to be prosodically marked differently in EN
and ES. Figure <ref> shows the values for these two
features for one such pair.
§ PROSODIC DISSIMILARITY METRIC
To judge the quality of prosody transfer, we need a measure of how far the predicted
prosody diverges from the observed prosody in the human reference translation. If there
existed a synthesizer capable of realizing arbitrary prosodic specifications, we could
just use it and then use human perceptions of the match between the synthesized and
reference speech. However, no existing synthesizer is capable of this, especially for
the rich set of prosodic features we are investigating here. Metrics for
estimating similarity from prosodic feature representations do exist, such
as <cit.> and <cit.>, but these again are limited in the prosodic
features considered.
Accordingly, we propose a new simple metric. This estimates the dissimilarity of two
utterances as the Euclidean distance between their respective prosodic representations,
as computed in Section <ref>, with all features given equal
weight.
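In code, the metric, and the retrieval procedure used in the next paragraphs, amount to a few lines (function names are ours):

import numpy as np

def prosodic_dissimilarity(feats_a, feats_b):
    # Euclidean distance over the 100-dim representations, equal feature weights
    return float(np.linalg.norm(np.asarray(feats_a) - np.asarray(feats_b)))

def rank_by_similarity(anchor, candidates, k=4):
    # indices of the k most similar and k most dissimilar utterances to the anchor
    d = np.array([prosodic_dissimilarity(anchor, c) for c in candidates])
    order = np.argsort(d)
    return order[:k], order[-k:]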
We do not expect this metric to accurately match human perceptions, but we can hope
that it might be useful as a first-pass metric for judging prosodic dissimilarity. To
gauge this, we compared its outputs to our perceptions of a few dozen within-language
utterance pairs. To structure this process, we wrote software to randomly select an
utterance (the “anchor”) from the data and retrieve the four most similar utterances
and four most dissimilar utterances according to the metric. Ideally, perhaps, we would
have made holistic judgments of the degree of prosodic similarity between each
sample-anchor pair, but, probably like most people, we lack this ability. Instead, we
repeatedly listened and identified whatever similarities and dissimilarities we could
note, taking 2 or 3 minutes per pair to do so. The most salient of these were always at
the level of pragmatic function, rather than prosodic features, but we considered this
unproblematic, as the ultimate aim of prosody transfer is pragmatic fidelity, not
prosodic fidelity. We did this process for seven anchors and eight comparisons
utterances each, all from the English half of the data.
We found, first, that the metric captures many aspects of pragmatic similarity —
including speaker confidence, revisiting unpleasant experiences, discussing plans,
describing sequences of events, and describing personal feelings — all of which were
generally also prosodically similar. Table <ref> shows one set of
utterances to illustrate. The prosody of this anchor utterance suggested that the topic
is personal feelings: a slow then fast then slow speaking rate, a pause, and occasional
use of creaky voice. Each of the utterances rated similar by the metric shared these
qualities, albeit to varying degrees.
Second, we noted that the similarities found were not generally lexically governed.
While some words and syntactic structures have characteristic prosody, and some of the
pairs considered similar by the metric shared lexical content, such as music in
the fourth and fifth examples in Table <ref>, generally prosodic
similarity seemed to be orthogonal to lexical similarity.
Third, we noted that the metric does not always appear to match perceptions. To try to
understand its limitations and what needs improving, we examined examples where our
judgments diverged most from the metric's estimates, namely four which the metric
judged very similar but sounded rather different to us, including EN_025_1 in
Table <ref>, and two which we felt had significant similarities but
which the metric judged very different, including EN_024_1 in
Table <ref>. Of these, two pairs had very salient nasality
differences, which our model does not capture, and sounded very different in terms of
pragmatic function, specifically relating to the presumption of common ground. For
three pairs the problem seemed to be differences in syllable-aligned pitch and energy
contours, which are not directly represented by our features. However, for 50 of the 56
pairs examined, our judgments aligned with those of the model.
Thus, while the metric needs improving, overall we deemed it likely to be useful. We
consider these findings also to be evidence that our prosody representation is
meaningful. Accordingly, below we rely on both for evaluating the quality of prosody
transfer, as a way to obtain insight.
§ COMPARISON OF MODELING STRATEGIES
Our corpus and metric enable the evaluation of different models of the cross-language
prosody mappings. The task is, given the prosody of an utterance in the source
language, to predict the prosody of its translation in the target language. The error
is the dissimilarity between the inferred prosody and the prosody of the human
re-enactment. We here report the results for models in both directions, EN→ES and ES→EN, using the partition described in Table <ref>.
The first model is intended to represent the best that can be achieved with a typical
cascaded speech-to-speech model, with a synthesizer that operates in ignorance of the
input-utterance prosody. Our implementation relies on the lookup of the human-generated
translation in the target language, to avoid the impact of ASR or MT errors. We use
Whisper <cit.> to transcribe this to a word sequence with punctuation and
then use Coqui TTS[<https://github.com/coqui-ai/TTS>] to synthesize speech
from that transcription. To ensure a fair comparison, utterances incorrectly
transcribed were excluded from the data. Table <ref> reflects the 252
excluded utterances. To judge the quality of each output, we compute a representation
of the prosody of the synthesized speech using the method of
Section <ref>.
The second model predicts the prosody of the translation to be identical to the prosody
of the input: it trivially outputs the same representation. This “naive” model
embodies a strategy of directly transferring the input prosody.
The third model is trained by linear regression. Thus, each feature of the target
prosody representation is predicted as a linear function of the 100 features of the
input utterance.
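A sketch of the second and third models and the evaluation loop, under the assumption that the error is the mean Euclidean dissimilarity over the test pairs (names are ours):

import numpy as np
from sklearn.linear_model import LinearRegression

def fit_and_evaluate(X_train, Y_train, X_test, Y_test):
    # X: source-language features, Y: target-language features, both (n, 100)
    model = LinearRegression().fit(X_train, Y_train)      # one linear map, 100 -> 100
    Y_pred = model.predict(X_test)
    naive_err = np.linalg.norm(X_test - Y_test, axis=1).mean()  # "naive" baseline
    model_err = np.linalg.norm(Y_pred - Y_test, axis=1).mean()
    return naive_err, model_err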
Table <ref> shows the three models' overall average error. The
synthesizer baseline is outperformed by the naive baseline, suggesting that keeping the
same prosody in translation may be a reasonable basic strategy. The naive baseline is
in turn outperformed by the linear regression model, suggesting that even a simple
model can learn some aspects of the mapping between English and Spanish prosody.
While our simple linear model shows a benefit, its prediction error is still very high.
We think the likely factors include not only the existence of mappings too complex for
a linear model, but also the small size of the training data, the existence of free
variation implying a permissible margin of error for our metric, unmodeled dependencies
of target-language prosody on the source-utterance context and its lexical content, and
speaker-specific prosody behavior tendencies.
§ QUALITATIVE ANALYSIS
To better understand the challenges of cross-language prosody modeling, we examined
examples where the various models did well or poorly.
First, we examined the 16 examples in each direction whose synthesized prosody was
least similar to the human-produced target. The most common and salient differences
were: failure to lengthen vowels and vary the speaking rate for utterances where
speakers are thinking or expressing uncertainty or hesitation, failure to change pitch
at turn ends, and generally sounding read or rehearsed and thus unnatural for
conversational speech.
Next, we examined the 16 pairs for which the naive model did worst, that is, the cases
where the English and Spanish prosody diverged most. Often there were salient
differences, in a few common patterns, such as ES utterances being creakier than the
English, EN but not ES utterances ending with rising pitch, and EN utterances being
breathier in some regions. The latter two may reflect the common use of uptalk in
English, that is to say, the use of breathy voice and rising pitch to establish common
ground regarding a referent <cit.>, a pattern rare in the Spanish dialect
of our corpus. In other cases there were no highly salient differences; presumably,
these had multiple smaller differences which added up to a big difference according to
the metric.
Next, we examined the examples where the linear regression model provided the most
improvement relative to the naive baseline; unsurprisingly these were often cases where
it corrected for the divergences mentioned above.
Finally, we examined the highest-magnitude coefficients of the linear model. Most were
unsurprising and reflected correlations noted above. However, among the top three,
there was a –.32 coefficient relating EN lengthening over 5%–10% to ES CPPS over
0%–5%. This may reflect the tendency for EN speakers to start turns with fast speech
(low lengthening) but not ES speakers <cit.>, who perhaps tend instead to
start turns with more harmonic (higher CPPS) speech.
§ IMPLICATIONS AND FUTURE WORK
As we expected, these investigations indicate that effective cross-language transfer
will require attention to prosodic features beyond pitch and duration. These include at
least breathy voice, creaky voice, and intensity. We also found that the prosody of
some pragmatic functions, as they occur in dialog, differs in previously unsuspected
ways across languages. These include at least grounding, getting personal, leading into
something, and taking the turn. These findings suggest that well-designed prosody
transfer techniques will be important for effective speech-to-speech translation.
Finally, our results indicate that doing so has the potential to convey many more
pragmatic functions and intents that have been previously managed.
These investigations relied on a small corpus, a non-comprehensive prosody
representation, and a crude metric. The fact that these enabled us to obtain
interesting findings, is evidence for their utility. At the same time, all of these
need extensions and improvements, and doing so would enable future work to produce a
clearer and broader picture of what prosody is conveying in the two languages, how it
does it, and what the differences are.
In addition to such basic research, we envisage our findings informing the design of
speech-to-speech translation systems, potentially via two paths. In one path, for
end-to-end models, an improved version of our dissimilarity metric, properly extended
and tuned to model human perceptions, could serve as the loss function for training. In
the other path, for cascaded models, our analysis techniques could inform the design of
a specific prosody-transfer module, and inspire the development of synthesizers capable
of following a rich prosody specification and thereby conveying a wide range of
pragmatic functions. Given the unavoidable high cost and consequent low volume of
matched conversation data, either approach will most likely need to exploit
per-language or joint self-supervised training techniques.
We share all our data, code, and observations at our public repository:
<https://github.com/joneavila/DRAL>.
§ ACKNOWLEDGEMENTS
We thank Emilia Rivas for assistance with the data collection, Ann Lee, Benjamin
Peloquin, and Justine Kao for discussions, and UTEP URI for internal funding.
|
http://arxiv.org/abs/2307.04600v2 | 20230710143905 | Mass-stream trajectories with non-synchronously rotating donors | [
"David Hendriks",
"Robert Izzard"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE"
] |
Mass-transfer interactions in binary stars can lead to accretion
disk formation, mass loss from the system and spin-up of the
accretor. To determine the trajectory of the mass-transfer stream,
and whether it directly impacts the accretor, or forms an accretion
disk, requires numerical simulations. The mass-transfer stream is
approximately ballistic, and analytic approximations based on such
trajectories are used in many binary population synthesis codes as
well as in detailed stellar evolution codes.
We use binary population synthesis to explore the conditions under
which mass transfer takes place. We then solve the reduced
three-body equations to compute the trajectory of a particle in the
stream for systems with varying system mass ratio, donor
synchronicity and initial stream velocity.
Our results show that on average both more mass and more time is
spent during mass transfer from a sub-synchronous donor than from a
synchronous donor.
Moreover, we find that at low initial stream velocity the
asynchronous rotation of the donor leads to self-accretion over a
large range of mass ratios, especially for super-synchronous
donors. The stream (self-)intersects in a narrow region of parameter
space where it transitions between accreting onto the donor or the
accretor.
Increasing the initial stream velocity leads to larger areas of the
parameter space where the stream accretes onto the accretor, but
also more (self-)intersection. The radii of closest approach
generally increase, but the range of specific angular momenta that
these trajectories carry at the radius of closest approach gets
broader.
Our results are made publicly available.
binaries: close – stars: mass-loss – accretion
§ INTRODUCTION
Binary stellar systems are ubiquitous and the proximity of a star to a
companion introduces a variety of interactions. These interactions
lead to a range of phenomena like the stripping of the outer envelope
of a star and the transfer of mass and angular momentum
<cit.>, tidal interactions
<cit.>, the formation and evolution
of accretion disks <cit.>,
accretion induced supernovae <cit.>,
the (high velocity) ejection of companions
<cit.>, quasi chemically-homogeneous
evolution <cit.>, Be stars
<cit.>,
and circumbinary disk formation <cit.>.
A comprehensive review of these binary interactions is given in
<cit.>, but the most relevant
interactions to the current study are the transfer of mass and tidal
interactions between the stars. Both these interchange orbital and
rotational angular momentum of the system and the stars. Tidal
interactions circularise the orbit, i.e. reduce the eccentricity, and
synchronise the stars, i.e. force the stellar rotation rate to equal
the orbital rotation rate. Mass transfer, among other effects,
de-synchronises the stars by angular momentum transfer from the donor
to the accretor.
In a semi-detached system, where the accretor significantly underfills
its Roche-Lobe, the mass transfer process can be split into three main
parts: The ejection from the donor, the flight of the particles in the
potential between the stars, and the accretion onto the accretor
<cit.>. During the flight stage,
the gravitational interaction between the binary system and the
particle leads to a torque and subsequent exchange of angular momentum
between the binary system and the particle. It is during this stage
that the final outcome of the trajectory is determined, i.e. accretion
onto the companion star, accretion back onto the donor star or loss
from the system entirely. The flight stage is approximately ballistic,
and it is the stage that we focus on in this study.
The potential that is used to calculate when and how mass is
transferred from one star to the other is often calculated under the
assumption that the orbit of the binary system is circular and that
the donor rotates synchronously with the orbit
<cit.>. Together with the
approximation that the stars are point particles this setup is often
called the Roche potential (fig:schematic_overview_frame).
The points in this potential where accelerations vanish are called
Lagrange points. The first Lagrange point lies on the critical
equipotential surface and is located between the two stars. While
generalisations of the equipotential surface and the inclusion of
additional physical effects have been studied, binary
stellar-evolution codes often still use simplified analytical formulae
for the mass stream properties based on circular and synchronous
systems.
Some examples of extensions to this simple Roche model that relax some
assumptions, or add additional physics, are those that allow the
asynchronous rotation of the donor with respect to the orbital
rotation <cit.>,
eccentric orbits <cit.>, spin-orbit
misalignment <cit.>, effects of external radiation
<cit.> or combinations of these
<cit.>. These extensions
change the shape of the critical surface and the location of the
Lagrange points, most notably the first.
Asynchronous rotation of the donor induces time-dependent tides,
exerted by the companion, which then affect the potential. The
above-mentioned extensions that take the asynchronous rotation of the
donor into account <cit.> rest
on several assumptions. First, the shape of the donor is assumed to
conform instantaneously to the shape dictated by the
potential. Secondly, the motion of mass in the donor is assumed to
move primarily along the axis of rotation (i.e. primarily zonal,
instead of meridional). These two assumptions are called the first
approximation <cit.>. Asynchronous rotation
of the donor can occur due to, e.g., rapid expansion of the donor star
leading to sub-synchronous rotation when it fills its Roche Lobe.
Given L1 it is possible to calculate the initial conditions and
subsequent trajectory of the mass flow away from the donor star.
<cit.> analyse the behaviour of
donor material at L1 and the trajectory of the stream of matter
flowing from L1 to the accretor. Their perturbative analysis provides
mass-transfer stream properties over a range of orbital configurations
of the binary based on ballistic trajectories of particles in the
Roche potential. Critical to the study of
<cit.> are the assumptions that the
donor rotates synchronously with the orbit, that the stream at L1 has
a low thermal-velocity (cold) of compared to the orbital velocity,
that the gas remains isothermal throughout the flow, and that the mass
contained in the stream is negligible compared to the total mass of
the system. <cit.> provide
analytical fits to this data and study the response of the accretor
when the mass-transfer stream either directly impacts the accretor or
misses the accretor and forms an accretion
disk. <cit.> calculates properties of the mass
transfer in non-synchronous rotating donors, including the effects of
kinematic acceleration due to the bulging motion of the donor star as
a result of its non-synchronicity. <cit.>
and <cit.> study the effect of initial
thermal-velocity of the stream particles on the location of hotspots
in cataclysmic variable
systems. <cit.> and
<cit.>
calculate the ballistic trajectories to include in their osculating
orbit calculations and consider asynchronous rotating donors. They do
not make the results of these calculations public, however.
The aim of our paper is to publicly release interpolation tables that
contain the results of our ballistic-stream trajectories calculations
over a wide range of mass ratios and degrees of asynchronicity of the
donor, as well as mass-stream surface areas and initial thermal
velocities at L1. These can be used in combination with osculating
orbit calculations <cit.>, and as tables in stellar
evolution codes like
<cit.> and population synthesis
codes like <cit.> or
<cit.>.
Our paper is structured as follows. In sec:theory we explain
the theoretical basis of our project, and in sec:Method we
lay out the methods used to calculate our ballistic trajectories and
our approach to dataset interpolation. In sec:results we show
the results of our ballistic trajectory calculations for several
initial properties of the mass transfer stream. We discuss and
conclude in Sections <ref>
and <ref>. sec:fiduc-source-distr provides a
description of our interpolation datasets, and
sec:lagrange-point-plot contains a visual overview of the
first three Lagrange point locations in two different frames of
reference.
§ THEORY
In this section we lay out the theoretical basis of the calculations
of the trajectory of a particle flowing through L1. We first determine
the potential that the particle experiences when attached to the donor
star and when moving freely through the system, and we then determine
the cross-sectional surface area of the stream and the initial
velocity of the particles L1.
§.§ Generalised Roche potential and Lagrange points
To calculate the particle trajectory through the potential of the
binary system, we consider the reduced three-body problem in a
Cartesian coordinate system Oxyz in the co-rotating frame of the
binary, which rotates with angular frequency ω, with the origin
O of the frame of reference located on the centre of mass of the
system <cit.>. The x-coordinate
is defined parallel to the line connecting the centres of the stars,
the y-coordinate defined perpendicular to the x-coordinate and in
the plane of the orbit and the z-coordinate perpendicular to the
orbital plane. Throughout our calculations we consider particle motion
only in the plane of the orbit, i.e. z = 0.
The donor and accretor are regarded as point masses,
M_don and M_acc, with their positions fixed
at x_don = [-μ_acc, 0] and
x_acc = [1-μ_acc, 0]
respectively, where
μ_acc = M_acc/(M_don + M_acc), and $q$ = M_acc/M_don. Our units of length, time,
velocity, and potential are the semi-major axis a, the inverse
orbital frequency ω^-1, the orbital velocity aω, and
a^2ω^2 respectively, unless otherwise indicated.
A particle freely moving in a binary star system in a co-rotating
frame experiences the gravitational potential of both stars, and a
centrifugal potential due to the co-rotation, and a Coriolis force due
to movement relative to the co-rotating frame. When we assume that
both stars are centrally condensed, i.e. the Roche model, the
potential is,
\begin{equation}
\Phi(x,y) = -\frac{\mu_{\rm acc}}{\left[(x-1+\mu_{\rm acc})^{2}+y^{2}\right]^{1/2}}
 - \frac{1-\mu_{\rm acc}}{\left[(x+\mu_{\rm acc})^{2}+y^{2}\right]^{1/2}}
 - \frac{1}{2}\left(x^{2}+y^{2}\right) .
\end{equation}
This is valid for a freely moving particle, i.e. not inside either
star, because there is no other force acting on the particle. This
potential is also valid to calculate the critical surface beyond which
mass starts flowing away from the donor, in the case the donor rotates
synchronously with the orbit and its rotation is along an axis
parallel to the orbital rotation. We show an example of the Roche
potential in fig:schematic_overview_frame.
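In code, this potential (in our dimensionless units, with a = ω = 1) is simply, for example:

import numpy as np

def roche_potential(x, y, mu_acc):
    # donor at x = -mu_acc, accretor at x = 1 - mu_acc, origin at the centre of mass
    r_acc = np.sqrt((x - 1 + mu_acc)**2 + y**2)
    r_don = np.sqrt((x + mu_acc)**2 + y**2)
    return -mu_acc/r_acc - (1 - mu_acc)/r_don - 0.5*(x**2 + y**2)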
To calculate the location at which mass starts flowing from the donor
we need to find the critical surface of the donor, i.e. the last
surface at which the net inward force of the potential is balanced by
the pressure of the star. We assume the rotation of the donor is in
the same direction as the orbit of the binary system, the dynamic
timescale is shorter than the tidal timescale, and that the orbit is
circular, in the rest of our study. The potential felt by a
non-synchronously rotating donor is
\begin{align}
\Phi_{\rm don}(x,y,f) = &-\frac{\mu_{\rm acc}}{\left[(x-1+\mu_{\rm acc})^{2}+y^{2}\right]^{1/2}}
 - \frac{1-\mu_{\rm acc}}{\left[(x+\mu_{\rm acc})^{2}+y^{2}\right]^{1/2}} \nonumber\\
&- \frac{f^{2}}{2}\left(x^{2}+y^{2}\right) - (f^{2}-1)\,\mu_{\rm acc}\,x .
\end{align}
Here the potential acting on the donor depends on the synchronicity
factor,
f = Ω_don / ω,
where Ω_don is the rotation rate of the donor.
We calculate the location of the first three Lagrange points of the
donor, determining the critical equipotential surface, by taking the
derivative of the potential in eq:roche_potential_COM_don with
respect to x and setting y = 0,
\begin{equation}
\frac{{\rm d}\Phi_{\rm don}(y=0)}{{\rm d}x} =
\frac{(1-\mu_{\rm acc})\,(x+\mu_{\rm acc})}{|x+\mu_{\rm acc}|^{3}}
 + \frac{\mu_{\rm acc}\,(x+\mu_{\rm acc}-1)}{|x+\mu_{\rm acc}-1|^{3}}
 - f^{2}x - \mu_{\rm acc}(f^{2}-1) = 0 .
\end{equation}
We solve this equation for x which gives the first three Lagrange
points. In sec:lagrange-point-plot we show these points for a
selection of $f$.
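Numerically, L1 follows from bracketing the root of the derivative between the two stars with a standard root finder; the sign-correct form below is valid on both sides of each star (function names are our own):

import numpy as np
from scipy.optimize import brentq

def dPhi_dx(x, mu_acc, f):
    # d(Phi_don)/dx along y = 0
    s_don = x + mu_acc        # offset from the donor centre
    s_acc = x + mu_acc - 1.0  # offset from the accretor centre
    return ((1 - mu_acc)*s_don/abs(s_don)**3 + mu_acc*s_acc/abs(s_acc)**3
            - f**2 * x - mu_acc*(f**2 - 1))

def lagrange_L1(mu_acc, f, eps=1e-9):
    # L1 is the root between the stars
    return brentq(dPhi_dx, -mu_acc + eps, 1 - mu_acc - eps, args=(mu_acc, f))

# Sanity check: equal masses, synchronous donor -> L1 at the centre of mass
print(lagrange_L1(0.5, 1.0))   # ~0.0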
In the potential acting on particles in the donor
(eq:roche_potential_COM_don) we assume that the dynamical
timescale of the donor is much shorter than the timescale of the tides
induced by the secondary star and the non-synchronous rotation of the
donor <cit.>, and thus the potential
is approximately static. We express the validity of this approximation
as
\begin{equation}
\eta_{\rm static} = \frac{P_{\rm orb}}{\tau_{\rm dyn,\,don}\;\alpha(e, f, \nu)} \gg 1 ,
\end{equation}
where P_orb is the orbital period of the system,
$\tau_{\rm dyn,\,don} = \sqrt{R^{3}/(2GM_{\rm don})}$ is the
dynamical timescale of the donor where R is its radius and
M_don is its mass, G is the gravitational constant, and
α(f, e=0, ν=0) = |1-f|
is generally a function of synchronicity $f$, eccentricity e
and mean anomaly ν, but here we focus on circular systems
(i.e. e=0, ν is irrelevant)
<cit.>.
Here $\alpha = 2\pi/(\tau_{\rm tide}\,\omega)$, where $\tau_{\rm tide}$ is the
timescale on which the tides induced by asynchronous
rotation operate. If η_static≫ 1, the response of
the donor to a change in the potential is much faster than the
timescale of the tides induced by the asynchronous rotation of the
donor. The potential can then be regarded as static.
§.§ Mass-stream particle properties
In this section we describe the relevant properties of the particles
in the mass stream at and around the first Lagrange point, L1.
§.§.§ Thermal velocity of stream particles at L1
The initial velocity with which material flows through L1 is set by
the thermal velocity of the material at L1
<cit.>. The thermal velocity, $v_{\rm th}$,
depends on the properties of the photosphere of the donor,
\begin{equation}
v_{\rm th} = \frac{\tilde{v}_{\rm thermal}}{a\omega}
 = \sqrt{\frac{3kT_{\rm eff,\,don}}{m}}\,\frac{1}{a\omega}
 = \sqrt{\frac{3kT_{\rm eff,\,don}}{\mu_{\rm phot,\,don}\,m_{\rm a}}}\,\frac{1}{a\omega} ,
\end{equation}
where ṽ_thermal is the dimensionful thermal
velocity, k is the Boltzmann constant, T_eff, don is
the effective temperature of the donor, m and
μ_phot, don are the average mass and the mean
molecular weight of the particles in the photosphere respectively,
m_a is the atomic mass unit, a is the semi-major axis
of the system and ω is the orbital frequency of the
system. Here we have assumed the equation of state behaves like an
ideal gas.
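A direct implementation of eq:thermal_velocity might look as follows; the function name and SI unit choices are ours.

```python
import numpy as np

K_B = 1.380649e-23       # Boltzmann constant [J / K]
M_A = 1.66053906660e-27  # atomic mass unit [kg]

def v_thermal(T_eff_don, mu_phot_don, a, omega):
    """Dimensionless thermal velocity at L1 (eq. thermal_velocity).

    T_eff_don   : effective temperature of the donor [K]
    mu_phot_don : mean molecular weight of the photospheric particles
    a           : semi-major axis [m]
    omega       : orbital angular frequency [rad / s]
    """
    v_dim = np.sqrt(3.0 * K_B * T_eff_don / (mu_phot_don * M_A))  # [m / s]
    return v_dim / (a * omega)
```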
§.§.§ Stream surface area at L1
The mass-transfer stream at L1 has a non-zero surface area such that
particles are distributed around L1. We calculate the surface area of
the stream, A_stream
<cit.>, assuming a circular cross-section,
as,
A_stream = Ã_stream/a^2 =
2π k T_eff, don/(μ_phot, don m_a μ_acc ω^2)
× {g(f)[g(f)-(1+f)^2]}^-1/2 1/a^2,
where Ã_stream is the dimensionful stream area and
f is the synchronicity factor
(eq:synchronicity_factor). The geometric factor, g(f),
is,
g(f) = 1/d_L1, don^3 + q/(1-d_L1, don)^3,
where d_L1, don is the distance from the centre of the
donor to L1 in terms of the separation of the binary system.
We reformulate the mass stream area in terms of the thermal-velocity
of the particle at L1, as,
A_stream =
2π/3 v_th^2 (1+q)
× {g(f)[g(f)-(1+f)^2]}^-1/2.
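The sketch below evaluates the stream cross-section and the corresponding diameter of the assumed circular cross-section. Note that the factors (1+q) and (1+f)^2 are our reconstruction of eq:stream_area_vthermal and should be checked against the cited source before use.

```python
import numpy as np

def g_factor(d_L1_don, q):
    """Geometric factor g(f); f enters through the L1 distance d_L1_don(f, q)."""
    return 1.0 / d_L1_don**3 + q / (1.0 - d_L1_don)**3

def stream_area(v_th, q, f, d_L1_don):
    """Dimensionless stream cross-section at L1 (eq. stream_area_vthermal)."""
    g = g_factor(d_L1_don, q)
    return (2.0 * np.pi / 3.0) * v_th**2 * (1.0 + q) \
        / np.sqrt(g * (g - (1.0 + f)**2))

def stream_diameter(area):
    """Diameter of the assumed circular cross-section."""
    return 2.0 * np.sqrt(area / np.pi)
```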
fig:combined_area_plot (a) shows the stream diameter,
d_stream, as a function of the thermal-velocity, v_th,
mass ratio q and synchronicity factor f. The solid line indicates
q = 1 and f = 1, and the grey transparent area indicates
the extent of diameters spanned by the ranges of q and
f. At fixed thermal-velocity, the extent of stream diameters
spans about a factor of 3-4, and from v_th ⪆ 0.06
the diameter of the stream reaches a significant fraction
(d_stream ⪆ 0.1) of the separation of the
system. fig:combined_area_plot (b) shows the ratio of the
stream diameter and the thermal-velocity as a function of mass ratio
and synchronicity factor. Overall, in most of the parameter space this
ratio does not exceed ≈ 0.7, except for f > 1.1 and
q < 10^-1. Only in the extreme case of f ⪆ 1.5
and q ≈ 10^-2 does the ratio exceed unity, indicating
that for most of the parameter space the stream diameter is close to
that of a synchronous and equal mass-ratio system.
The density distribution in the stream at L1 is approximately Gaussian
<cit.>,
ξ(l̃) = η e^-l̃^2/2σ^2,
where the reduced position offset |l̃| < 1, the position offset
l = l̃ √(A_stream/π), and σ = 0.4 such that at
l̃ = ±1 the density equals that of the photosphere of the donor
<cit.>, and,
η = 1/∫_-1^1ξ(l̃) dl̃.
In a given system with q, f and v_th, we calculate
trajectories with N_A stream equally-spaced initial
positions relative to L1, sampled in the range [-d_stream/2,
d_stream/2], and weigh each according to
eq:stream_density_distribution. We use these trajectories to
calculate averaged quantities.
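In code, the sampling and weighting of eq:stream_density_distribution can be sketched as follows; normalising the discrete weights to unit sum plays the role of η.

```python
import numpy as np

SIGMA = 0.4  # width of the density profile (eq. stream_density_distribution)

def stream_samples_and_weights(d_stream, n_samples):
    """Equally spaced offsets across the stream cross-section and their
    normalised Gaussian weights."""
    offsets = np.linspace(-0.5 * d_stream, 0.5 * d_stream, n_samples)
    l_tilde = offsets / (0.5 * d_stream)           # reduced offset, |l~| <= 1
    weights = np.exp(-l_tilde**2 / (2.0 * SIGMA**2))
    weights /= weights.sum()                       # discrete analogue of eta
    return offsets, weights
```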
§ METHOD
In this section we explain how we calculate the trajectory of a
particle and how we classify its trajectory in the potential of
sec:roche-potent-reduc, as well as how we calculate the relevant
properties of mass transfer in a binary population.
§.§ Particle trajectories in the Roche potential
In the following sections we explain our method of calculating the
trajectory of particles in the Roche potential.
§.§.§ Reduced three-body equations and ballistic integration
The trajectory of a particle is found by integrating the equations of
motion of the particle in the rotating frame,
ẍ = -∂Φ/∂ x + 2 ẏ
and
ÿ = -∂Φ/∂ y - 2 ẋ,
where x and y are the position components of the particle with
respect to the centre of mass of the binary system, ẋ and
ẏ the velocity components of the particle, ẍ and
ÿ the acceleration components of the particle, and Φ is
the potential experienced by the particle
(eq:roche_potential_COM_particle). The first terms in
equations <ref>
and <ref> are the gradient of the potential
and the second terms are the Coriolis force in each direction.
We calculate the specific energy and angular momentum of the particle
in the inertial frame, with respect to the centre of mass, using
quantities defined in the co-rotating frame,
ε = Φ + 1/2(ẋ^2 + ẏ^2) + x^2 + y^2 + xẏ-ẋy
and
h = x^2 + y^2 + xẏ-ẋy,
respectively, in units a^2ω^2 and a^2ω.
In the circular reduced three-body problem, the only first integral of
motion is the Jacobi constant <cit.>,
C = Φ + 1/2(ẋ^2 + ẏ^2) = ε-h,
which is the difference between the energy and the angular momentum of
the particle with respect to the observer frame. We use the Jacobi
constant to determine the accuracy of our calculations.
§.§.§ Initial position and velocity
We integrate trajectories from a given initial position,
x_i, relative to L1 and initial velocity,
v_i, relative to the co-rotating frame.
The initial position is,
x_i = x_minor offset + x_stream area offset.
Here x_minor offset = [δ x, 0] is a
minor offset to prevent the particle starting exactly on L1, where
δ x = |x_L_1-x_acc|/100,
x_acc is the position of the accretor, and
x_L_1 is the x-coordinate of L1, and
x_stream area offset = [x_stream area offset, 0] is an offset to sample the surface area of
the stream at L1 (sec:stream-surface-area).
The initial velocity is,
v_i = v_non-synchronous offset + v_thermal,
where v_thermal = [v_th, 0] is the
thermal velocity of the particle in the stream
(sec:therm-veloc-stre-1).
v_non-synchronous offset = [0, (f-1)d_don, L1] is the velocity relative to the co-rotating
frame due to the non-synchronous rotation of the donor, and
d_don, L1 is the normalised distance from the centre of
the donor to L1 <cit.>. The
synchronicity changes the tangential velocity offset in two ways. It
determines the angular velocity offset, f-1, and it affects the
distance, d_don, L1 (eq:critical_surface and
fig:lagrange_point_plot). We show the y-component of
v_non-synchronous offset as a function of
f and q in
fig:non_synchronous_rotation_schematic. Generally, the higher the
mass-ratio, the lower the velocity offset due to asynchronous
rotation. This is due to the increasingly smaller size of the donor
relative to the system. At low mass-ratio this effect is reversed, and
there is a clear asymmetry: at low synchronicity (f ∼ 0.2)
the velocity offset is larger in absolute terms than at high
synchronicity (f ∼ 1.8). This is because the L1
point moves outward for lower synchronicity, which increases the
velocity offset.
We show the initial position and velocity components for an equal-mass
binary (q = 1) with a sub-synchronously rotating donor
(f = 0.6) and a hot stream (v_th = 0.1) in
fig:initial_position_velocity_schematic, where the thick
black and red arrows indicate the position and velocity vectors
respectively, and the thin dashed lines indicate their component
vectors.
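A sketch of the assembly of the initial state vector is given below; the sign of the minor offset (towards the accretor) is our assumption.

```python
import numpy as np

def initial_conditions(x_L1, x_acc, d_don_L1, f, v_th, stream_offset):
    """Initial state vector [x, y, xdot, ydot] for one stream particle.

    stream_offset : sampled position offset along the stream cross-section
    """
    delta_x = abs(x_L1 - x_acc) / 100.0   # minor offset away from L1
    x0 = x_L1 + delta_x + stream_offset   # offset assumed towards the accretor
    y0 = 0.0
    vx0 = v_th                            # thermal velocity, radial component
    vy0 = (f - 1.0) * d_don_L1            # non-synchronous tangential offset
    return np.array([x0, y0, vx0, vy0])
```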
§.§ Integration method
We calculate ballistic trajectories by solving the equations of motion
(equations <ref>
and <ref>) with the explicit
Runge-Kutta method of order 5(4) of Dormand & Prince, via the dopri5
ODE solver
<cit.> from the Python
SciPy package
<cit.>. We use an adaptive
method that rejects the step and halves the time step if the relative
error on the Jacobi constant exceeds 10^-6. We terminate the
integration either based on a classification of the trajectory
(sec:classifying-and-averaging) or when the integrator fails
to conserve the Jacobi constant and the time step is shorter than
10^-20ω^-1.
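The following sketch illustrates this scheme, assuming eq:roche_potential_COM_particle is the synchronous (f = 1) form of eq:roche_potential_COM_don, as appropriate for a free particle in the frame co-rotating with the orbit. The step-size control is a simplified stand-in for the production code, and the classification-based termination is omitted.

```python
import numpy as np
from scipy.integrate import ode

def particle_potential(x, y, mu_acc):
    """Roche potential felt by a free co-rotating particle (f = 1 case)."""
    r_acc = np.sqrt((x - 1.0 + mu_acc)**2 + y**2)
    r_don = np.sqrt((x + mu_acc)**2 + y**2)
    return -mu_acc / r_acc - (1.0 - mu_acc) / r_don - 0.5 * (x**2 + y**2)

def rhs(t, s, mu_acc):
    """Equations of motion in the rotating frame (a = omega = 1)."""
    x, y, vx, vy = s
    r_acc3 = ((x - 1.0 + mu_acc)**2 + y**2)**1.5
    r_don3 = ((x + mu_acc)**2 + y**2)**1.5
    dphi_dx = (mu_acc * (x - 1.0 + mu_acc) / r_acc3
               + (1.0 - mu_acc) * (x + mu_acc) / r_don3 - x)
    dphi_dy = mu_acc * y / r_acc3 + (1.0 - mu_acc) * y / r_don3 - y
    return [vx, vy, -dphi_dx + 2.0 * vy, -dphi_dy - 2.0 * vx]

def jacobi(s, mu_acc):
    x, y, vx, vy = s
    return particle_potential(x, y, mu_acc) + 0.5 * (vx**2 + vy**2)

def integrate(s0, mu_acc, dt0=1e-3, t_end=50.0, tol=1e-6, dt_min=1e-20):
    """Step with dopri5; reject and halve the step if the relative Jacobi
    error exceeds tol; give up below dt_min (units of 1/omega)."""
    C0 = jacobi(s0, mu_acc)
    t, s, dt = 0.0, np.asarray(s0, float), dt0
    while t < t_end:
        # a fresh solver per step keeps the rejection logic simple
        solver = ode(rhs).set_integrator("dopri5")
        solver.set_initial_value(s, t).set_f_params(mu_acc)
        s_try = solver.integrate(t + dt)
        if abs(jacobi(s_try, mu_acc) / C0 - 1.0) > tol:
            dt *= 0.5
            if dt < dt_min:
                return t, s, False   # failed to conserve the Jacobi constant
            continue
        t, s = t + dt, s_try
    return t, s, True
```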
§.§ Classifying and averaging trajectories
For each set of parameters [, , ] we
integrate N_A, stream trajectories, each with a position
offset x_stream area offset, i and a
weighting w_A_stream
(sec:mass-stream-particle,
eq:stream_density_distribution).
The trajectories are classified by their behaviour and
outcome. Particles accrete onto either the accretor or the donor, or
are lost from the system. Classification happens during integration,
and changes how the calculation is terminated.
* Accretion onto accretor: Classified by motion towards
the accretor, away from the donor, away from L1, into a deeper
potential than L1, and within the Roche lobe of the
accretor. Terminated at the moment the particle starts moving away
from the accretor.
* Accretion onto donor: Classified by motion towards the
donor, away from the accretor, away from L1, into a deeper potential
than L1 and within the Roche lobe of the donor. Terminated at the
moment of classification.
* Lost from system: Classified by distance from centre of
mass >3. Terminated on classification.
We show an example of the different classifications in
fig:trajectory_classification_overview.
Of the trajectories that are not terminated for numerical reasons, we
calculate weighted averages of their properties.
We determine the fraction, β_acc, of our trajectories
that accrete onto the accretor,
β_acc = ∑_i ∈𝒞δ_i w_A stream, i/∑_i ∈𝒞 w_A stream, i,
where w_A stream, i is the weight of the sampled
position offset along the mass stream cross-section, 𝒞 is
the set of classified trajectories, and
δ_i =
1 if the classification of trajectory i is accretion onto the accretor,
0 otherwise.
We calculate the fraction that accretes back onto the donor,
β_don, in the same way as β_acc
(equations <ref>
and <ref>).
We calculate the fraction of trajectories that are lost from the system
as
β_lost = 1 - β_acc -
β_don.
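A compact sketch of these weighted fractions, assuming string labels for the classifications:

```python
import numpy as np

def accretion_fractions(classifications, weights):
    """Weighted outcome fractions (equations fraction_accretion_accretor
    and fraction_lost); classifications holds one label per classified
    trajectory, e.g. 'accretor', 'donor' or 'lost'."""
    classifications = np.asarray(classifications)
    weights = np.asarray(weights, float)
    w_tot = weights.sum()
    beta_acc = weights[classifications == "accretor"].sum() / w_tot
    beta_don = weights[classifications == "donor"].sum() / w_tot
    beta_lost = 1.0 - beta_acc - beta_don
    return beta_acc, beta_don, beta_lost
```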
We denote the total weight of all trajectories that are successfully
categorised with,
w_successful = ∑_i ∈𝒞 w_A stream, i,
and the total weight of all those that fail or are rejected with,
w_fail = 1-w_successful,
which can occur when our integrator is not able to conserve the Jacobi
constant within the minimum time step threshold
(sec:integration-method).
With these weights and fractions we can quickly identify how
successful our calculations are for a given set of parameters
[q, f, v_th], and how the trajectories are
classified.
§.§ Intersecting orbits
At each coordinate in our parameter space we evolve a set of
trajectories sampled along the stream diameter
(sec:stream-surface-area). We treat each of these
trajectories independently, even though these trajectories can cross
either themselves or each other.
To find intersecting trajectories we use the
SweepIntersectorLib[<https://github.com/prochitecture/sweep_intersector>],
which is a Python implementation of the Sweep line algorithm
of <cit.>.
Orbits that self-intersect are always flagged as such, but a trajectory
is only flagged as intersecting with others if its angle of
intersection with another trajectory exceeds a threshold angle
θ_crit. While the exact threshold angle is not strongly
motivated, we argue that low-angle intersecting trajectories would
merge and be well approximated by their weighted average, whereas high
angles of intersection could significantly change the outcome of both
trajectories.
fig:trajectory_classification_overview shows the different
types of intersection for a system with q = 10^-1.2,
f = 0.22, and v_th = 10^-0.5, and
N_A, stream = 12 equally spaced sampled trajectories.
At each coordinate we record the weighted fraction of trajectories
that self-intersect, as well as of those that intersect with other
trajectories, if their intersection angle exceeds the threshold.
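The production runs rely on the sweep-line implementation of SweepIntersectorLib; purely for illustration, the brute-force orientation test below shows what is being flagged (it is O(n^2) per trajectory pair and not what the library does internally).

```python
import numpy as np

def segments_cross(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 properly intersect (orientation test)."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1])
                       - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))

def crossing_angle(p1, p2, p3, p4):
    """Acute angle (radians) between the two segment directions, to be
    compared against the threshold angle theta_crit."""
    u = np.subtract(p2, p1)
    v = np.subtract(p4, p3)
    cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))
```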
§.§ Radii, specific angular momenta and torques
When the mass stream misses the accretor it loops back around and forms
an accretion disk. This disk forms at the circularisation radius,
r_circ, defined as the radius at which the specific angular momentum,
h_stream, min, acc, with respect to the accretor at the
moment of closest approach, r_min, equals that of a circular
Keplerian orbit around the accretor with radius
r_circ = h_stream, min, acc^2/μ_acc.
The specific angular momentum of a particle with respect to the
accretor is,
h_acc = (x-x_acc)^2 + (y-y_acc)^2 + (x-x_acc)ẏ-ẋ(y-y_acc),
= (x-1+μ_acc)^2 + y^2 +
(x-1+μ_acc)ẏ-ẋy.
We calculate h_stream, min, acc by evaluating
eq:angmom_wrt_acc at the radius of closest approach.
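In code, eq:angmom_wrt_acc and eq:circularisation_radius reduce to:

```python
def h_wrt_accretor(x, y, vx, vy, mu_acc):
    """Specific angular momentum w.r.t. the accretor (eq. angmom_wrt_acc);
    the accretor sits at (1 - mu_acc, 0)."""
    dx = x - 1.0 + mu_acc
    return dx**2 + y**2 + dx * vy - vx * y

def circularisation_radius(h_stream_min_acc, mu_acc):
    """Radius of the circular Keplerian orbit around the accretor with the
    same specific angular momentum (eq. circularisation_radius)."""
    return h_stream_min_acc**2 / mu_acc
```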
While in our ballistic trajectory calculations we implicitly assume
that the stream will miss the accretor and will form an accretion disk
around the star, many interacting binaries actually transfer mass
through direct-impact accretion. When the stream collides with the
accretor, i.e. direct-impact accretion with
r_stream < r_accretor, the specific angular
momentum of the stream (eq:angmom_wrt_acc) at that point is
different than at the point of closest approach during disk formation.
We calculate the specific angular momentum of the stream with respect
to the accretor as a function of the distance to the centre of the
accretor. This allows a more accurate determination of the specific
angular momentum accretion rate when the stream directly impacts the
accretor.
We record the (averaged) specific angular momentum of the stream at
fixed distances from the accretor, with a minimum distance of
d_stream min, a maximum distance of
d_stream max at N_radii equally spaced
radii, located at,
d_stream i = d_stream min + i ×(d_stream max - d_stream min)/N_radii.
Here d_stream i indicates the i-th radius from the
centre of the accretor in units of the Roche-lobe radius of the
accretor, at which we record the i-th specific angular momentum along
the stream h_stream i. We show a schematic example of
the locations at which we record the specific angular momentum of the
stream in fig:stream_interpolation_schematic.
§.§.§ Self-accretion torque
Accretion of (part of) the mass transfer stream back onto the donor
exerts a torque on the donor star. We calculate the specific angular
momentum of a particle at the moment of impact on the donor, with
respect to the donor,
h_don = (x-x_don)^2 + (y-y_don)^2 + (x-x_don)ẏ-ẋ(y-y_don),
= (x+μ_acc)^2 + y^2 +
(x+μ_acc)ẏ-ẋy.
We calculate the initial h_i, don and final
h_f, don specific angular momentum of a particle
accreting back onto the donor by evaluating eq:angmom_wrt_don
with the initial and final positions and velocities respectively, and
we use these specific angular momenta to calculate the total torque on
the donor due to self-accretion.
§.§ Properties of mass transfer in binary populations
To inform us of the ranges of , and we
should cover, we evolve a binary population with the rapid binary
population synthesis framework <cit.>, which is based on the
algorithm from <cit.>, and makes use of the single star
models of <cit.> and provides
analytical fits to their evolution as in
<cit.>.
Specifically relevant to this study are the tidal interactions between
binary stars. These are implemented as in
<cit.>, in which dynamical tides are
based on <cit.> and equilibrium tides are based
on <cit.>.
Our population contains binary systems with an initial primary mass
M_1, secondary mass M_2 and orbital period P, and we assign
weights to each system according to the distribution functions of
their birth properties of <cit.>.
M_1 is sampled logarithmically in the range 0.8 to 120 M_⊙.
M_2 is sampled from a flat mass-ratio distribution between
0.1 M_⊙/M_1 and 1.
P is sampled from a logarithmically-spaced distribution of periods
between 1 day and 10^8 days.
We evolve
N_M_1× N_M_2× N_P = 80 × 80 × 80
binary systems sampled with the distributions described above at
near-solar metallicity (Z = 0.02).
During Roche-lobe overflow we record the mass-transfer quantities
q, f and v_th, and we weigh them by the
time spent transferring mass and by the mass transferred,
W_time, i = p_i dt [yr]
W_mass, i = p_i dt Ṁ_don [M_⊙],
where W_time, i is the time-weighted probability,
W_mass, i is the mass-weighted probability, p_i is
the probability of the i-th system according to the distribution
functions of <cit.>, dt is the time-step taken in the population
synthesis code and Ṁ_don is the mass-transfer rate of
the donor.
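In code, the two weights for a single recorded time-step are simply:

```python
def mass_transfer_weights(p_i, dt, mdot_don):
    """Time- and mass-weighted probabilities for one recorded time-step
    (equations weight_time and weight_mass)."""
    w_time = p_i * dt               # [yr]
    w_mass = p_i * dt * mdot_don    # [Msun], with mdot_don in Msun / yr
    return w_time, w_mass
```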
Based on our results of our binary population, we determine the
parameter ranges for our ballistic interpolation calculations
(tab:interpolation_table_properties). We use these ranges to
span a hypercube of initial parameters for our ballistic calculations.
§ RESULTS
We present our results in the following sections. First, we show our
binary population which contain data on the properties of the mass
transfer in many systems. We then take these results and use them to
determine the ranges of the parameters in our trajectory
calculations. We then show our ballistic trajectory results for
“cold” (narrow and slow) and “hot” (wide and fast) streams.
§.§ Mass transfer in binary populations
With the results of our stellar population generated in
sec:expl-param-rang, we calculate the ranges of the
parameters of interest in a population of interacting binary
systems. Our results include the average time spent, and the average
mass transferred, of each system configuration.
fig:exploration_results_parameters shows the distributions of
the parameters of interest, weighted either by time spent transferring
mass or mass transferred. We normalise the area under each of the
curves to unity, and we define values <10^-5 as rare and indicate
them by a green horizontal line.
fig:exploration_results_parameters (a) shows the logarithmic
thermal-velocity, log_10(v_th),
distributions. All systems have thermal-velocities between 10^-3.5
and 10^-0.5.
fig:exploration_results_parameters (b) shows the
synchronicity fraction, f, distributions. These are
mostly between 0 and 2, with peaks around both 0 and 1 for
both the time-spent and mass-transferred
weights. While the time-spent distribution peaks at synchronous
rotation rates (f = 1), the mass-transferred
distribution peaks at very sub-synchronous rotation rates
(f ∼ 0). There is a large tail of synchronicity
fractions from f = 2 to
f ≃ 10, but their probability is low.
fig:exploration_results_parameters (c) shows the mass ratio,
log_10(q), distribution. We see a single main range
between log_10(q) = -2 and 2 for both the time-spent
and mass-transferred weights. The data show that at small mass
ratios (q < 1) hardly any time is spent transferring mass
(probabilities up to 10^-4), while at larger mass ratios
(q > 1) the opposite is true. This is understood by the mass-ratio
reversal during mass transfer and the transition from thermal-timescale
mass transfer (high mass-transfer rate, short time) to
nuclear-timescale mass transfer (low mass-transfer rate, long time).
We show the distributions of the logarithm of the ratio of the
dynamical timescale of the donor to the tidal timescale,
log_10(η_static) in
fig:exploration_results_alpha. We indicate equal-valued
timescales, log_10(η_static) = 0
with a red-dashed vertical line. The area on the right of this line
indicates that the static-tide approximation is justified, and vice
versa. The numbers in the legend indicate the total fraction for
either weights with
log_10(η_static) < 0. The data
show a broad range of
log_10(η_static), and clearly
show that in terms of time-spent transferring mass, the static
approximation is overall valid (less than 0.1 per cent below
log_10(η_static) = 0). This is
not always the case for the mass-transferred, because a significant
fraction (13 per cent) of all mass transferred occurs when the
static-tide approximation is invalid.
We show the normalised distribution of
log_10(η_static) as a function
of f in
fig:exploration_results_alpha_vs_synchronicity, where in
fig:exploration_results_alpha_vs_synchronicity (a) we show
the distribution weighted by mass-transferred, and in
fig:exploration_results_alpha_vs_synchronicity (b) we show
the data in terms of time-spent transferring mass. We indicate 6
sections, separated by red-dotted lines. Section 1 indicates
super-synchronous (f > 1.025) systems where the potential is
approximately static
(log_10(η_static) ≥ 0), section
2 indicates near-synchronous systems
(0.975 ≤ f ≤ 1.025) with a static potential
(log_10(η_static) ≥ 0) and
section 3 indicates sub-synchronous systems (f < 0.975)
with a static potential
(log_10(η_static) ≥ 0). Section
4 indicates sub-synchronous systems (f < 0.975) where the
static approximation is not valid
(log_10(η_static) < 0, i.e. with
a dynamic potential), section 5 indicates near-synchronous
systems (0.975 ≤ f ≤ 1.025) with a dynamic potential
(log_10(η_static) < 0) and
section 6 indicates super-synchronous systems (f > 1.025)
with a dynamic potential
(log_10(η_static) < 0). The
range of f in the near-synchronous regions is determined by the
bin-width in our simulations.
fig:exploration_results_alpha_vs_synchronicity (a) shows that
the transferred mass is mostly transferred in three sections. Only
9.8 per cent of the systems normalised by mass-transferred are
synchronous and are well approximated by the static potential (section
2). The large majority (77.5 per cent) of transferred mass
takes place in systems with a sub-synchronous donor that still
responds rapidly enough to regard the potential as static (section
3). Most of the remaining systems (12.6 per cent) have donors
that rotate sub-synchronously for which the static potential
approximation does not hold (section 4). The rest of the
sections cover less than 0.08 per cent of all transferred mass,
which indicates that super-synchronous rotation does not occur much in
field binaries (<0.07 per cent), and especially not in cases where
the static potential approximation breaks down.
fig:exploration_results_alpha_vs_synchronicity (b) shows that
the time spent transferring mass is mostly spent in just two sections,
sections 2 and 3. Between these two sections, the
synchronous case where the static potential approximation holds
(section 2) covers 37.6 per cent of all time-spent
transferring mass. The majority, thus, is spent where systems have
donors that rotate sub-synchronously but effectively experience a
static potential. The contribution of the other regions is negligible
(<0.04), indicating that like in the mass-transferred case
super-synchronous rotation is not common in field binaries, but also
that not much time is spent in the case where the donor effectively
experiences dynamical tides.
With our results shown in fig:exploration_results_parameters
and fig:exploration_results_alpha_vs_synchronicity we
determine the parameter ranges for the trajectory simulations.
* For the thermal velocity, log_10(v_th), we
consider the range between -3.5 and -0.5 for our trajectory
calculations.
* For the synchronicity factor, f, we consider the range
between 0 and 2 for our trajectory calculations. A small
fraction of systems has f > 2, but they are clearly less
frequent.
* For the mass ratio, log_10(q), we use the range between -2 and
2 for our trajectory calculations.
These values are listed in
tab:interpolation_table_properties.
The above results indicate that sub-synchronous mass-transfer is
common, both for the time-spent (>60 per cent) and for the
mass-transferred (90 per cent). This further motivates the remainder
of this study.
§.§ Ballistic trajectory properties
In this section we show our results of the ballistic trajectory
calculations. While our results span a large parameter space, we
choose to highlight the two extreme cases, with the results for cold
and narrow streams (= 10^-3 and
≈ 10^-4-10^-3) in sec:cold-narrow
and hot and wide streams (= 10^-0.5 and
≈ 0.1-0.4) in sec:traj-prop-hot.
Before looking at the results let us highlight several effects that
are relevant to the evolution of the trajectories.
In sub-synchronous systems (f < 1) L1 moves outward relative to
the synchronous case, the velocity offset due to asynchronous rotation
at L1 is downward (v_non-synchronous offset is
negative), the Coriolis force for downward motion leads to a rightward
acceleration (a_Coriolis, y is positive), and at the
moment of release the particle is located within the Roche lobe of the
accretor.
In super-synchronous systems (f > 1) L1 moves inward relative
to the synchronous case, the velocity offset due to asynchronous
rotation at L1 is upward (v_non-synchronous offset is
positive), the Coriolis force for upward motion leads to a leftward
acceleration (a_Coriolis, y is negative) and at the
moment of release the particle is located within the Roche lobe of the
donor.
In low mass ratio systems (q < 1) the velocity offset due to
asynchronous rotation is larger relative to equal mass-ratio systems
due to the large size of the Roche-lobe of the donor and the velocity
is even higher for sub-synchronous rotation as L1 moves outward.
In high mass ratio systems (q > 1) the velocity offset due to
asynchronous rotation is smaller relative to the equal mass-ratio
systems due to the small size of the Roche-lobe of the donor.
These effects are visualised and quantified in Figures
<ref>,
<ref> and
<ref>, and eq:equations_of_motion_x.
§.§.§ Cold and narrow streams
We show our cold and narrow ballistic integrations,
v_th = 10^-3, in the ranges of mass ratio, q, and
synchronicity factor, f, described in
tab:interpolation_table_properties. From
fig:combined_area_plot we know that the stream diameter is
small, d_stream ≈ 10^-4-10^-3, so all the
trajectories sampled along the stream effectively have the same
initial position. From fig:non_synchronous_rotation_schematic
we know that for asynchronous systems (f ≠ 1) at low mass
ratios (q < 1) the initial radial velocity is low compared to the
tangential non-synchronous velocity offset,
v_non-synchronous offset, which indicates that results in
that part of the parameter space will deviate most from the
synchronous case explored by <cit.>.
In fig:rmin_low_v we show the radii of closest approach,
r_min, of particles that accrete onto the accretor as a function of
mass ratio, q (abscissa), and donor synchronicity, f
(colour scale). The triangles indicate the orientation of the
particle, where the upward triangle indicates prograde (same direction
as the binary orbit) orientation and the downward triangle indicates
retrograde (opposite direction). The red diamonds are from
<cit.>, and the blue dashed line
indicates the prescription of
<cit.>.
The radii of closest approach in synchronously rotating
(f = 1) donor systems match closely the results of
<cit.>.
Overall, in the range that covers the parameters of
<cit.> we find a good match,
confirming that our method works as it should, given the assumptions
and approach.
Our data show that super-synchronous donors only accrete onto the
accretor at high mass ratios (q > 10 and f > 1.5), with the
minimum q required for accretion onto the accretor decreasing with
decreasing f. At high mass-ratio the donor is not able to
exert enough force to turn the stream back onto itself, even though the
particle is released within its Roche lobe, due to its low mass.
With sub-synchronous donors we find an increase in the minimum
mass-ratio that accretes onto the accretor with decreasing
f. Moreover, for a given synchronicity factor, f, the radius
of closest approach decreases with decreasing mass ratio, q.
Systems with a low mass ratio and a low synchronicity factor
(q < 1 and f < 0.4) experience a high negative velocity
offset due to asynchronous rotation and, even though they initially
start in the Roche lobe of the accretor, they experience an
acceleration towards the donor because of the Coriolis force, which is
strong enough to steer the trajectory onto the donor.
Generally, in high mass-ratio systems, the effect of asynchronous
rotation on the radius of closest approach is small, with a spread of
only a factor of 2 at q ∼ 30. This is because the velocity
offset due to the asynchronous rotation for systems with mass-ratio
q ≥ 30 is generally low
(|v_non-synchronous offset| < 0.2,
fig:non_synchronous_rotation_schematic), so the trajectories
do not differ much from the synchronous case.
In fig:rcirc_over_rmin we show the ratio of circularisation
radius to radius of closest approach, r_circ/r_min, as a function of
mass ratio, q, and synchronicity factor, f. These data
measure the specific angular momentum at the radius of closest
approach and how much it differs from that of a circular orbit at the
same radius. Moreover, these data are used to calculate
the radius at which an accretion disk forms. The red diamonds are from
the radius at which an accretion disk forms. The red diamonds are from
<cit.> and the blue-dashed
horizontal line is from
<cit.>.
At high mass-ratios (q > 10), we see a general decrease of
r_circ/r_min with increasing mass ratio
q, regardless of the synchronicity factor, with a spread of at
most 0.2. This indicates that the specific angular momentum at the
radius of closest approach tends to that of a circular orbit at the
radius of closest approach, and that the asynchronous rotation of the
donor does not affect this quantity strongly either.
At mass-ratios q < 1 the trajectories of most asynchronous
donors accrete onto the accretor. All the trajectories have a ratio of
radii between 1.7 and 2.0, indicating that the stream carries much
more specific angular momentum at the radius of closest approach than
a circular orbit would. Because of its low mass, the torque exerted by
the accretor is insufficient to circularise the stream.
Overall, the ratio r_circ/r_min is between 1.3 and 2, indicating that the
stream always carries more specific angular momentum than a circular
orbit at the radius of closest approach would. Moreover, the commonly
used constant ratio of 1.7 from
<cit.> is up to 30 per
cent off.
In fig:self_accretion_specific_angular_momentum_factor we
show the fractional difference between the final
(h_f, don) and initial (h_i, don) specific
angular momenta (ordinate) of particles that accrete back onto the
donor as a function of mass ratio, q (abscissa), and
synchronicity fraction, f. The data show two distinct regions.
The trajectories from sub-synchronous donors (f ≤ 0.7) show
an increasingly larger final specific angular momentum
h_f, don compared to the initial specific angular
momentum, h_i, don, of the stream for decreasing
synchronicity factor, f. Moreover, the lower f, the
larger the range in mass-ratios, q, for which the stream
accretes onto the donor. This is because a larger deviation from
synchronism introduces a larger velocity offset, which requires an
increasingly massive accretor to completely turn the stream towards
itself. These trajectories all exert a positive torque on the donor
that leads to the donor becoming more synchronous.
Trajectories from super-synchronous donors show a decrease in specific
angular momentum relative to their initial specific angular momentum,
a decrease that grows as f increases. This is because of a decrease
in the angle of incidence with the donor with increasing asynchronicity
for super-synchronous donors, caused by a combination of a lower
velocity offset and an acceleration towards the donor, and vice versa
for sub-synchronous donors. Trajectories that accrete onto
super-synchronous donors all exert a negative torque that again leads
to the donor becoming more synchronous.
For both the super-synchronous (negative torque) and the
sub-synchronous (positive torque) self-accretion, the magnitude
of the difference between the initial and final specific angular
momenta increases, for a given synchronicity factor, with increasing
mass ratio q. At higher mass-ratios the trajectory is affected
more, due to the stronger gravitational effect of the accretor. This
increasingly affects the final angular momentum of the stream, which
leads to the increasing difference. In the low mass-ratio systems, the
stream angular momentum is hardly affected, and thus the difference
remains small (e.g. at q = 0.01,
h_f, don/h_i, don-1 > -10^-1 for
sub-synchronous donors, and
h_f, don/h_i, don-1 < 5×10^-1 for
super-synchronous donors).
We show the fractions of each classification as a function of mass
ratio, q (abscissa), and synchronicity factor, f
(ordinate, sec:classifying-and-averaging,
eq:fraction_accretion_accretor), in
fig:classification_fractions. fig:classification_fractions
(a) shows the fraction of all trajectories accreting onto the
accretor, fig:classification_fractions (b) shows those
accreting onto the donor and fig:classification_fractions (c)
shows those that are lost from the system. The colour-scale indicates
a non-zero fraction, where white indicates a fraction of zero. The
red lines indicate the fraction of all trajectories that failed to
evolve correctly (sec:integration-method and
eq:all_fail).
The data in fig:classification_fractions (a) show that, at
low mass-ratio (q < 0.1), only the near-synchronous donors
accrete onto the accretor. The region of synchronicity factor,
f, that corresponds to accretion onto the accretor extends
to both sub- and super-synchronous donors with increasing mass ratio
q. This is due to the decrease in velocity offset due to
asynchronous rotation with increasing q
(fig:non_synchronous_rotation_schematic). This reduces the
effect of asynchronicity and makes the trajectories
behave like those from synchronous systems. The asymmetry in the shape
of the fraction accreted onto the accretor is caused by the Coriolis
force, which accelerates the particle towards the accretor for
sub-synchronous donors and away from it for super-synchronous donors.
The data in fig:classification_fractions (b) show an exact
inversion of the data in fig:classification_fractions (a),
and fig:classification_fractions (c) shows that for the low
thermal-velocity (cold) streams there are no trajectories that escape
the system.
The transition between each region is sharp, caused by the narrow
stream associated with the low thermal-velocity (cold), which
indicates that for every classification at a given coordinate either
none (white) of the trajectories or all (yellow) of the trajectories
are classified as such. Moreover, we find no failing systems for our
cold and narrow trajectories.
fig:intersection_low_v shows the fractions of trajectory
intersections (sec:intersecting-orbits) as a function of
and , for low thermal-velocity (cold,
= 10^-3) streams. The red contours show the fraction of
self-intersecting trajectories, the blue contours show the fraction of
intersection with other trajectories. The dashed line indicates a
weighted fraction of at least 0.1 of all the trajectories, a dotted
line indicates a weighted fraction of at least 0.5 and the solid line
indicates a weighted fraction of at least 0.9 of all trajectories.
We find that self-intersecting orbits occur at the edges of the
transition regions between accretion onto the accretor and accretion
onto the donor (fig:classification_fractions). The fraction
is always high, since the stream is itself so narrow that the
trajectories stay bundled and follow approximately the same path.
Intersection with other trajectories, with angles of incidence above
the threshold , occurs in the same narrow region of
(, ) parameter space as the self-intersecting
orbits. This is because the stream is so narrow that the trajectories
effectively follow the same path as each other.
In fig:rmin_low_v and fig:rcirc_over_rmin we focus
on properties of the stream at its radius of closest approach to the
accretor. In many situations, though, the radius of the accretor
exceeds the radius of closest approach and the stream directly impacts
the accretor. In that case, the stream has less travel time through
the potential and experiences less torque by the binary system, which
affects the specific angular momentum of the stream upon impact with
the accretor.
We show the evolution of the specific angular momentum of the
mass-transfer stream as a function of its distance to the accretor and
the mass ratio for systems with synchronously rotating donors
(f = 1) in fig:stream_interpolation_low_v. The colour
indicates the specific angular momentum of the stream in units of that
of the specific angular momentum at the radius of closest
approach. The orange lines show five equally spaced contours where this
specific angular momentum is constant.
For all mass ratios the specific angular momentum of the stream starts
out higher than its value at the radius of closest approach. For
systems with high mass ratios the difference between the initial
specific angular momentum and that at r_min is
minor (a few per cent), but this difference increases with decreasing
mass ratio (up to ten per cent).
We note that the qualitative behaviour of the stream in systems with a
different synchronicity factor, f, and thermal velocity,
v_th, is not necessarily the same as described above.
§.§.§ Hot and wide streams
In this section we show the trajectory properties of systems with a
hot and wide stream (v_th = 10^-0.5 and
d_stream ≈ 0.1-0.4). Whereas in the low
thermal-velocity (cold) regime the stream area is negligible, here the
stream area is sufficiently large to cause a relevant offset
between the initial positions of the particles. Moreover, the high
thermal-velocity (hot) provides a large initial radial velocity
towards the accretor, and the Coriolis force subsequently provides a
large downward (negative y-direction) acceleration on the particles.
We show the radii of closest approach in our high thermal-velocity
(hot) calculations in fig:rmin_high_v.
Overall, we again find a small spread of radii (r_min ∼ 0.3-0.5)
at large mass-ratios (q = 100), but we now see a much larger
spread (r_min ∼ 0.01-0.8) at low mass ratios (q ∼
0.1). Notably, a wider range (f = 0.1-2.0) of initial
asynchronicities leads to accretion onto the accretor. This is due
to the larger (v_th ≈ 0.32) initial radial velocity that
makes it harder to deflect the stream.
Accretion onto the donor now only occurs for systems with a low
mass-ratio (q < 0.1, significantly lower than in the cold-stream
case), either with f < 0.4 or with f > 1.5
(fig:classification_fractions_high_v).
Sub-synchronous donors show a general increase of r_min with
decreasing synchronicity factor. This is due to the initially negative
transversal velocity from the sub-synchronous rotation directing the
stream further away from the accretor. This eventually leads to a
fraction of the stream escaping from the system, but for very
sub-synchronous rotating donors many trajectories self-intersect.
At low mass-ratios, super-synchronous donors show a general decrease
of r_min with increasing synchronicity factor, but this behaviour
turns around for highly super-synchronous donors (f > 1.7) at
low mass ratios (q < 0.1). This is because part of the stream
for these systems starts accreting onto the donor, and the
trajectories that do still accrete onto the accretor on average have a
large radius of closest approach. This region of parameter space
contains many (self-)intersecting trajectories
(fig:intersection_high_v), and since we do not treat
intersecting orbits differently, this indicates that this region
requires a more sophisticated approach than our current one.
We show the ratio r_circ/r_min in our high thermal-velocity (hot)
calculations in fig:rcirc_over_rmin_high_v.
At high mass-ratios (q > 1), while as a function of mass ratio
the results are similar to the low thermal-velocity (cold) case,
i.e. a lower r_circ/r_min with a higher mass ratio, the behaviour
as a function of synchronicity is now reversed. The lower the
synchronicity, the lower the ratio r_circ/r_min, indicating that the
trajectories at the radius of closest approach are, on average, similar
to circular orbits at that radius, and vice versa.
At small mass-ratios (q < 1) the behaviour is similar to the above,
but from f ≤ 0.4 the sub-synchronous systems show an
increasingly high ratio r_circ/r_min with decreasing mass ratio
q. This coincides with regions of the parameter space where part
of the stream either escapes from the system or starts accreting onto
the donor. The remaining trajectories that only barely avoid escaping
often fall back into the Roche-lobe of the accretor nearly radially, or
they reach their radius of closest approach very early in their
trajectory. For these trajectories, the first radius of closest
approach is potentially not suitable to determine the angular momentum
of the ring that would form when the stream circles around the
accretor and hits itself.
We show the ratio of final and initial specific angular momenta of
self-accreting material in our high thermal-velocity (hot)
calculations in
fig:self_accretion_specific_angular_momentum_factor_high_v
While again there are two distinct regions of positive and negative
torque of self-accreting material, both regions are smaller and
require a higher degree of asynchronicity (i.e. f > 1.5) and/or
a lower mass ratio (q < 0.1) to self-accrete. Systems with
super-synchronous donors that self-accrete tend to experience a higher
torque for a given mass ratio, e.g. for q = 0.01,
h_don, f/h_don, i-1 > -10^-1 for
v_th = 10^-0.5 compared to
h_don, f/h_don, i-1 = [-10^-2,
-10^-1]. The angles of incidence of these trajectories with the
donor are much larger, nearing perpendicular to its surface.
In fig:classification_fractions_high_v we show the fractions
of trajectories in each classification for the hot stream
calculations.
fig:classification_fractions_high_v (a) shows that, compared
to our low thermal-velocity (cold) results
(fig:classification_fractions), a larger fraction of mass
ratios and synchronicity factors accrete onto the accretor;
e.g. systems with 0.1 < q < 10 and f > 1.5 or f < 0.5 now
accrete onto the accretor instead of onto the donor, as in the low
thermal-velocity case (v_th = 10^-3.0). This is mainly
attributed to the larger initial radial velocity, which gives the
particles more momentum to start with and makes it more difficult to
change the course of their trajectories. This, in turn, leads to a
smaller region of the parameter space in q and f that
accretes back onto the donor.
fig:classification_fractions_high_v (c) shows the fraction of
trajectories that escape as a function of q and f. While
at low thermal-velocity (cold, v_th = 10^-3.0) there is no
trajectory that escapes, the high thermal-velocity
(v_th = 10^-0.5) allows trajectories to pass the accretor and escape
through the Lagrange point behind the accretor (at
x > x_acc). This primarily occurs in sub-synchronous
systems, again due to the Coriolis force accelerating the particle
towards positive x.
Overall, the data in fig:classification_fractions_high_v (a)
and (b) show that instead of the sharp transition between accretion
onto accretor and self-accretion, there is a much more gradual
transition between the regions where the fractions transition from 0
to 1 over a larger range of parameters. This is because the high
thermal-velocity leads to a wide stream, i.e. a wider range of initial
positions around L1 for our trajectories for a given system. In
systems with e.g. q = 0.1 and f = 1.9, about half of the
trajectories that make up the stream accrete onto the donor, and half
accrete onto the accretor.
Moreover, some trajectories fail to stay accurate within the given
minimum time step, but the total fraction of the failing systems is
negligible, and they only occur in small regions.
fig:intersection_high_v shows the intersection fractions for
high thermal-velocity (hot, v_th ≈ 0.32) mass
transfer. The structure of the figure is the same as in
fig:intersection_low_v.
We find that the self-intersecting orbits again occur on the edges of
the transition regions between accretion onto the donor and accretion
onto the accretor. The fraction itself is not always high, because the
stream is wider and parts of the transition region are more gradual
(i.e. for a wider range in q and f, parts of the stream can
accrete onto different targets). For sub-synchronous rotation
(f < 0.75) the region of parameter space where self-intersection
occurs is narrow and is more confined to the transition region than
self-intersection in super-synchronously (f > 1.75) rotating
systems. Super-synchronous systems with high thermal-velocity streams
have very wide streams, but the asynchronous velocity offset is lower
than in the equivalent sub-synchronous configurations
(f < 1.75,
fig:non_synchronous_rotation_schematic). This leads to
trajectories self-intersecting in a larger region of the parameter
space.
Intersection with other trajectories again coincides with regions of
self-intersection: for sub-synchronous systems the regions
overlap strongly, but for super-synchronous systems the region where
trajectories intersect with others extends to a larger part of the
parameter space (q < 0.5 and f > 1.25). The increase in
stream diameter in this regime leads to the initial conditions of each
trajectory being sufficiently different to cross at high angles of
incidence (> θ_crit).
Overall, the regions of self-intersection and intersection with other
trajectories are confined to regions of low mass-ratio (q < 0.5),
due to the higher radial velocity that occurs at high thermal-velocity,
which gives the stream more momentum and makes it harder to deflect or
rotate. Only at low mass-ratios is the donor massive enough to turn
the trajectories and lead to (self-)intersections.
§ DISCUSSION
We use binary population synthesis to evolve populations of binary
systems and record their properties during mass transfer. We do this
to find the ranges of the mass ratio of the accretor, q,
the synchronicity factor of the donor, f, and the thermal velocity
of the stream, v_th, that we should cover in our ballistic
stream trajectory calculations. At the same time we use these results
as a motivation for this study. Most notably, we find that mass
transfer takes place with non-synchronous donors for a significant
fraction of both the mass transferred (≈ 90 per cent) and the
time spent transferring mass (≈ 60 per cent,
fig:exploration_results_alpha_vs_synchronicity).
We find that the approximation of static tides does not always hold,
especially the fraction of mass transferred while the static
approximation fails is significant (≈ 10 per cent). This
indicates that mass transfer in those systems occurs in a
time-dependent potential, the effects of which are not captured by our
modelling approach and these systems likely require detailed stellar
evolution models and time-averaging to model the mass transfer
correctly.
We note that, while the results shown in
sec:mass-transfer-binary indicate the extent of the
parameters relevant to this study, they should be used just for
that. We currently calculate the population statistics for a starburst
population at a specific metallicity, and we do not convolve with any
star formation rate. This means that our population results are not a
directly observable quantity, even if the assumption of a single
metallicity is not entirely wrong for populations of dwarfs in the
solar neighbourhood
<cit.>. Moreover, our results
depend on the details of the population synthesis
calculations. Changes in, e.g., tidal interaction physics
<cit.> or birth distributions
<cit.> of the binary
components will change our results, although the extent to which is
not clear.
Recently, <cit.> performed calculations with a similar
approach to ours. While they do not provide a data release, the
behaviour of their stream models is described in some cases. They find
that, in all cases of self-accretion, the donor experiences a positive
torque, effectively spinning up the donor and removing angular
momentum from the orbit. This agrees with the results of
<cit.>, as well as with
those of <cit.>. The focus of all
these studies is on sub-synchronous donors. We find that
self-accretion onto super-synchronous donors lead to a spin-down of
the donor.
Our results imply that if a donor rotates asynchronously and
self-accretes, this self-accretion always works to synchronise the
donor even if it rotates super-synchronously.
We capture the effects of a large mass transfer stream cross-section
by simulating a set of trajectories with initial position offsets
along the stream. We treat these trajectories as individual, and we do
not include any interaction between these trajectories. In some cases,
however, the trajectories along the mass stream intersect at large
angles with other trajectories
<cit.>. Realistically, these would be swept
up by parts of the stream with a higher density and momentum
<cit.>. We track whether trajectories
intersect with either themselves or with others
(sec:intersecting-orbits) and we find self-intersection and
intersection with other trajectories (at angles larger than the
threshold) occurs primarily in the transition regions between
accretion onto the accretor and accretion onto the donor
(Figures <ref>
and <ref>). Especially in the high
thermal-velocity (hot) stream super-synchronous cases we find that the
region where a high fraction of intersection with other trajectories
takes place (f > 1.25 and q < 0.5) extends to a larger part of
the parameter space than the region where self-intersection occurs
(f > 1.75 and q < 0.25,
fig:intersection_high_v). The very wide stream causes the
particles along it to have a large spread in initial conditions and to
follow significantly varying trajectories.
We currently do not post-process any of these trajectories to alter
their outcome or to reject them based on intersection. The regions
where a high degree of (self-)intersection occurs likely require an
approach that is more sophisticated than approximating the stream by a
series of non-interacting ballistic trajectories.
Our ballistic approach imposes some assumptions on the starting
conditions of the particle, especially in asynchronous rotating
donors.
We take the transversal velocity offset due to asynchronous rotation
v_non-synchronous offset to scale linearly
with the synchronicity factor. <cit.>
critiques this approach, and argues that this axisymmetric velocity
assumption is not valid <cit.>,
and that the problem requires a hydrodynamical analysis.
This is based on two studies that look at the gas dynamics of material
at L1 in non-synchronous donors using polytropic models for the
radiative <cit.> and
convective <cit.>
stars, specifically the shape of the flow field at L1.
They both find that in the linearised and low-asynchronicity case the
velocity field tends to zero as it approaches L1; hence the flow
towards L1 slows down and tends to zero before flowing through L1 and
accelerating again. This is in contrast with our assumption of a transverse
velocity component linearly dependent on the non-synchronicity factor
. A lower velocity offset with the same asynchronous rotation
of the donor leads to stream properties that are more like the
synchronous case.
Because of the initial supersonic velocity relative to the L1 point
(fig:non_synchronous_rotation_schematic) the slow-down can be
accompanied by shocks <cit.>. The heating
of the shock dissipation could change the initial properties of the
stream (e.g. increase the local temperature at L1), and could be
observable as an excess luminosity around L1. With observations of
mass-transferring systems it might be possible to discern whether this
slow-down to L1 actually occurs, and whether mass-stream trajectories
behave like those in synchronous systems even for asynchronous donors.
The aim of this paper was to include the effects of non-synchronous
rotation of the donor on the particles in the mass transfer stream in
the ballistic approach, where we treat the accretor as a point
particle with no physical size. Our method, however, is suitable for
extensions like treating direct impact accretion onto the accretor and
adding properties of the particle during its flight to the
interpolation dataset, and the inclusion of additional physical
effects like post-Newtonian potentials for the accretor
<cit.>, the effects of kinematic
acceleration <cit.> or those of
irradiation by the secondary
<cit.> on the critical surface of
the donor.
§ CONCLUSIONS
Motivated by the lack of publicly available data of stream properties
in systems with non-synchronously rotating donor stars, we hereby
present our results of ballistic trajectory calculations. We calculate
ballistic trajectories with varying mass ratio q, synchronicity
factor f and initial thermal-velocity v_th, and we
assume the accretor radius is infinitely small. We make use of binary
population synthesis to inform us of the ranges of the initial
parameters of the ballistic calculations and to provide further
motivation for the importance of this study and the need for a
publicly accessible data set on ballistic trajectories for
non-synchronous donors.
The main results of our study are summarised below.
* Our binary population calculations with metallicity Z=0.02
indicate that a large fraction of binary systems transfer mass
sub-synchronously: they transfer more mass sub-synchronously
(90.14 per cent) than they spend time doing so (62.44 per cent).
Only a very low fraction of systems transfers mass super-synchronously
(<0.07 per cent of mass transferred and <0.02 per cent of time spent
transferring mass). Moreover, while only a small fraction of time is
spent while the static-tide approximation breaks down, a
non-negligible fraction of mass (12.64 per cent) is transferred
when the donor experiences a dynamic potential. This does, however,
mean that the static potential approximation is valid for the
majority (87.36 per cent) of mass transferred with
sub-synchronously rotating donors.
* Our ballistic trajectory calculations indicate that at low
initial thermal-velocity (cold, v_th = 10^-3.0) there are
clear distinctions between accretion onto the accretor and accretion
onto the donor within the parameter space of q and f,
and no trajectories escape from the system. The radius of closest
approach, r_min, can be as low as 10^-3, indicating a near head-on
stream. A larger region in the (q, f) parameter space
leads to accretion onto the donor for super-synchronous donors
(f > 1 and q < 100) than for sub-synchronous donors
(f < 0.75 and q < 5), but the change in specific
angular momentum of the self-accreting stream is overall lower for
super-synchronous donors. For both sub-synchronous and
super-synchronous donors, self-accretion always acts to synchronise
the donor. We find that intersecting trajectories only occur at
the edge of the transition region between accretion onto the
accretor and accretion onto the donor, and that the self-intersection
regions overlap with those of intersection with other trajectories.
* High initial thermal-velocities (hot, v_th = 10^-0.5)
correspond to a wider mass stream, and lead to a less sharp
transition between the regions of accretion onto the donor and
accretion onto the accretor. Fewer configurations of q and
f, i.e. f > 1.5 and q < 0.2 for
super-synchronous donors and f < 0.75 and q < 0.1 for
sub-synchronous donors, lead to accretion onto the donor because of
the larger initial radial velocity of the stream, which makes it more
difficult to deflect. We find that some trajectories can
escape the system through the Lagrange point behind the accretor
(x > x_acc), especially in systems with a
sub-synchronous donor. Intersecting trajectories again occur at the
edge of the transition region, but for super-synchronous donors
intersection with other trajectories occurs in a larger part of the
parameter space than self-intersecting orbits, i.e. f > 1.25
and q < 0.5 for intersection with other trajectories and
f > 1.7 and q < 0.1 for self-intersection.
Our results are useful for orbital evolution and mass transfer
calculations, including determining the formation and properties of
accretion disks. They can be used in stellar evolution and population
synthesis code, and they are available online upon publication of the
paper.
§ ACKNOWLEDGEMENTS
DDH thanks the UKRI and the University of Surrey for the funding grant
H120341A, and thanks Arman Aryaeipour, Dominika Hubovà, Giovanni
Mirouh, Ondřej Pejcha, Natalie Rees and Mathieu Renzo for useful
discussions. RGI thanks STFC for funding grants
https://gtr.ukri.org/projects?ref=ST
and
ST/L003910/2 (https://gtr.ukri.org/projects?ref=ST/L003910/2).
§ DATA AVAILABILITY
We make our ballistic trajectory integration code, as well as the
interpolation tables for the stream properties and the exploration
data generated through population synthesis available on
https://doi.org/10.5281/zenodo.7007591
upon publication.
§ DESCRIPTION OF OUTPUT DATASETS
The ballistics stream trajectory summary datasets contain the
parameters described in tab:description_table, along with
meta-data regarding indices and global configurations. These datasets
can be interpolated on and implemented in other binary stellar
evolution codes to include the effect explored in our paper and the
subsequent changes in the mass transfer properties like the torque on
the orbit, the fraction of self accretion.
§ LAGRANGE POINTS AS A FUNCTION OF SYNCHRONICITY
With eq:critical_surface we calculate the first three Lagrange
points of the donor for a range of synchronicity factors f and mass
ratios q. We calculate these in the non-inertial reference frame
centred on the donor and transform them to the non-inertial reference
frame centred on the centre of mass of the system
(sec:roche-potent-reduc). In fig:lagrange_point_plot
we show the x-coordinate of the first three Lagrange points for both
of these frames.
Deep Unsupervised Learning Using Spike-Timing-Dependent Plasticity
Sen Lu, Abhronil Sengupta
School of Electrical Engineering and Computer Science
The Pennsylvania State University
University Park, PA 16802, USA
Email: {senlu, sengupta}@psu.edu
============================================================================================================================================================================================
Spike-Timing-Dependent Plasticity (STDP) is an unsupervised learning mechanism for Spiking Neural Networks (SNNs) that has received significant attention from the neuromorphic hardware community. However, scaling such local learning techniques to deeper networks and large-scale tasks has remained elusive. In this work, we investigate a Deep-STDP framework where a convolutional network is trained in tandem with pseudo-labels generated by the STDP clustering process on the network outputs. We achieve 24.56% higher accuracy and 3.5× faster convergence speed at iso-accuracy on a 10-class subset of the Tiny ImageNet dataset in contrast to a k-means clustering approach.
Unsupervised Learning, Spiking Neural Networks, Spike-Timing-Dependent Plasticity
§ INTRODUCTION
With high-quality AI applications permeating our society and daily lives, unsupervised learning is gaining increased attention as the cost of procuring labeled data has been skyrocketing concurrently. The ever-more data-hungry machine learning models usually require a humongous amount of labeled data, sometimes demanding expert knowledge, to achieve state-of-the-art performance today. Since manual annotation requires a huge investment of resources, unsupervised learning is naturally emerging as the best alternative.
One of the most prominent unsupervised learning methods is clustering. The main concept of clustering is to compress the input data (like images in the case of computer vision problems) into lower dimensions such that the low-dimensional features can be clustered into separable groups. The efficiency of the sample clustering process improves with better representations of the compressed features. Since the quality of features depends only on the dimension reduction algorithm, the design and choice of the clustering method are critical to the success of unsupervised learning. However, most real-world tasks are not easily represented as separable low-dimensional points. Earlier attempts include classical PCA reduction before clustering <cit.>, while others attempt to augment more features with “bags of features" <cit.>; though these approaches were mostly constrained to smaller tasks. Recent works like DeepCluster have explored scaling of unsupervised learning approaches by incorporating the k-means clustering algorithm with a standard Convolutional Neural Network (CNN) architecture that can learn complex datasets such as ImageNet without any labels <cit.>. Some works have also proven that pre-training the network, even unsupervised, is beneficial to building the final model in terms of accuracy and convergence speed <cit.>.
The focus of this article, however, is on scaling unsupervised learning approaches in a relatively nascent, bio-plausible category of neural architectures - Spiking Neural Networks (SNNs). SNNs have been gaining momentum for empowering the next generation of edge intelligence platforms due to their significant power, energy, and latency advantages over conventional machine learning models <cit.>. One of the traditional mechanisms of training SNNs is through Spike-Timing-Dependent Plasticity (STDP) where the model weights are updated locally based on firing patterns of connecting neurons inspired by biological measurements <cit.>. STDP based learning rules have been lucrative for the neuromorphic hardware community where various emerging nanoelectronic devices have been demonstrated to mimic STDP based learning rules through their intrinsic physics, thereby leading to compact and resource-efficient on-chip learning platforms <cit.>. Recent works have also demonstrated that unsupervised STDP can serve as an energy-efficient hardware alternative to conventional clustering algorithms <cit.>.
However, scaling STDP trained SNNs to deeper networks and complex tasks has remained a daunting task. Leveraging insights from hybrid approaches to unsupervised deep learning like DeepCluster <cit.>, we aim to address this missing gap to enable deep unsupervised learning for SNNs. Further, while techniques like DeepCluster have shown promise to enable unsupervised learning at scale, the impact of the choice of the clustering method on the learning capability and computational requirements remains unexplored.
The main contributions of the paper can therefore be summarized as follows:
(i) We propose a hybrid SNN-compatible unsupervised training approach for deep convolutional networks and demonstrate its performance on complex recognition tasks going beyond toy datasets like MNIST.
(ii) We demonstrate the efficacy of STDP enabled deep clustering of visual features over state-of-the-art k-means clustering approach and provide justification through empirical analysis by using statistical tools, namely Fisher Information Matrix Trace, to prove that STDP learns faster and more accurately.
(iii) We also provide preliminary computational cost estimate comparisons of the STDP enabled Deep Clustering framework against conventional clustering methods and demonstrate the potential of significant energy savings.
§ RELATED WORKS
Deep Learning: Unsupervised learning of deep neural networks is a widely studied area in the machine learning community <cit.>. It can be roughly categorized into two main methods, namely clustering and association. Among many clustering algorithms, k-means <cit.>, or any variant of it <cit.>, is the most well-known and widely used method that groups features according to its similarities. Its applications can be found in practice across different domains <cit.>. Other approaches focus on associations to learn data representations which are described by a set of parameters using architectures such as autoencoders <cit.> (where the data distribution is learnt by encoding features in latent space).
In more recent works, such unsupervised learning methods have been applied to larger and more complex datasets <cit.>, making them applicable to more difficult problems. Further, recent advances in generative models have also provided opportunities at mapping unlabeled data to its underlying distribution, especially in the domain of image generation using Generative Adversarial Network (GAN) <cit.> with reconstruction loss directly <cit.> or using the auto-encoded latent space <cit.>. Dumoulin et al.'s recent effort at combining GAN and auto-encoder has demonstrated even better performance <cit.>.
Bio-Plausible Learning: Visual pattern recognition is also of great interest in the neuromorphic community <cit.>. In addition to standard supervised vision tasks, SNNs offer a unique solution to unsupervised learning - the STDP learning method <cit.>. In this scheme, the neural weight updates depend only on the temporal correlation between spikes without any guiding signals, which makes it essentially unsupervised. While it offers a bio-plausible solution, it is rarely used beyond MNIST-level tasks<cit.> and primarily used for single-layered networks. Going beyond conventional STDP based learning, Lee et al. <cit.> proposed an STDP-based pre-training scheme for deep networks that greedily trained the convolutional layers' weights, locally using STDP, one layer at a time but limited only to MNIST. Similarly, in Ferre et al.'s work <cit.>, the convolutional layers were trained on CIFAR10 and STL-10 with simplified STDP, but the layers were also trained individually with complex mechanisms. Further, their works are also limited to shallow convolutional architectures.
Our work explores a hybrid algorithm design based on a merger of the above two approaches. Our proposed framework provides a global training signal for the CNN using a straightforward and end-to-end STDP-based SNN implementation. We demonstrate significant accuracy improvement and computation savings for VGG-15 architecture on the Tiny ImageNet dataset in contrast to state-of-the-art deep clustering approaches.
§ PRELIMINARIES
§.§ Deep Clustering with k-means Algorithm
Deep Clustering <cit.> enabled unsupervised training of visual features primarily relies on the ability of clustering algorithms like the k-means to group together similar data points. k-means is a popular unsupervised algorithm for separating data points into distinct clusters. Given a user-specified value of k, the algorithm will find k clusters such that each data point is assigned to its nearest cluster. The vanilla implementation of the k-means algorithm iteratively calculates the Euclidean distance between points for comparison and updates the cluster centroids to fit the given distribution.
Deep Clustering utilizes the traditional CNN architecture to obtain the features to be used for clustering. The reason behind this feature reduction choice hinges upon the fact that a randomly initialized and untrained CNN outperforms a simple multilayer perceptron network by a considerable margin <cit.>. Driven by this observation, the main idea behind this framework is to bootstrap the better-than-chance signal to teach the network and learn the features. This teaching signal is transformed into a `pseudo-label' so that the network can learn from it. The `pseudo-labels' which may or may not be the same as the ground truth labels reflect the direction that the network weights should be updated. By doing so, the feature extraction layers may become slightly better at recognizing certain features and thereby producing more representative features. The improved features can ideally be more separable, thereby generating higher quality `pseudo-labels'. By repeating this process iteratively, the CNN should ideally converge by learning the `pseudo-labels' <cit.>.
Note that the CNN layers used for feature-reduction purposes can be converted into SNN layers with various methods as shown in many recent studies <cit.>, or trained from scratch using backpropagation through time (BPTT) <cit.> which opens up the potential for adopting the entire feature-reduction in a low-power neuromorphic setting. In this work, we therefore do not focus on the CNN-SNN conversion and train it by backpropagation without unrolling through time.
§.§ STDP Enabled Neuromorphic Clustering
STDP is an unsupervised learning mechanism that learns or unlearns neurons' synaptic connections based on spike timings <cit.>. In particular, the synaptic connection is strengthened when the post-synaptic neuron fires after the pre-synaptic neuron, and the connection is weakened if the post-synaptic neuron fires before the pre-synaptic neuron. The intuition behind STDP follows Hebbian learning philosophy where neurons that are activated together and sequentially are more spatio-temporally correlated and thus form a pattern, and vice versa. This learning rule enables the encoding of complex input distributions temporally without the need for guiding signals such as the label. The weights of the neuronal synapses are updated based on spike timings <cit.> as follows:
Δ w =
A_+ e^(-Δ t/β_+), if Δ t > 0
-A_- e^(Δ t/β_-), if Δ t < 0
where, w is the weight, A_+/- are the learning rates, Δ t is the exact time difference between post-neuron and pre-neuron firing and β_+/- are the time-constants for the learning windows. In practical implementations, the exact spike timing is usually replaced with a spike trace (see Section IV-B) that decays over time to reduce memory storage for STDP implementation <cit.>.
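As a minimal sketch of the pairwise update rule above (the learning rates and window time-constants below are illustrative placeholders, not the hyper-parameters used in our experiments):

```python
import numpy as np

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, beta_plus=20.0, beta_minus=20.0):
    """Weight change for one pre/post spike pair with dt = t_post - t_pre:
    potentiation for dt > 0, depression for dt < 0, no update for dt = 0."""
    if dt > 0:
        return A_plus * np.exp(-dt / beta_plus)
    if dt < 0:
        return -A_minus * np.exp(dt / beta_minus)
    return 0.0
```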
STDP training is predominantly explored in Winner-Take-All networks in literature which consists of an excitatory layer of neurons with recurrent inhibitory connections <cit.> (see “STDP Enabled SNN for Clustering" sub-panel in Fig. <ref>). Such connections create a mechanism called `lateral inhibition' where activated neurons inhibit other neurons' activities and therefore assist the activated neurons to accentuate the learning process of its weights. To prevent any neuron from dominating the firing pattern, the second key mechanism is `homeostasis' which balances the overall activities of the neurons. Homeostasis prevents neurons from runaway excitation or total quiescence. One popular way to achieve this is through adaptive and decaying thresholding in which after every firing event, the firing threshold increases such that the firing neuron requires higher membrane potential to fire again in the future. Consequently, this will provide opportunities for other neurons in the network to fire and learn the synaptic weights. The critical balance of these two mechanisms ensures stable learning of the SNN. Fig. <ref> shows an example of STDP-trained weights of the excitatory neuron layer of an SNN where representative digit shapes are learnt without any label information for the MNIST dataset <cit.>. Each neuron in the network represents a cluster. By running inferences on the STDP network, we can cluster the inputs according to their corresponding most activated neuron. The learnt weights of each neuron is equivalent to the centroid of the cluster represented by that neuron.
§ METHODS
§.§ Proposed Deep-STDP Framework
As mentioned previously, the convolutional layers of the network compress the input images to a lower dimensional feature space as a one-dimensional vector. In abstract terms, the framework solves the following optimization problem <cit.>:
min_{w ∈ ℝ^{d × k}} (1/N) ∑_{n=1}^{N} min_{y_n ∈ {0,1}^k} || f_θ(img_n) - w_{y_n} ||_1
such that y_n^⊤ 1_k = 1
where, N is the total number of training samples, y_n is the n-th optimal neuron assignment encoded as a one-hot vector, f_θ is the ConvNet forward pass output parameterized by its weights θ, img_n is the n-th input sample, w_y_n is the STDP-learnt synaptic weight map of the most activated neuron, d is the feature dimension of the ConvNet output and k is the number of neurons/clusters in the network. By minimizing the difference between the weights of the neurons and the patterns of the features, we can obtain an SNN that generates optimal assignments of y_n parameterized by weights w, which act as the pseudo-labels for our algorithm.
With the pseudo-labels, the network training can be accomplished through the standard minimization problem of network loss which can be described by:
min_{ρ, θ} (1/N) ∑_{n=1}^{N} ℒ( g_ρ(f_θ(img_n)), y^*_n )
where, θ, ρ are parameters of the ConvNet f_θ (·) and classifier g_ρ (·) respectively, ℒ(·) is the loss function, img_n again is the n-th image input, y^*_n is the n-th optimal pseudo-label for this iteration.
However, SNNs only accept discrete spikes as input and therefore the ConvNet feature outputs in floating-point representation (after appropriate pre-processing like PCA reduction and l_2-normalization <cit.>) are subsequently rate encoded by a Poisson spike train generator, where the feature values are used as the Poisson distribution rate and sampled from the respective distribution.
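The rate-encoding step can be sketched as follows; the number of timesteps T is an illustrative choice, and the signed spikes anticipate the positive/negative weight decomposition introduced in Sec. IV-B:

```python
import torch

def poisson_encode(features, T=250):
    """Rate-encode a pre-processed 1-D feature tensor as a (T, d) spike train.
    |feature| sets the per-timestep firing probability; the sign is kept so
    that negative feature values produce negative spikes."""
    p = features.abs().clamp(max=1.0)                      # spike probability
    spikes = (torch.rand(T, *features.shape) < p).float()  # Bernoulli sampling
    return spikes * features.sign()                        # +1 / -1 spikes
```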
At the end of the pseudo-label assignment, the STDP enabled SNN resets for the next iteration. This is intuitive since after the ConvNet weight update process, the feature distribution gets shifted and hence a new set of neuron/cluster weights should be learnt by the STDP framework. Algorithms <ref>-<ref> describe the overall structure of the proposed Deep-STDP framework shown in Fig. <ref>.
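In outline, the loop of Algorithms <ref>-<ref> reads as follows; `snn.cluster`, `snn.reset`, `preprocess` and `poisson_encode` are placeholder names for the steps described above, not BindsNet API:

```python
import torch

def deep_stdp_train(convnet, classifier, snn, loader, optimizer, loss_fn, epochs):
    for epoch in range(epochs):
        # (1) cluster ConvNet features with the STDP-enabled SNN
        pseudo_labels = []
        for imgs, _ in loader:                   # ground-truth labels unused
            with torch.no_grad():
                feats = preprocess(convnet(imgs))  # PCA + l2-normalisation
            pseudo_labels.append(snn.cluster(poisson_encode(feats)))
        # (2) train ConvNet + classifier against the pseudo-labels
        for (imgs, _), y in zip(loader, pseudo_labels):  # non-shuffling loader
            loss = loss_fn(classifier(convnet(imgs)), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        snn.reset()                              # fresh clusters next epoch
```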
§.§ STDP Enabled SNN for Clustering
Clustering in the SNN is mediated through the temporal dynamics of Leaky-Integrate-Fire neurons in the excitatory layer. In the absence of any spiking inputs, the membrane potential of neurons in the excitatory layer is represented by V_exc at timestep t, or simply V_exc^t. It initializes with V_exc^t=0 = V_rest and decays as,
V_exc^t = V_rest + exp(-1/V_decay) (V_exc^t-1 - V_rest)
where, V_rest is the resting potential and V_decay is the potential decay constant.
Prior works <cit.> on using SNNs for clustering have mainly dealt with simple datasets without negative-valued features. This is in compliance with the nature of STDP learning for positive valued spikes. However, in our scenario, we consider negative valued spiking inputs as well in order to rate encode the negative features provided as output of the ConvNet. In order to enable STDP learning for negative inputs, we decompose the weight map into positive and negative components to learn positive and negative spike patterns respectively. Therefore, in presence of spikes, the excitatory layer's neuron membrane potential dynamics is updated as,
V_exc^t ← V_exc^t + s^pre_+ · w_+ + s^pre_- · w_-
where, the membrane potential is denoted by V^t_exc at timestep t, and the input spikes and pre-synaptic weights are represented by s^pre and w respectively (with their positive and negative counterparts). It is worth mentioning here that pre-neurons refer to the input neurons and post-neurons refer to the excitatory layer neurons since the synapses joining them are learnt by STDP.
Further, there is a refractory period L parameter for every neuron which will only allow execution of Eq. <ref> and <ref> if the refractory counter, l, equals `0'. A spike will be generated when the membrane potential at the current timestep is greater than the membrane threshold:
s =
1, if (V^t_exc > V_thr + ϵ) and (l = 0)
0, otherwise
where, V_thr is the membrane threshold to fire a spike, ϵ is the adaptive threshold parameter, and l is the refractory period counter, which is reset to L upon a firing event and decays by 1 otherwise (thereby preventing neurons from firing for L timesteps after a spike). V^t_exc resets to V_reset after firing a spike. The adaptive threshold parameter acts as a balancer to prevent any neuron from being over-active (homeostasis): it is incremented by the parameter α upon a firing event and otherwise decays exponentially at every timestep similar to Eq. <ref>: ϵ ← exp(-1/ϵ_decay) ϵ. Every spike generated by a post-neuron triggers a membrane potential decrement by an amount w_inh for all the other neurons except itself.
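A single simulation step of these excitatory-layer dynamics may be sketched as below; all constants are illustrative placeholders rather than the hyper-parameters of Table <ref>:

```python
import torch

def lif_step(V, eps, refrac, s_in_pos, s_in_neg, w_pos, w_neg,
             V_rest=-65.0, V_reset=-60.0, V_thr=-52.0, V_decay=100.0,
             eps_decay=1e7, alpha=0.05, L=5, w_inh=17.5):
    """One timestep of the excitatory layer: leak, integrate, fire,
    adaptive threshold (homeostasis) and lateral inhibition."""
    V = V_rest + torch.exp(torch.tensor(-1.0 / V_decay)) * (V - V_rest)  # leak
    active = (refrac == 0).float()
    V = V + active * (s_in_pos @ w_pos + s_in_neg @ w_neg)  # integrate input
    s = (V > V_thr + eps).float() * active                  # fire
    V = (1 - s) * V + s * V_reset                           # reset on spike
    refrac = s * L + (1 - s) * (refrac - 1).clamp(min=0)    # refractory count
    eps = torch.exp(torch.tensor(-1.0 / eps_decay)) * eps + alpha * s
    V = V - w_inh * (s.sum() - s)                           # lateral inhibition
    return V, eps, refrac, s
```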
In the context of our implementation, we used the spike trace τ to represent the temporal distance between two spikes. The spike trace value peaks at τ_o upon firing and decays exponentially as time elapses: τ ← exp(-1/τ_decay) τ. The weight updates are similarly separated into positive and negative parts.
Pre-synaptic update:
Δ w_+ = -η^pre (s^pre_+ * τ^post),    Δ w_- = η^pre (s^pre_- * τ^post)
Post-synaptic update:
Δ w_+ = η^post (τ^pre_+ * s^post),    Δ w_- = η^post (τ^pre_- * s^post)
where, Δ w are the weight updates, η^pre, η^post are the learning rates for pre- and post-synaptic updates respectively, τ is the spike trace, and s is the spiking pattern. Superscript (^pre), (^post) indicates whether the trace or spike is from pre- or post-synaptic neuron respectively, and the subscript (_+), (_-) indicates whether the operation is for positive or negative input spikes. Note that the negative s^pre_- can be flipped easily by the distributive property of matrix multiplication.
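These updates translate into two outer products per timestep; a sketch (the learning rates are illustrative):

```python
import torch

def stdp_update(w_pos, w_neg, s_pre_pos, s_pre_neg, s_post,
                tr_pre_pos, tr_pre_neg, tr_post,
                eta_pre=1e-4, eta_post=1e-2):
    """Trace-based realisation of the pre-/post-synaptic updates above;
    each outer product pairs every pre-synaptic spike (or trace) with
    every post-synaptic trace (or spike)."""
    # pre-synaptic events
    w_pos = w_pos - eta_pre * torch.outer(s_pre_pos, tr_post)
    w_neg = w_neg + eta_pre * torch.outer(s_pre_neg, tr_post)
    # post-synaptic events
    w_pos = w_pos + eta_post * torch.outer(tr_pre_pos, s_post)
    w_neg = w_neg + eta_post * torch.outer(tr_pre_neg, s_post)
    return w_pos, w_neg
```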
§ EXPERIMENTS AND RESULTS
§.§ Datasets and Implementation
The proposed method was evaluated on the Tiny ImageNet dataset, which is a center-cropped subset of the large-scale ImageNet dataset <cit.>. Unlike the ImageNet 2012 dataset, which contains 1000 object categories, the Tiny ImageNet dataset comprises only 200 categories. Due to computation constraints, we selected the first 10 classes from the Tiny ImageNet dataset by the naming order and considered both the training and testing sets for those corresponding classes in this work. All images were normalized to zero mean and unit variance and shuffled to avoid any bias. We chose VGG15 as the baseline network architecture with randomly initialized weights. Simulations were conducted using the PyTorch machine learning library and a modified version of the BindsNet toolbox <cit.> as the base platform for the experiments. The results reported for the DeepCluster framework <cit.> were obtained without any modification to the open-source codebase associated with the work, and its hyperparameters were unchanged unless mentioned in this work. The ConvNet learning rate was set to 1e-2 and the number of clusters was set to 10 times the number of classes (recommended as optimal in Ref. <cit.> and also found optimal in the Deep-STDP framework). The training was performed for 200 epochs. All results were obtained on 2 GTX 2080Ti GPUs and the associated hyper-parameters used for the Deep-STDP framework can be found in Table <ref>.
Numerous cluster re-assignment frequencies were explored and `1' (`2') was found to be optimal for Deep-STDP (DeepCluster), i.e. the pseudo-labels were generated by passing the entire dataset once (twice) every epoch. Note that this frequency represents the number of dataset iterations per epoch. Following the evaluation method proposed by Zhang et al. <cit.>, we froze all network parameters and trained a linear layer at the output to evaluate the efficiency of the model at capturing the distribution of images in the training set, as well as its usage as a pre-trained model for general use cases. We fixed the random seeds in each experiment such that the clustering process is deterministic for a particular run. To avoid loss of generality, all accuracy results reported here represent the average value over 5 independent runs with different sets of random seeds.
§.§ Evaluation Metrics
§.§.§ Fisher Information
The Fisher information (FI) quantitatively measures the amount of information retained in a statistical model after being trained on a given data distribution <cit.>. Many prior works have used this metric to measure different aspects of deep learning models including SNN models <cit.>. Unlike prior works, we use pseudo-labels to generate FI instead of ground-truth labels. FI reflects the impact of weight changes on the ConvNet output. If the FI of model parameters is small, we can conclude that the model's learning efficiency is poor since the weights can be pruned without affecting the output, and vice versa. Therefore, this metric implicitly measures the quality of the pseudo-labels.
Let us consider that the network tries to learn y from a distribution p parametrized by a set of weights θ. Given samples x, the posterior distribution is p_θ(y|x).
The Fisher information matrix (FIM) is defined as:
F=𝔼_x∼ X𝔼_y∼ p_θ(y|x) [∇_θlog p_θ(y|x) ∇_θlog p_θ(y|x)^T]
where, X is the empirical distribution of the actual dataset. However, the exact FIM is usually too large to be computed directly and therefore the value is usually approximated by its trace, which is given by:
Tr(F)=𝔼_x∼ X𝔼_y∼ p_θ(y|x) [||∇_θlog p_θ(y|x)||^2]
in which the expectations can be replaced by the averaged observation from the dataset of N samples:
Tr(F) = (1/N) ∑_{k=1}^{N} ||∇_θ log p_θ(y_k|x_k)||^2_2
where, Tr(F) is the trace of FIM and ∇_θ denotes the gradient with respect to the model parameters. We follow the same implementation as the algorithm specified in Ref. <cit.>.
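A Monte-Carlo estimate of Tr(F) can be sketched as follows; note that y is sampled from the model's own predictive distribution, matching the inner expectation above (`model` and `dataset` are generic placeholders):

```python
import torch
import torch.nn.functional as F

def fim_trace(model, dataset, n_samples=1000):
    """Average squared gradient norm of log p_theta(y|x) over sampled labels."""
    n = min(n_samples, len(dataset))
    total = 0.0
    for i in range(n):
        x, _ = dataset[i]                       # ground-truth label is unused
        logits = model(x.unsqueeze(0))
        y = torch.multinomial(F.softmax(logits, dim=1), 1).item()
        logp = F.log_softmax(logits, dim=1)[0, y]   # log p_theta(y|x)
        model.zero_grad()
        logp.backward()
        total += sum((p.grad ** 2).sum().item()
                     for p in model.parameters() if p.grad is not None)
    return total / n
```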
§.§.§ Normalized Mutual Information
Further, following the Deep Clustering work <cit.>, we also measured the Normalized Mutual Information (NMI) metrics to evaluate mutual information between two consecutive assignments of the STDP-enabled SNN, given by Eq. <ref>.
NMI(y^p, y^p-1) = I(y^p; y^p-1) / √( H(y^p) H(y^p-1) )
where, y^p and y^p-1 are the label assignments for epochs p and p-1 respectively, I(·) is the mutual information function, and H(·) is the entropy function.
Since the assignments y^p, y^p-1 are consecutive and are generated from the same inputs, a high NMI value indicates a high correlation between the two sets of assignments as well as stable assignments of the pseudo-labels.
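A direct implementation of the NMI above (sklearn's normalized_mutual_info_score computes the same quantity when its average_method is set to 'geometric'):

```python
import numpy as np

def nmi(y_prev, y_curr):
    """NMI between two integer label arrays of equal length."""
    eps = 1e-12
    joint = np.zeros((y_prev.max() + 1, y_curr.max() + 1))
    for a, b in zip(y_prev, y_curr):
        joint[a, b] += 1.0
    joint /= joint.sum()                       # empirical joint distribution
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mi = (joint * np.log((joint + eps) / (np.outer(pa, pb) + eps))).sum()
    ent = lambda p: -(p * np.log(p + eps)).sum()
    return mi / np.sqrt(ent(pa) * ent(pb))
```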
§.§ Performance Evaluation
Fig. <ref> demonstrates that Deep-STDP based unsupervised feature learning significantly outperforms DeepCluster approach based on k-means clustering. The superior quality of pseudo-labels generated by Deep-STDP is also explained empirically by the FIM trace variation over the learning epochs (see Fig. <ref>). While both algorithms perform similarly during the initial stages, the accuracy and FIM trace start improving significantly for the Deep-STDP approach over subsequent epochs. Performance evaluation metrics (NMI, FIM and Accuracy) for the two approaches at the end of the training process are tabulated in Table <ref>.
In addition to training an additional linear layer for numerical performance analysis, we also visualized the convolutional filter activations of the CNN trained using our proposed framework. We can observe from Fig. <ref> that the network forms distinct filters specialized for completely different visual patterns in different layers without using any ground truth label information. On the other hand, similar visualization performed on the DeepCluster trained network yielded similar simple patterns in the shallow layers without any complex patterns represented in the deeper layers, further substantiating the efficacy of the Deep-STDP approach.
§.§ Computational Cost Estimation
While a detailed system level hardware analysis for the two approaches is outside the scope of this work, we provide motivation for neuromorphic deep clustering by performing a comparative analysis of the computational cost of the two approaches.
§.§.§ Cost of k-means Clustering
To find the new centroid of a particular cluster, the algorithm calculates the averaged center of all the data points assigned to that cluster using the following equation:
c_j = (1/|C_j|) ∑_{x_i ∈ C_j} x_i
where, c_j is the averaged coordinates of the j-th centroid, |C_j| is the number of data points assigned to that corresponding cluster, and x_i is the i-th data point. Subsequently, the algorithm calculates the Euclidean distance between every data point and every centroid and assigns each data point to the cluster with the shortest distance to its centroid. The goal is to solve the optimization problem:
argmin_C ∑_{j=1}^{k} ∑_{i=1}^{|C_j|} ||x_i - c_j||^2_2
where, argmin_C solves for the optimal centroids and k is the total number of clusters.
The above two calculations will be repeated until convergence is achieved or until a maximum number of iterations is reached. Hence, the number of mathematical operations can be summarized as follows:
* Clustering Step: Compute the distance ||x_i - c_j||^2_2 from every point to every centroid and assign to k clusters
* Update Step: Re-center the centroids in new clusters by averaging over |C_j| for all clusters
* Repeat it times
To calculate the distance of a point x_i from c_j:
||x_i - c_j||_2 = √( ∑_{m=1}^{d} (x_im - c_jm)^2 )
where, d = 256 is the number of dimensions in the feature.
Hence, the number of multiplications (the number of squaring operations) in order to calculate the Euclidean distance is:
[k· d] · it · N
and the number of addition operations involved is:
[k· (2d-1) + d] · it · N
where, k is the number of clusters, N is the number of training samples, and it is the number of maximum iterations in the k-means algorithm. In Eq. <ref>, the k · (d-1) component arises from the summation of individual distance along each dimension while another k · d component arises from the subtraction operation for distance calculation along each dimension. The last d component arises from updating the new cluster coordinates (which in the worst case will iterate through all data points, see Eq. <ref>). Given the cost of float ADD operation is 0.9pJ and float MULT operation is 3.7pJ in 45nm CMOS process <cit.>, we estimated the total computational cost in the clustering process for every training epoch to be 14.1mJ (considering it=20). Considering 175 epochs of DeepCluster training to reach peak accuracy, the total computational cost is 2467.5mJ.
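The per-epoch figure can be reproduced directly; N = 5000 (10 classes with 500 training images each) and k = 100 (10 times the number of classes) are assumptions inferred from the setup of Sec. V-A:

```python
# k-means cost estimate; energy-per-op figures are the 45nm CMOS values above
N, k, d, it = 5000, 100, 256, 20
E_ADD, E_MULT = 0.9e-12, 3.7e-12                    # joules per operation
mults = k * d * it * N                              # multiplication count
adds = (k * (2 * d - 1) + d) * it * N               # addition count
per_epoch = mults * E_MULT + adds * E_ADD
print(f"{per_epoch * 1e3:.1f} mJ per epoch")        # ~14.1 mJ
print(f"{175 * per_epoch:.3f} J over 175 epochs")   # ~2.47 J, as in the text
```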
§.§.§ Cost of STDP Clustering
In the STDP based clustering approach, the computations can be summarized into the following parts:
* Feedforward Step: Integrate input Poisson spike train through the synapses connecting input and excitatory layer
* Learning Step: Updating the excitatory layer weights based on pre- and post-synaptic spiking activities
* Inhibition Step: Updating the neuron membrane potential based on lateral inhibitory connections
* Repeat T times
Although multiplication symbols were used in Algo. <ref>, computation with spike signals can always be reduced to summation operation since the spike magnitude is always `0' or `1' <cit.>. Further, the addition operation is conditional upon the receipt of spikes, thereby reducing the computation cost by a significant margin for a highly sparse spike train. For instance, the average spiking probability per neuron per timestep in the excitatory layer of the network is only 0.19%. Hence, the total number of addition operations can be summarized as:
[ p_input · |w^exc| + (p_input + p_exc) · |w^exc| + p_exc · |w^inh| ] · T · N
where, p_input,p_exc are the average (per neuron per timestep averaged over the entire training process) spiking probability of the input and excitatory neuronal layer respectively, |w^exc| is the number of synaptic connections between the input and excitatory layer, either |w_+| or |w_-| since the input can be either positive or negative, |w^inh| is the total number of inhibitory connections in the network, T is the number of timesteps used for the STDP training process, and N is the number of training samples.
It is worth mentioning here that we primarily focus on the computationally expensive portions of both algorithms for these calculations. In Eq. <ref>, the p_input· |w^exc| component arises from the feedforward propagation of input spikes, (p_input + p_exc)· |w^exc| component arises from the learning step and p_exc· |w^inh| arises from the inhibition step. Therefore, the total computational cost for Deep-STDP per epoch is 55.34mJ and considering 50 epochs of training (iso-accuracy comparison as shown in Fig. <ref>), the total energy consumption is estimated to be 2767.2mJ - comparable to the DeepCluster framework.
§.§.§ System Level Cost Comparison:
We note that the STDP based framework does not change the computational load of the clustering framework significantly. However, the computational load at the system level will be also dependent on the computational load for feature extraction in the ConvNet. For instance, Ref. <cit.> mentions a third of the time during a forward pass is attributed to the clustering algorithm while the remaining is attributed to the deep ConvNet feature extraction. Therefore, we expect the Deep-STDP based framework to be significantly more resource efficient than the DeepCluster based approach due to 3.5× reduction in the number of training epochs - equivalently reducing the ConvNet feature extraction computational cost.
§ CONCLUSIONS
In conclusion, we proposed an end-to-end hybrid unsupervised framework for training deep CNNs that can be potentially implemented in a neuromorphic setting. We demonstrated significant benefits in terms of accuracy and computational cost by leveraging bio-plausible clustering techniques for deep unsupervised learning of visual features and substantiated our claims by empirical analysis through statistical tools like Fisher Information and Normalized Mutual Information. Our work significantly outperforms prior attempts at scaling bio-inspired learning rules like STDP to deeper networks and complex datasets. Future work can focus on further scaling of the approach and delving deeper into the mathematical underpinnings of the superior performance of STDP as a deep clustering mechanism.
§ ACKNOWLEDGMENTS
This material is based upon work supported in part by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under Award Number #DE-SC0021562 and the National Science Foundation grant CCF #1955815 and by Oracle Cloud credits and related resources provided by the Oracle for Research program.
ding2004k
C. Ding and X. He, “K-means clustering via principal component analysis,” in
Proceedings of the twenty-first international conference on Machine
learning, 2004, p. 29.
csurka2004visual
G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray, “Visual
categorization with bags of keypoints,” in Workshop on statistical
learning in computer vision, ECCV, vol. 1, no. 1-22. Prague, 2004, pp. 1–2.
caron2018deep
M. Caron, P. Bojanowski, A. Joulin, and M. Douze, “Deep clustering for
unsupervised learning of visual features,” in Proceedings of the
European conference on computer vision (ECCV), 2018, pp. 132–149.
radford2015unsupervised
A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning
with deep convolutional generative adversarial networks,” arXiv
preprint arXiv:1511.06434, 2015.
oord2018representation
A. v. d. Oord, Y. Li, and O. Vinyals, “Representation learning with
contrastive predictive coding,” arXiv preprint arXiv:1807.03748,
2018.
radford2019language
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al.,
“Language models are unsupervised multitask learners,” OpenAI blog,
vol. 1, no. 8, p. 9, 2019.
sengupta2019going
A. Sengupta, Y. Ye, R. Wang, C. Liu, and K. Roy, “Going deeper in spiking
neural networks: Vgg and residual architectures,” Frontiers in
neuroscience, vol. 13, p. 95, 2019.
davies2021advancing
M. Davies, A. Wild, G. Orchard, Y. Sandamirskaya, G. A. F. Guerra, P. Joshi,
P. Plank, and S. R. Risbud, “Advancing neuromorphic computing with loihi: A
survey of results and outlook,” Proceedings of the IEEE, vol. 109,
no. 5, pp. 911–934, 2021.
diehl2015unsupervised
P. Diehl and M. Cook, “Unsupervised learning of digit recognition using
spike-timing-dependent plasticity,” Frontiers in Computational
Neuroscience, vol. 9, p. 99, 2015.
saha2021intrinsic
A. Saha, A. Islam, Z. Zhao, S. Deng, K. Ni, and A. Sengupta, “Intrinsic
synaptic plasticity of ferroelectric field effect transistors for online
learning,” Applied Physics Letters, vol. 119, no. 13, 2021.
frady2020neuromorphic
E. P. Frady, G. Orchard, D. Florey, N. Imam, R. Liu, J. Mishra, J. Tse,
A. Wild, F. T. Sommer, and M. Davies, “Neuromorphic nearest neighbor search
using intel's pohoiki springs,” in Proceedings of the neuro-inspired
computational elements workshop, 2020, pp. 1–10.
bengio2012unsupervised
Y. Bengio, A. C. Courville, and P. Vincent, “Unsupervised feature learning and
deep learning: A review and new perspectives,” CoRR, abs/1206.5538,
vol. 1, no. 2665, p. 2012, 2012.
dike2018unsupervised
H. U. Dike, Y. Zhou, K. K. Deveerasetty, and Q. Wu, “Unsupervised learning
based on artificial neural network: A review,” in 2018 IEEE
International Conference on Cyborg and Bionic Systems (CBS). IEEE, 2018, pp. 322–327.
lloyd1982least
S. Lloyd, “Least squares quantization in pcm,” IEEE transactions on
information theory, vol. 28, no. 2, pp. 129–137, 1982.
krishna1999genetic
K. Krishna and M. N. Murty, “Genetic k-means algorithm,” IEEE
Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
vol. 29, no. 3, pp. 433–439, 1999.
arthur2007k
D. Arthur and S. Vassilvitskii, “K-means++ the advantages of careful
seeding,” in Proceedings of the eighteenth annual ACM-SIAM symposium
on Discrete algorithms, 2007, pp. 1027–1035.
ng2006medical
H. Ng, S. Ong, K. Foong, P.-S. Goh, and W. Nowinski, “Medical image
segmentation using k-means clustering and improved watershed algorithm,” in
2006 IEEE southwest symposium on image analysis and
interpretation. IEEE, 2006, pp. 61–65.
kim2008recommender
K.-j. Kim and H. Ahn, “A recommender system using ga k-means clustering in an
online shopping market,” Expert systems with applications, vol. 34,
no. 2, pp. 1200–1209, 2008.
rumelhart1986learning
D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations
by back-propagating errors,” nature, vol. 323, no. 6088, pp.
533–536, 1986.
hinton2006reducing
G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data
with neural networks,” science, vol. 313, no. 5786, pp. 504–507,
2006.
rombach2022high
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, “High-resolution
image synthesis with latent diffusion models,” in Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.
10 684–10 695.
bojanowski2017optimizing
P. Bojanowski, A. Joulin, D. Lopez-Paz, and A. Szlam, “Optimizing the latent
space of generative networks,” arXiv preprint arXiv:1707.05776, 2017.
kingma2013auto
D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” arXiv
preprint arXiv:1312.6114, 2013.
masci2011stacked
J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked
convolutional auto-encoders for hierarchical feature extraction,” in
Artificial Neural Networks and Machine Learning–ICANN 2011: 21st
International Conference on Artificial Neural Networks, Espoo, Finland, June
14-17, 2011, Proceedings, Part I 21. Springer, 2011, pp. 52–59.
diehl2015fast
P. U. Diehl, D. Neil, J. Binas, M. Cook, S.-C. Liu, and M. Pfeiffer,
“Fast-classifying, high-accuracy spiking deep networks through weight and
threshold balancing,” in 2015 International joint conference on neural
networks (IJCNN). IEEE, 2015, pp. 1–8.
neftci2014event
E. Neftci, S. Das, B. Pedroni, K. Kreutz-Delgado, and G. Cauwenberghs,
“Event-driven contrastive divergence for spiking neuromorphic systems,”
Frontiers in neuroscience, vol. 7, p. 272, 2014.
lee2018pretrain
C. Lee, P. Panda, G. Srinivasan, and K. Roy, “Training deep spiking
convolutional neural networks with stdp-based unsupervised pre-training
followed by supervised fine-tuning,” Frontiers in Neuroscience,
vol. 12, 2018.
liu2019stdpLearning
D. Liu and S. Yue, “Event-driven continuous stdp learning with deep structure
for visual pattern recognition,” IEEE Transactions on Cybernetics,
vol. 49, no. 4, pp. 1377–1390, 2019.
ferre2018unsupervised
P. Ferré, F. Mamalet, and S. J. Thorpe, “Unsupervised feature learning
with winner-takes-all based stdp,” Frontiers in computational
neuroscience, vol. 12, p. 24, 2018.
noroozi2016unsupervised
M. Noroozi and P. Favaro, “Unsupervised learning of visual representations by
solving jigsaw puzzles,” in Computer Vision–ECCV 2016: 14th European
Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings,
Part VI. Springer, 2016, pp. 69–84.
midya2019artificial
R. Midya, Z. Wang, S. Asapu, S. Joshi, Y. Li, Y. Zhuo, W. Song, H. Jiang,
N. Upadhay, M. Rao et al., “Artificial neural network (ann) to
spiking neural network (snn) converters based on diffusive memristors,”
Advanced Electronic Materials, vol. 5, no. 9, p. 1900060, 2019.
lu2020exploring
S. Lu and A. Sengupta, “Exploring the connection between binary and spiking
neural networks,” Frontiers in neuroscience, vol. 14, 2020.
lu2022neuroevolution
——, “Neuroevolution guided hybrid spiking neural network training,”
Frontiers in neuroscience, vol. 16, 2022.
gao2023high
H. Gao, J. He, H. Wang, T. Wang, Z. Zhong, J. Yu, Y. Wang, M. Tian, and C. Shi,
“High-accuracy deep ann-to-snn conversion using quantization-aware training
framework and calcium-gated bipolar leaky integrate and fire neuron,”
Frontiers in Neuroscience, vol. 17, p. 1141701, 2023.
bellec2018long
G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long
short-term memory and learning-to-learn in networks of spiking neurons,”
Advances in neural information processing systems, vol. 31, 2018.
Rathi2020DIETSNNDI
N. Rathi and K. Roy, “DIET-SNN: Direct input encoding with leakage and
threshold optimization in deep spiking neural networks,” ArXiv, vol.
abs/2008.03658, 2020.
caporale2008spike
N. Caporale and Y. Dan, “Spike timing–dependent plasticity: a hebbian
learning rule,” Annu. Rev. Neurosci., vol. 31, pp. 25–46, 2008.
Hazan_2018
H. Hazan, D. J. Saunders, H. Khan, D. Patel, D. T. Sanghavi, H. T. Siegelmann,
and R. Kozma, “Bindsnet: A machine learning-oriented spiking neural networks
library in python,” Frontiers in Neuroinformatics, vol. 12, p. 89,
2018.
deng2012mnist
L. Deng, “The mnist database of handwritten digit images for machine learning
research,” IEEE Signal Processing Magazine, vol. 29, no. 6, pp.
141–142, 2012.
deng2009imagenet
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A
large-scale hierarchical image database,” in Computer Vision and
Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009, pp. 248–255.
zhang2016colorful
R. Zhang, P. Isola, and A. A. Efros, “Colorful image colorization,” in
Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The
Netherlands, October 11-14, 2016, Proceedings, Part III 14. Springer, 2016, pp. 649–666.
amari2000methods
S.-i. Amari and H. Nagaoka, Methods of information geometry. American Mathematical Soc., 2000, vol. 191.
karakida2019universal
R. Karakida, S. Akaho, and S.-i. Amari, “Universal statistics of fisher
information in deep neural networks: Mean field approach,” in The 22nd
International Conference on Artificial Intelligence and Statistics. PMLR, 2019, pp. 1032–1041.
kim2022exploring
Y. Kim, Y. Li, H. Park, Y. Venkatesha, A. Hambitzer, and P. Panda, “Exploring
temporal information dynamics in spiking neural networks,” arXiv
preprint arXiv:2211.14406, 2022.
erhan2009visualizing
D. Erhan, Y. Bengio, A. Courville, and P. Vincent, “Visualizing higher-layer
features of a deep network,” University of Montreal, vol. 1341,
no. 3, p. 1, 2009.
han2015learning
S. Han, J. Pool, J. Tran, and W. Dally, “Learning both weights and connections
for efficient neural network,” Advances in neural information
processing systems, vol. 28, 2015.
|
http://arxiv.org/abs/2307.06069v1 | 20230712103858 | Non-semisimple link and manifold invariants for symplectic fermions | [
"Johannes Berger",
"Azat M. Gainutdinov",
"Ingo Runkel"
] | math.QA | [
"math.QA",
"math.GT",
"16T99"
] |
Johannes Berger ^a, Azat M. Gainutdinov ^b and Ingo
Runkel ^c
^a Département de Mathématique, Université Libre de Bruxelles,
Campus de la Plaine, Boulevard du Triomphe, 1050 Bruxelles, Belgium
^b Institut Denis Poisson, CNRS, Université de Tours,
Parc de Grandmont, 37200 Tours, France
^c Fachbereich Mathematik, Universität Hamburg
Bundesstraße 55, 20146 Hamburg, Germany
Emails:
[email protected],
[email protected],
[email protected]
We consider the link and three-manifold invariants
in <cit.>, which are defined
in terms of certain non-semisimple finite ribbon categories together with a choice of tensor ideal and modified trace.
If the ideal is all of , these invariants agree with those defined by Lyubashenko in the 90's.
We show that in that case the invariants
depend on the objects labelling the link only through their simple composition factors,
so that
in order to detect non-trivial extensions one needs to pass to proper ideals.
We compute examples of link and three-manifold invariants for being the category of N pairs of symplectic fermions.
Using a quasi-Hopf algebra realisation of ,
we find that the Lyubashenko-invariant of a lens space is equal to the order of its first homology group to the power N, a relation we conjecture to hold for all rational homology spheres.
For N ≥ 2, allows for tensor ideals with a modified trace which are different from all of and from the projective ideal.
Using the theory of pull-back traces and symmetrised cointegrals,
we show that the link invariant obtained from can distinguish a
continuum of indecomposable but reducible objects which all have the same composition series.
§ INTRODUCTION
While a large body of research on semisimple Reshetikhin-Turaev type invariants
of three-manifolds
has been accumulated since their inception in the 90ies, much less is known about quantum invariants obtained from non-semisimple input data. After initial successes of obtaining three-manifold invariants from non-semisimple Hopf algebras by Hennings <cit.>, and from modular tensor categories which need not be semisimple by Lyubashenko <cit.>, the next significant step forward came only with the introduction of modified traces on tensor ideals and using these to define link invariants by Geer, Kujawa, Patureau-Mirand, and Turaev
<cit.>.
This led to the definition of three-manifold invariants
by Costantino, Geer, and Patureau-Mirand <cit.>,
and three-dimensional topological field theories via non-semisimple modular tensor categories and variants thereof <cit.>. The non-semisimple topological field theories of <cit.> recover the original Reshetikhin-Turaev theory in the semisimple case, as well as the mapping class group actions found by Lyubashenko in the non-semisimple case <cit.>.
Our focus here is on non-semisimple link and three-manifold invariants,
and not on topological field theory.
The purpose of the present paper is two-fold:
(a) We exhibit a number of general properties of the non-semisimple invariants defined in <cit.> which set them apart from the semisimple case, and from Lyubashenko's invariant.
(b) We take one example of a modular tensor category –
that of symplectic fermions <cit.> – as input datum and provide explicit computations of some basic invariants. We hope this example based exposition can be useful as a guide to the rather technical definition of the invariants in <cit.>.
If we should single out one property as particularly interesting then it would be that
non-semisimple invariants are able to distinguish a continuum of indecomposable but reducible objects which all have the same composition series.
In other words, these invariants are able to detect extensions and not just the composition series, in contrast to the Lyubashenko invariant.
This paper splits into two parts of roughly equal length.
The first part consists of Sections <ref> and <ref> and contains the result for point (a) and summarises the findings of the explicit computations in (b). The second part, Sections <ref>–<ref>, provides all the technical and lengthy details that went into the results in (b).
We will now give a quick overview of the contents of the first part of the paper. Throughout this introduction we fix
: a finite ribbon tensor category over an algebraically closed field k .
In particular, only has a finite number of
(isomorphism classes of)
simple objects, the tensor unit is simple,
and each simple object has a projective cover with finite composition series.
The ribbon structure endows with a pivotal structure and a braiding, which may or may not be degenerate.
§.§ Invariants of ribbon graphs
In Sections <ref> we focus on invariants of ribbon graphs in [R]^3 (or, equivalently, in S^3). These depend on the choice of a tensor ideal in and on a modified trace on . Let us describe some properties of these ingredients and the resulting invariants in turn.
§.§.§ Tensor ideals
A tensor ideal ⊂ is a full subcategory that is closed
under tensoring with arbitrary objects in and under taking direct summands in the sense that if X ⊕ Y ∈, then also X,Y ∈.
The two canonical examples are the whole category itself and the full subcategory of projective objects. In fact, any tensor ideal contains the projective objects and is contained in the whole category,
and the subcategory of projectives is a proper ideal iff the category is not semisimple (<Ref>). Thus the freedom to choose a tensor ideal is present only in the non-semisimple case.
In this work, the main sources of tensor ideals that are not or , and that we call intermediate, are given by subcategories of all objects in with the property that their image under a monoidal functor →𝒟 is in a tensor ideal of 𝒟, e.g. in Proj(𝒟).
We refer to such ideals as pullback ideals.
§.§.§ Modified traces
A modified trace on a tensor ideal is a family of
linear maps
( t_X : End(X) → k )_{X in the ideal}
which induce symmetric pairings Hom(X,Y) × Hom(Y,X) → k for X, Y in the ideal, and which satisfy a partial trace compatibility condition with the categorical trace,
see <cit.>.
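Explicitly, the compatibility condition in question is the partial trace property familiar from the modified-trace literature (the notation here is ours: t_X denotes the modified trace and tr_Y the right partial categorical trace built from the duality maps and pivotal structure):

t_{X ⊗ Y}(f) = t_X( (id_X ⊗ tr_Y)(f) ) ,   for X in the ideal, Y an arbitrary object, f ∈ End(X ⊗ Y) .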
In fact, the categorical trace tr_X(f), f ∈ End(X) – defined in terms of the duality maps –
is the unique-up-to-scalar modified trace on the whole category.
The usefulness of modified traces derives from the observation that while the categorical trace is identically zero on any proper tensor ideal (<Ref>), it may be possible to find non-zero modified traces on a given tensor ideal .
We now assume that
is in addition unimodular ,
that is, the projective cover P_ of the tensor unit also has socle given by the tensor unit.
Then there is a unique-up-to-scalar modified trace on the projective ideal, and for each non-zero such trace the above symmetric pairing is non-degenerate, see
<cit.> for the case of Hopf algebras
and <cit.> for the general case.
An important way to obtain intermediate tensor ideals with modified traces is
via pullback
of a tensor ideal with modified trace in a pivotal tensor category 𝒟 along a pivotal monoidal functor →𝒟,
see <cit.>
and <Ref>.
§.§.§ Invariants of -admissible ribbon graphs
Let ⊂ be a tensor ideal and _X a modified trace on .
A -coloured ribbon graph T in [R]^3 is called
-admissible if at least one edge is coloured by an object
X ∈. For such a T one obtains an invariant (T) of T by
cutting an -labelled edge and taking the modified trace
(see <cit.> and <Ref>). In particular, this prescription is independent of which -labelled edge is chosen in the cutting procedure. For any W-labelled edge that is a framed knot (i.e. does not touch a coupon) and which is not used in the cutting procedure, the invariant depends only on the class of W in the Grothendieck ring of (<Ref>).
From this one can deduce that
if T is a framed link (a ribbon graph without coupons), in order to detect any structure of the objects colouring the link other than their composition series, one needs to work with a proper ideal ⊊.
This illustrates once more that tensor ideals and modified traces are crucial for non-semisimple invariants.
§.§.§ Symplectic fermion example
To see how these non-semisimple invariants behave in a concrete example, we consider the category [] of finite-dimensional left
modules over = (N, β), the
symplectic fermion quasi-Hopf algebra.
As an algebra it is the semidirect product of [Z]_2 with the direct sum of a Graßmann algebra and a Clifford algebra in 2N generators each.
The parameter β∈[C] satisfies β^4 = (-1)^N.
Symplectic fermions were introduced in the context of two-dimensional conformal field theory <cit.> and the vertex operator algebra was studied in <cit.>. A corresponding
modular
tensor category was proposed in <cit.> and was expressed as representations of a
factorisable
ribbon quasi-Hopf algebra in <cit.>. In <cit.> it was shown that for N=1, [] is indeed ribbon-equivalent to the representation category of the symplectic fermion vertex operator algebra, see also <cit.> for recent progress on general N.
The categories []
are comparatively easy to handle. For example, they always have four simple objects, but the maximal
composition length of indecomposable projective modules
increases with N and is given by 2^2N. The category [] is a finite ribbon tensor category and is modular, i.e. the braiding satisfies a non-degeneracy condition.
The symplectic fermion quasi-Hopf algebra can be understood as a version of the quantum group
of type B_N
at q=i <cit.>. Note that the parameter is the rank while the root of unity is kept fixed. In the other class of standard examples, namely small quantum groups for sl(2), as in <cit.>,
the parameter is the order of the root of unity. These two classes of examples probe different aspects of non-semisimple invariants: For B_N the number of simple objects is always 4 and the maximal composition length
of their projective covers
increases, while for sl(2) the maximal composition length
of projective covers
is 4, but the number of simple objects increases.
For [] we compute invariants of the unknot
and the Hopf link with different framings,
and of torus knots for the tensor ideals [], [[]] and, importantly, also for a choice of intermediate ideal (in the case N=2).
The ideal and a modified trace on it are obtained via the pullback construction
specialised to quasi-Hopf algebras, see Section <ref>.
The results are listed in <Ref> from Section <ref>.
Most interestingly, the intermediate ideal we consider contains a continuum of mutually non-isomorphic indecomposable objects which all have the same composition series made out of 4 simple objects.
The family we consider can be conveniently parameterised by a complex 2 × 2-matrix
∈Mat_2([C]),
and we denote the corresponding representation of by P_∈.
It turns out that
the invariants can depend continuously on . For example the invariant of O(n,P_), the n-framed unknot coloured by P_, is
(O(n,P_)) = 2n(1+()) .
Such a dependence on continuous parameters can never happen
for the extremal choices = and = for the tensor ideal in a finite tensor category,
see <Ref> for more details.
Let us also mention that in some cases, the invariants based on the intermediate ideal are topologically stronger (i.e. able to distinguish more knots) than those based on the ideal , see a discussion in the last item (3) in Section <ref>.
§.§ Invariants of three-manifolds
In Section <ref> we consider invariants of
(closed, oriented)
three-manifolds with embedded
-admissible
ribbon graphs.
As above, is a unimodular ribbon finite tensor category, ⊂ a tensor ideal, and a modified trace on .
We equally consider
Lyubashenko invariants of three-manifolds.
For the latter there are no restrictions on the embedded ribbon graph, in particular it is allowed to be empty.
§.§.§ Surgery invariants
The invariants we study are obtained as surgery invariants, and we need to extend our definition of the invariants
of -admissible
ribbon graphs to so-called bichrome graphs. These consist of an ordinary ribbon graph, called “blue” and the surgery link called “red”. The red component does not carry further labels.
The invariant of bichrome graphs needs another ingredient
to evaluate the red part,
namely the universal Hopf algebra in , defined as a coend, and the integral →, unique up to a scalar. We review the slightly intricate construction of the corresponding invariant (T) from <cit.> in <Ref>.
In order to obtain invariants of three-manifolds we need another condition, namely that
is twist-nondegenerate ,
which is satisfied
by every modular tensor category
<cit.>,
and in particular by [].
This condition means that (O_±) ≠ 0, where O_± is the red ±1-framed unknot, and this is necessary to achieve invariance under the Kirby 1-move or stabilisation.
One can show that
(M,T)
= 𝒟^{-1-ℓ(L)} δ^{-σ(L)} ( L ∪ T )
is a topological invariant of the pair (M,T). Here, M is a three-manifold, T is a (blue) -admissible ribbon graph embedded in M, L is a surgery link representing M, and ℓ(L) and σ(L) are the number of components and the signature of the linking matrix of the surgery link L, respectively.
See <cit.> and <Ref> for details and for the definition of the constants 𝒟 and δ.
The original non-semisimple invariant defined by Lyubashenko <cit.> is recovered by taking = and _X to be the categorical trace _X:
(M,T) = (M,T) .
The strong point of Lyubashenko's invariant is that it can be defined for just the three-manifold M without an embedded ribbon graph (since one can always insert a loop labelled by ). On the downside, (M,T) is often zero. For example (S^2 × S^1,∅) = 0 iff is non-semisimple (<Ref>), and (M,T) = 0
for all M
if T contains an edge labelled by an object in a proper tensor ideal (<Ref>).
The price to pay to do better than (M,T) is that one needs to include an -admissible ribbon graph in M. Furthermore, the ribbon graph must
wrap non-trivially around the surgery link of M,
or else the invariant reduces to (M,∅) times the link invariant (T) discussed above (<Ref>).
§.§.§ Lens space invariants in the symplectic fermion example
Fix two positive coprime integers p,q.
The lens spaces 𝔏(p, q) are by definition a quotient of S^3 = SU(2) by a q-dependent action of [Z]/p[Z]. A surgery presentation can be obtained from a continued fraction expansion of p/q. We give a general expression for
(𝔏(p, q),T), where T is a loop bounding a disc (thus reducing to Lyubashenko's invariant
as given
for 𝔏(p, q) already in <cit.>), and where T wraps a non-trivial cycle, i.e. is linked non-trivially with L (<Ref>).
In the symplectic fermion example = [], = (N, β) we find
(𝔏(p, q)) = p^N .
Since |H_1(𝔏(p, q))| = p, this leads us to
conjecture: (M) =
0, if H_1(M) is infinite
|H_1(M)|^N, otherwise
for all closed 3-manifolds M.
This conjecture for symplectic fermions fits into a series of observations relating non-semisimple and semisimple invariants which we briefly recall in <Ref>, where we also discuss a possible generalisation to other modular tensor categories.
Next we choose the projective ideal and T the loop wrapping a (specific) non-trivial cycle coloured by the projective cover P_ of the tensor unit. We get
(𝔏(p, q),T) = (1/2) c + (1/2) β^2 q^N ,
where c ∈{0}∪{β^m | m∈[Z] } is determined recursively from the continued fraction expansion (<Ref>).
Note that this is no longer just an invariant of the lens space 𝔏(p, q), but it depends on our choice of cycle wrapped by T. Indeed, 𝔏(p, q) ≅𝔏(p, q+p) as manifolds, but the above expression is not invariant under this substitution.
The most interesting case is again that of the intermediate ideal (here again N=2). In that case we find for T the non-trivial loop as above and labelled by P_, ∈Mat_2([C]) (<Ref>):
(𝔏(p, q),T) = -2pq(1+()) .
In fact, by adding a coupon given by acting with elements
z
of the centre of (2, β), the lens space invariants obtained by varying z allow one to recover all entries of the matrix .
To the best of our knowledge, our results provide the first explicit computation of the behaviour of quantum invariants under variation of the choice of tensor ideal, including an intermediate tensor ideal.
It would also be interesting to calculate invariants associated to
ideals with modified traces
in pivotal tensor categories as
constructed in <cit.>, but this is outside the scope of our paper.
§.§ Detailed computations
In <Ref>, we review in detail our conventions for
(pivotal,
quasi-triangular, ribbon)
quasi-Hopf algebras H.
Then we recall the
construction of
modified traces on the projective ideal of the category [H] of finite-dimensional left H-modules.
This uses the theory <cit.> of symmetrised (co)integrals of H
which is a quasi-Hopf generalisation of the construction in <cit.>.
Using the right integral of H, in Section <ref> we calculate the integral → of the universal Hopf algebra ∈, following <cit.>, and characterise the twist-nondegeneracy condition in quasi-Hopf terms.
We then describe in Section <ref> how to get
intermediate
tensor ideals and modified traces from suitable quasi-Hopf subalgebras,
see the precise statement in <Ref>.
In <Ref> we review the definition of the family of non-semisimple symplectic fermion quasi-Hopf
algebras from <cit.>.
We show that [] is prime in the sense of
<cit.>.
Then we return to the pullback construction, and give an explicit example of a
sub-quasi Hopf algebra A in H = (2, β).
We introduce a family of H-modules P_, bijectively parameterised by complex (2
× 2)-matrices , which are lifts of the projective cover of the tensor unit in
[A].
After having established some properties of these modules, we compute the pullback
modified trace on the pullback ideal of [[A]]
in [H].
<Ref> consists of the computations of the
various
(framed)
link invariants first presented at the end of
<Ref>.
<Ref> contains expressions for lens space invariants (both for empty manifolds and with -admissible ribbon graphs) in terms of quasi-Hopf data, see <Ref>, where all tensor ideals are treated on an equal footing via a certain (possibly degenerate) bilinear form on the centre of the quasi-Hopf algebra. Then, we finally give the explicit computation
for [] using the SL(2,ℤ) action on the centre of computed in <cit.>.
Acknowledgements.
The authors would like to thank Christian Blanchet, Marco de Renzi, Ehud Meir, Vincentas Mulevičius,
and Joost Vercruysse for answers, comments, remarks, and helpful discussions.
JB is supported by the Fédération Wallonie-Bruxelles through a postdoctoral fellowship in the framework of the actions de recherche concertées-grant
“From algebra to combinatorics, and back”.
The work of AMG was supported by the CNRS, and partially by the ANR grant JCJC ANR-18-CE40-0001 and the RSF Grant No. 20-61-46005.
AMG is also grateful to Hamburg University for its kind hospitality in 2022.
IR is partially supported by the Deutsche Forschungsgemeinschaft via the Cluster of
Excellence EXC 2121 “Quantum Universe” - 390833306.
Conventions.
Throughout an algebraically closed field is fixed, and all linear structures
will be considered over it.
All functors between linear categories are assumed to be linear.
§ LINK INVARIANTS FROM TENSOR IDEALS AND MODIFIED TRACES
In this section we start by reviewing some parts of the theory of tensor ideals and modified traces and how to use these to define link invariants, as developed in
<cit.>.
We prove some general properties of the resulting invariants. The main result in this section is the observation that – in the example we consider – working with an intermediate tensor ideal
allows one to detect internal structure of indecomposable
(but reducible)
objects not visible to invariants built from either of the two canonical ideals, i.e. the whole category or the subcategory of projective objects.
Invariants for the intermediate ideal may also distinguish knots which the two canonical ideals cannot, in our example certain knots from their mirrors.
§.§ Finite tensor categories
Following <cit.> we mean by a finite tensor category a linear abelian
category which
* is finite.
That is:
it has finite-dimensional Hom-spaces;
every object has finite length;
it has finitely many isomorphism classes of simple objects;
every object has a projective cover.
[
Recall that P ∈ is projective iff the associated covariant Hom-functor
(P, ) is exact.
A projective cover of an object X ∈ is an epimorphism p_X P_X
→ X with projective source such that every epimorphism P → X with P
projective factors epimorphically through p_X.
]
* is monoidal with bilinear tensor product and simple tensor unit
,
* is rigid, i.e. each object has a left and a right dual with corresponding evaluation
and coevaluation morphisms, see below.
A fusion category is a semisimple finite tensor category.
Without loss of generality, we will assume monoidal categories to be strictly
unital and associative.[
The strictness assumption is made only in Sections <ref> and <ref>.
In the technical example computations in Sections <ref>–<ref> we work with quasi-Hopf algebras, and their representation categories are non-strict.]
For the rest of this section is a finite tensor category.
We denote by a set of representatives of isomorphism classes of simple
objects, and agree that ∈.
A choice of projective cover of a simple object U ∈ shall be denoted by
UU→ U.
Our notation for left and right duals of an object X is X and X,
and the evaluation and coevaluation morphisms are
_X X X →
, _X X X→
and
_X → X X
, _X →X X
,
respectively.
We will employ graphical calculus in the form of string diagrams, which in our convention are read
from bottom to top.
Thus e.g. the left evaluation and coevaluation morphisms for X are drawn as
a cap joining two strands labelled by X and its dual (for the evaluation), and a cup creating two such strands (for the coevaluation) [string diagrams].
To is associated its Grothendieck ring .
This is, as an abelian group, generated by objects of modulo the relations X +
Z = Y in iff there is a short exact sequence 0 → X → Y
→ Z → 0 in ;
we denote the class of X ∈ in by X.
The multiplication determined by X·Y =
X Y is well-defined because is exact by
<cit.>.
The Grothendieck ring is in fact freely generated over [Z] by :
every object X ∈ possesses a finite composition series, in which the simple
object U occurs XU = _(U, X) times,
see <cit.>.
Thus the class of any X ∈ can be written as a sum over classes of simple
objects U with multiplicity XU.
In particular, there are structure constants N_{UV}^{W} = [U ⊗ V : W] ∈ ℕ such that
[X] · [Y] = ∑_{U, V, W} [X : U]\, [Y : V]\, N_{UV}^{W}\, [W] ,
where U, V, W run over the chosen representatives of the simple objects, for all X, Y ∈ .
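As a small illustration (our own sketch, not from the source), multiplication in a Grothendieck ring can be implemented directly from a table of structure constants. Here we use the two simples 𝟏, Π of super-vector spaces (with Π ⊗ Π ≅ 𝟏), which reappear below in the symplectic fermion example; the table N is the only assumption.

```python
from collections import Counter
from itertools import product

# structure constants N_{UV}^W for the simples {'1', 'Pi'} of
# super-vector spaces, where Pi (x) Pi = 1
N = {('1', '1'): {'1': 1},   ('1', 'Pi'): {'Pi': 1},
     ('Pi', '1'): {'Pi': 1}, ('Pi', 'Pi'): {'1': 1}}

def gr_mult(x, y):
    """Multiply two Grothendieck classes given as Counters over the simples."""
    out = Counter()
    for (u, mu), (v, mv) in product(x.items(), y.items()):
        for w, n in N[(u, v)].items():
            out[w] += mu * mv * n
    return out

# [X] = 2[1] + 3[Pi], [Y] = [Pi]  =>  [X][Y] = 3[1] + 2[Pi]
print(gr_mult(Counter({'1': 2, 'Pi': 3}), Counter({'Pi': 1})))
```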
§.§ Tensor ideals
A right tensor ideal of (or right ideal for short)
is a full subcategory such that
* for all X ∈ and V∈, also X V ∈, and
* it is closed under retracts, i.e. if Z ∈ and Z ≅ X ⊕ Y, then also X, Y ∈.
Note that by the second condition, ideals are replete, i.e. if X ≅ Y with X ∈, then Y ∈.
The definition of left and two-sided ideals in is analogous.
We will refer to two-sided ideals just as ideals.
Two trivial examples of ideals are the subcategory consisting of only the zero object,
as well as the category itself.
An important example of an ideal is the subcategory of projective objects, which we
denote by . Indeed, closure under tensor products from both sides follows from <cit.>, and closure under retracts is the statement that direct summands of projective objects are again projective.
A category is braided if it is equipped
with a natural isomorphism with components
_X,Y : X ⊗ Y → Y ⊗ X , drawn as a crossing of the X- and Y-strands [string diagram],
called the braiding, satisfying the hexagon axioms, see
e.g. <cit.>.
The inverse is drawn with the opposite crossing.
Since ideals are replete, there is no distinction between left, right and two-sided ideals in a braided category.
Let be an ideal in a braided category. Then is closed under taking duals, i.e. X ∈ implies X, X∈, see <cit.>. For example, the zig-zag identities for the left duality maps exhibit X as a retract of X⊗ X ⊗X. The latter object lies in as ideals are two-sided.
The next proposition shows that the projective ideal is the smallest
non-zero ideal (see <cit.>). To give the precise statement, we need some notation.
We say that a tensor ideal is generated by P ∈, written
= P, if every object Q ∈ can be
obtained as a retract of P X for some X ∈. For example, the ideal
generated by is the entire category.
The ideals of ordered by inclusion of full subcategories form a poset, which immediately yields the notion of a sub-ideal.
Note that the intersection ∩[J] of two ideals (which is a well-defined operation on
replete subcategories) is a sub-ideal in and in [J].
Let be a finite tensor category. Then:
* = P for any non-zero projective object P.
* contains no right or left
sub-ideal other than zero and itself.
* If is a non-zero right or left
ideal in , then
is a sub-ideal of .
* admits a non-zero proper ideal if and only if is not semisimple.
It suffices to show (1) for projective covers of simple objects.
Note first that any projective indecomposable U is in the ideal
generated by .
Indeed, since is exact, 𝕀_U U → U is an epimorphism.
Thus there is an epimorphism from U onto
U, which splits by projectivity.
This shows U is a retract of U.
Conversely, ∈U for any U
∈, since
UU
U U
is a composition of epimorphisms (again by exactness of ).
This factors epimorphically through , and hence
is a retract of U𝕀_U.
Now (2) follows immediately from (1).
To see (3), let X ∈ and P ∈.
Since is a right ideal and is an ideal, we have X
P ∈∩.
But I ∩ is a sub-ideal of , so I ∩ must
either be (0) or by (2).
The claim thus follows if we establish that the tensor product of non-zero objects is
non-zero.
This is true because is exact:
since _X is a monomorphism, so is
_X 𝕀_P P →X X P, and hence
if X P ≅ 0, so is P.
Lastly, for (4) note that
is semisimple ⟺^{(*)} 𝟏 ∈ Proj ⟺^{(1)} Proj = ⟨𝟏⟩ = ⟺^{(3)} the smallest non-trivial ideal is itself ,
where the equivalence at (*) follows from X ≅ 𝟏 ⊗ X for every X ∈ together with the fact that the ideal of projectives is replete.
Suppose that is a finite tensor category which is graded as an abelian
category by a set S, i.e. = ⊕_s ∈ S_s
such that there are no non-zero morphisms between objects in _s and _s' for s ≠ s'.
While an ideal is in general not an abelian subcategory, it is closed
under retracts, and thus inherits the grading in the sense that every object X
∈ can be written as ⊕_s∈ S X_s with X_s ∈_s.
Suppose now that _s is semisimple for some s ∈ S.
Then we have _s = _s, and if is an ideal with
_s non-zero, then the subcategory inclusion _s ⊂_s ⊂_s implied by
<Ref> shows that _s
= _s.
§.§ Modified traces on tensor ideals
Recall that a pivotal structure on a finite tensor category is a monoidal
natural isomorphism 𝕀_().
Equivalently, it is a monoidal natural isomorphism between the left and the right dual
functor.
Let from now on be pivotal.
For simplicity of exposition, we fix a left duality functor, and take as right dual of an object X the object X with (co)evaluations induced by the pivotal
structure, meaning that e.g. _X = _X∘ (_X𝕀_X).
Then one can define the right (quantum or categorical) trace of an
endomorphism f ∈_(X) as the number
\operatorname{tr}^{r}_X(f) = \widetilde{ev}_X ∘ ( f ⊗ 𝕀_{X^∨} ) ∘ coev_X \;∈\; \operatorname{End}(𝟏) ≅ k .
We will also just write \operatorname{tr}_X(f).
The right (quantum or categorical) dimension of X ∈ is
\dim(X) = \operatorname{tr}_X(𝕀_X).
One can similarly define the left categorical trace, and in general, the left and the
right categorical trace of a morphism do not agree.
If the left and the right categorical traces of all morphisms agree, then the
category is called spherical.
Later we will only deal with ribbon categories, and these are automatically spherical.
Let be a proper right ideal in the pivotal finite tensor category
.
Then the right categorical trace of vanishes identically on , that
is |_≡ 0.
Let X ∈.
The right trace of an endomorphism f of X is a morphism
_X(f) = → X X→ .
If this is non-zero, then is a retract of X X.
Since X ∈ and is a right ideal, also X X∈. But
ideals are closed under retracts, and so ∈.
Thus contains the ideal generated by , which is .
If the category is not semisimple, or equivalently if it admits a non-trivial proper ideal, then to obtain a version of trace and dimension which does not vanish on the ideal, the notion of a modified trace was introduced and studied in
<cit.>.
A right modified trace on a right ideal of is a family of linear maps
( t_X : \operatorname{End}(X) → k )_{X ∈ }
satisfying the following two axioms:
* Cyclicity:
For every pair of morphisms f : X → Y and g : Y → X between objects of ,
t_X ( g ∘ f ) = t_Y ( f ∘ g ) .
* Right partial trace property:
for X ∈ , V ∈ , and f ∈ \operatorname{End}( X ⊗ V ),
t_{X ⊗ V} (f)
= t_X ( ( 𝕀_X ⊗ \widetilde{ev}_V ) ∘ ( f ⊗ 𝕀_{V^∨} ) ∘ ( 𝕀_X ⊗ coev_V ) ) .
Above, and also in many places below, we have omitted the ⊗-symbol for better readability.
The right partial trace property is simply the natural compatibility condition between the right modified and categorical traces, and it establishes the `multiplicative' property t_{X ⊗ V}(f ⊗ g) = \operatorname{tr}_V(g)\, t_X(f).
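In the toy case of plain vector spaces with t = tr, the partial trace property is the familiar partial trace of linear algebra: tracing out the V-factor first and then X gives the full trace. A quick numerical sanity check (our own sketch; numpy assumed):

```python
import numpy as np

dX, dV = 3, 2
rng = np.random.default_rng(0)
f = rng.standard_normal((dX * dV, dX * dV))   # endomorphism of X (x) V

lhs = np.trace(f)                             # trace over X (x) V

f4 = f.reshape(dX, dV, dX, dV)                # indices (x, v, x', v')
ptr_V = np.einsum('xvyv->xy', f4)             # close off the V-strand
rhs = np.trace(ptr_V)                         # then trace over X

assert np.isclose(lhs, rhs)
```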
Analogously one defines left modified traces on left ideals, where in (2) one uses the left partial trace.
On two-sided ideals, one can have both left and right modified traces;
a left modified trace on an ideal does not have to be right, but if it is, we will simply speak of a two-sided modified trace, or modified trace for
short.
Adding to <Ref>, if is ribbon, then a left
modified trace on a (two-sided) ideal is automatically right.
The proof can easily be adapted from
e.g. <cit.>.
A (left, or right, or two-sided) modified trace on an ideal
induces canonical pairings
(X, V) ×(V, X) →,
(f, g) ↦_X (g ∘ f)
for
X ∈, V ∈ ,
and one calls non-degenerate if all of these pairing are
non-degenerate.
* The categorical trace of serves as a modified trace on the ideal
=.
*
Any modified trace on is a scalar multiple of the categorical trace:
from X = 𝟏 ⊗ X,
together with cyclicity and the partial trace property,
one finds t_X(f) = t_{𝟏}(𝕀_{𝟏}) \cdot \operatorname{tr}_X(f).
* If is unimodular, i.e. if ( , ) ≅, then admits a unique (up to scalar) non-degenerate modified
trace <cit.>.
Fix a non-zero ∈( , ). By non-degeneracy, we must have
_
(∘) ≠ 0, and we can normalise the modified trace so that _
(∘) =1.
The categorical trace is non-degenerate iff is semisimple.
By <Ref>,
vanishes on any proper ideal, so if is non-degenerate, the only possible proper
ideal of is (0), and we can conclude from <Ref> that
is semisimple.
Conversely, if is semisimple, then it is unimodular and =.
By points (2) and (3) in <Ref>, there exists a
non-degenerate modified trace on which is a scalar multiple of .
In addition to <Ref>, we observe that in the non-semisimple case is maximally degenerate in the following sense.
Let be a right ideal in and a right modified trace on . Define
Ann(,)
:= { X ∈ | _X(f) = 0 for all f ∈_(X) } .
Then Ann(,) is again a right ideal. Indeed, it is closed under retracts by cyclicity of and the partial trace property shows that for X ∈Ann(,) and V ∈ also X ⊗ V∈Ann(,). Since the categorical trace is non-zero on the tensor unit, Ann(,) is a proper right ideal, and by <Ref> every other right ideal is contained in it.
If is semisimple, we have Ann(,) = (0) by <Ref>.
If is not semisimple, we have the inclusions
(0) ⊊ Proj ⊆ 𝒥 ⊆ Ann(,) ⊊ 
of right ideals, for any proper and non-zero right ideal 𝒥.
In <cit.>, the following `pullback construction' of tensor ideals and modified traces is introduced.
Recall that for a strong monoidal functor F [C] →[D] between rigid
categories, there is a canonical natural isomorphism ξ_X F(X) →(FX) (for an explicit expression see
(<ref>) in the appendix).
If [C] and [D] are pivotal, then we say that F is pivotal if we
have commuting diagrams
\begin{tikzcd}
FX \arrow[r, "F \delta^{[C]}_X"] \arrow[d, swap, "\delta^{[D]}_{FX}"] & F(X^{∨∨}) \arrow[d, "\xi_{X^∨}"] \\
(FX)^{∨∨} \arrow[r, swap, "(\xi_X)^∨"] & (F(X^∨))^∨
\end{tikzcd}
for all X ∈[C].
The next statement is a special case of the construction in <cit.> for submodule categories and module traces.
Let F →[D] be a strong monoidal functor between two finite
tensor categories, and let be a right ideal of [D].
* The pullback F^∗ of along F — i.e. the full
subcategory consisting of X ∈ such that F(X) ∈ — is a
right ideal of .
* Let the categories in addition be pivotal, and suppose that F is
pivotal.
If is a right modified trace on , then the pullback F^∗ of along F — i.e. the family of linear maps defined by
( F^∗)_X (f)
= _FX( Ff )
for X ∈ F^∗ — is a right modified trace on F^∗.
Since F [C] →[D] is a strong monoidal functor, A
X = A FX for X ∈[C], A ∈[D], defines a right
[C]-module structure on [D], and F is a right [C]-module functor
with respect to .
Note that the associators and unitors of are induced by the multiplication
and unit isomorphisms of F.
A right ideal ≤[D], being closed under tensor products from the
right, is a right submodule category which is also closed under retracts, and so
<cit.> implies that F^* is a right ideal.
Now let be a right modified trace on ≤[D], and let F preserve pivotal structures.
We claim that these conditions guarantee that is a [C]-module trace on
the [C]-module endocategory (, 𝕀) in the sense
of <cit.>.
To see this, note first that the canonical natural isomorphism ξ_X
F(X) →(FX) is given by the composition
F(X^∨)
⟶ F(X^∨) ⊗ FX ⊗ (FX)^∨
⟶ F(X^∨ ⊗ X) ⊗ (FX)^∨
⟶ F(𝟏) ⊗ (FX)^∨ ≅ (FX)^∨ ,
where F_2 and F_0 are, respectively, the multiplication and the unit isomorphisms
of the strong monoidal functor F.
Then one inserts ξ_X ∘ξ_X into the right partial trace condition
satisfied by for any endomorphism of A X, and uses the
commutativity of the diagram (<ref>) to
conclude that is a module trace.
Thus the claim about pullback modified traces follows from the more general
construction in <cit.>.
Similar statements hold of course for all variations, e.g. the pullback of a left
modified trace on a left ideal.
Note also that for ribbon, the pullback of any modified trace is
automatically two-sided.
<Ref> will be our source of non-trivial intermediate ideals with modified trace. Namely, we consider F^∗⊂ for F : →[D] pivotal, [D] an appropriate unimodular tensor category, and the projective ideal in [D], see <Ref> for the application to link invariants and <Ref> for details in the case of quasi-Hopf algebras.
§.§ Invariants of C-coloured ribbon graphs and links
Invariants of C-coloured ribbon graphs and links
Let now in addition be ribbon, and let us denote by
the rigid ribbon category with
* objects:
Finite tuples
(V, ε) = ( (V_1, ε_1), …,
(V_m, ε_m)) with V_k ∈ and
ε_k∈{+,-}.
We denote by |V| = m the length of the tuple.
* morphisms:
Each (V, ε) ∈ determines in a
natural way a set of -coloured framed oriented points in [R]^2, say,
along the x-axis.
A morphism
T (V, ε) → (V', ε')
is then an isotopy class[
The isotopy can move the ribbon graph freely in the interior [R]^2 × (0,1), but keeps boundary points and framings fixed. E.g. coupons can be rotated and translated arbitrarily, dragging the attached ribbons along accordingly.
] of -coloured ribbon graphs in [R]^2 ×
[0,1] from (V, ε) ×{0} to
(V', ε') ×{1} such that framings,
orientations, and labels match — e.g. an incoming boundary vertex (V,+) means
that the corresponding strand is coloured with V and oriented upward.
The tensor product acts on objects by concatenation of lists, and the tensor unit of
is the empty tuple ∅.
We will draw morphisms in as blue -coloured ribbon
graphs so as not to confuse them with string diagrams in , for which we use
black.
The Reshetikhin-Turaev functor → is a ribbon functor which acts on objects as
( V, ε )
= ⊗_i = 1^|V|
V_i^ε_i where
V^+ = V
and
V^- = V
and on morphisms via evaluation of the corresponding string diagram, see <cit.> and <cit.>.
Fix now an ideal in and a modified trace on
.
A morphism in is closed
if it is an endomorphism of the tensor unit ∅ of ,
and a closed -coloured ribbon
graph T is -admissible if it has at least one edge coloured by an
object in .
If the ideal is clear from the context, we will sometimes just say
admissible.
Given an -admissible graph T with an edge coloured by X ∈,
a cutting presentation of T is a -coloured ribbon graph T_X ∈ ( (X,+), (X,+) ) such that closing up T_X, i.e. joining its outgoing to its incoming boundary point, gives back T.
Note that cutting presentations are not necessarily unique.
Nevertheless, the properties of modified traces allow for the following:
Let T be an -admissible -coloured ribbon graph.
Then the number
(T) := _X ( (T_X) )
for some cutting presentation T_X of T
is well-defined, i.e. an invariant of the isotopy class of T.
This is shown in <cit.>, see also <cit.> for a proof in the present setting.
As in <cit.> we call the renormalised invariant of
-admissible -coloured ribbon graphs.
If one takes the ideal to be all of and the modified trace to be the categorical trace, then the renormalised invariant is just the standard Reshetikhin-Turaev invariant of closed -coloured ribbon graphs.
Let L^M_X denote a closed -coloured ribbon graph containing a component which is a knot with colour M ∈. By this we mean that this component is an embedded ribbon loop which does not touch any coupons, but which is allowed to link with the rest of the ribbon graph. Furthermore, L^M_X contains an edge (distinct from the knot) coloured by X ∈.
For any exact sequence 0 → A → B → C → 0 in we have
( L^B_X )
=
( L^A_X ) + ( L^C_X )
.
Before proving this, let us state an obvious consequence:
intuitively, it tells us that given an admissible graph, any component that `looks like
a loop' need only be coloured by simple objects, as long as we keep one edge coloured
by something from the ideal on which our modified trace acts.
More precisely, we have the following corollary.
The renormalised Reshetikhin-Turaev invariant of L^B_X
depends only on the class of B in the Grothendieck ring of ,
that is
(L^B_X)
=
∑_U ∈BU· (L^U_X)
.
In we can without loss of generality represent L^B_X as a closure of a graph in which the B-coloured knot component has been opened up [picture];
cutting off the top cap and the bottom left cup, we obtain a morphism
η(X)_B ∈ ( B ⊗ B^∨ , X^∨ ⊗ X )
after evaluating with the Reshetikhin-Turaev functor.
By our requirement on the B-coloured edge to be part of a knot, this actually defines a dinatural transformation
η(X) ∈(
, X X ).
If 0 → A → B → C → 0 is any short exact sequence, by
<cit.> we have
η(X)_B ∘ \widetilde{coev}_B
=
η(X)_A ∘ \widetilde{coev}_A + η(X)_C ∘ \widetilde{coev}_C
.
From monoidality of and linearity of the modified trace we therefore get
t_X ( ( (L^B_X)_X ) ) = t_X ( ( (L^A_X)_X ) ) + t_X ( ( (L^C_X)_X ) )
for the corresponding cutting presentations, that is ( L^B_X ) = ( L^A_X ) + ( L^C_X ),
as claimed.
* <Ref> and its
<Ref> also hold for more general
types of -coloured graphs.
Indeed, looking at the proof, we see that it is enough to require the M-coloured
edge of L_X^M to be part of a natural transformation, i.e. natural in M.
*
The invariant (T) for an -admissible ribbon graph T does
not change if we add a -coloured loop O_ to T:
(T) = (T ⊔ O_).
For the ideal = every object is admissible, in particular the tensor
unit . For a -coloured link L we can write
(L) =
[](L) =
[](L ⊔ O_)
and apply <Ref> with X=.
This shows that the Reshetikhin-Turaev invariant depends on the objects
colouring L only up to their class in the Grothendieck ring.
§.§ Examples of link invariants for symplectic fermions
We will now give some examples of renormalised Reshetikhin-Turaev
invariants.
We consider the following families of -coloured links, where X, U ∈ (see
Figure <ref>):
* nX, n ∈ ℤ:
the unknot with framing number n.
* mX, m ∈ 2ℤ+1: the (2, m)-torus knot.
This is the braid closure, i.e. the categorical trace in , of c_{X, X}^{\,m}.
* X,aU,b, a, b ∈ ℤ:
the Hopf link of X with U with framings a and b, respectively, i.e. the braid closure of (θ_X^{a} ⊗ θ_U^{b}) ∘ c_{U, X} ∘ c_{X, U} in .
Note that ± 1X is isotopic to ± 1X, and that
3X is the X-coloured trefoil.
Replacing + with - in ± nX or ± mX turns
each knot into its mirror image.
By an appropriate rotation one sees that X,aU,b and U,bX,a are isotopic.
For each link L in the list above, we compute the invariant (L),
where we vary the input category , the tensor ideal with modified
trace , and the colours X,U of the link components.
The object X will always be in , while on U no such restriction is
imposed.
However, by <Ref> it is sufficient
to restrict oneself to simple U. We have the relations
(± 1X) = (± 1X) ,
(X,aU,b) = (U,bX,a) ,
(1X ⊗ U) = (X,1U,1) ,
where the last line follows from (_X⊗_U) ∘_U, X∘_X, U = _X ⊗ U.
Let us consider the ingredients , , and X in turn.
§.§.§ The finite ribbon category :
We take to be the category [] of finite-dimensional left
modules over = (N, β), the
symplectic fermion quasi-Hopf algebras from <cit.>.
The parameters N ∈ ℤ_{>0} and β ∈ ℂ are subject to
β^4 = (-1)^N.
We review the definition of (N, β) in detail in <Ref>.
The value of β affects the ribbon quasi-Hopf structure of , but not the
algebra structure.
Independent of N, the algebra (N, β) has precisely four simple modules, denoted
X_0^± and X_1^±.
The simple objects X_1^± are projective, but X_0^± are not.
The length of the composition series of the projective covers
P_0^± of X_0^± is 2^{2N}, so in particular it grows with N.
The category [] is unimodular, i.e. the tensor unit X_0^+ is also the socle of P_0^+.
§.§.§ The tensor ideal and the modified trace :
We consider three different choices of tensor ideal in =[]:
* [] with the categorical trace .
* [[]] with the unique-up-to-scalar non-degenerate modified trace from <Ref> (3).
* AH with pullback modified trace [^A], where we take H = (2, β) and A ⊂ H is a pivotal unimodular quasi-Hopf subalgebra.
In point (3),
the intermediate ideal
=
AH is the pullback ideal (as in <Ref>) of
[[A]] along the restriction functor
and [^A] is the pullback of the unique-up-to-scalar modified trace on [[A]].
The full details can be found in <Ref>.
For N=2 the three ideals are properly contained in each other:
[[H]] ⊊ AH ⊊ [H] .
Corresponding examples exist for any N ≥ 2, see <Ref>.
§.§.§ The colouring objects:
Depending on the tensor ideal in question, the colouring objects are as follows:
* []:
By <Ref> (<ref>), colouring link components by
[[]] is sufficient for knowing the invariants for arbitrary colours in []. Hence in this case we restrict ourselves to the four simple objects X_0^±, X_1^± of [].
* [[]]:
We know the invariants for P ∈[[]] once we know the
invariants for all projective indecomposables, i.e. for the projective covers of
simple objects.
* AH:
We will consider a family
of objects P_M ∈ AH indexed by M ∈ Mat_2(ℂ).
The modules P_M and P_{M'} always have the same class in the Grothendieck ring of [],
but are isomorphic only if M = M', see <Ref> for their
construction and structure.
The link invariants obtained from these input data are listed in Table <ref>. The explicit computations of these invariants can be found in
<Ref>.
Let us collect some observations about the results in that table. We order the discussion by column, i.e. by the choice of tensor ideal.
* []:
The simple objects X_0^± generate a copy of the category of super-vector spaces in []. Since that category is symmetric, the invariant can only detect the parity of X_0^±, but none of the topology of the link. By <Ref>, any invariant involving a ribbon coloured by the simple objects X_1^± is zero, as these are projective.
* [[]]:
Consider the results for a fixed choice of the ribbon category, i.e. for fixed N and
β.
If N is odd, we see that for fixed choice of colour P_0^±, the invariant
can distinguish all the framed unknots nX and all the torus knots mX.
By contrast, for a semisimple ribbon category, these invariants would always be periodic[
This follows from the fact that in a (finitely) semisimple ribbon category, for each object X, the twist θ_X has finite order (and hence also the double braiding has finite order), see <cit.> – that theorem assumes modularity, but the part of the proof concerning the order of the twist does not use that assumption. Alternatively, one can specialise the statement for finite braided tensor categories in <cit.> to the semisimple case.
]
in n and m.
For even N and β = ± 1, can not distinguish the framed
unknot from its mirror knot, and ditto for the torus knots.
* AH:
This is the most interesting case.
First note that here N=2 and we just saw that for the choice β = ± 1, the
projective ideal could not tell apart a knot and its mirror for unknots
and torus knots.
For the intermediate ideal, these can now be distinguished even for β = ± 1.
More remarkably, for the intermediate ideal the modified trace can distinguish a
continuum of representations in AH.
For example, [[^A]] can detect the value
λ ∈ ℂ for the choice
M = \operatorname{diag}(λ, 1).
On the other hand, this example is somewhat special in the sense that link invariants reduce to the knot invariant of the component labelled by P_ times the quantum dimensions of the other labels due to a degeneracy explained in <Ref>.
There are a number of simple consistency checks for the expressions given in Table <ref>:
* The identities listed in (<ref>) are satisfied. For the last identity in (<ref>) one needs to know the tensor products. For example X_1^ν ⊗ X_1^ρ ≅ P_0^{νρ}, see <Ref>.
* In the middle column for the framed Hopf link, when both labels are projective, one can choose which one to cut and evaluate the modified trace on. Combining this with <Ref> and the identity [P_0^ν] = 2^{2N-1} ( [X_0^+] + [X_0^-] ) in the Grothendieck ring (see <Ref>) gives
(P_0^ν,aX_1^ρ,b)
= 2^{2N-1} (X_1^ρ,bX_0^+,a)
+ 2^{2N-1} (X_1^ρ,bX_0^-,a) ,
which is indeed the case in Table <ref>.
Assume that we have a family X_λ of mutually non-isomorphic objects in parametrised by λ ∈ ℂ, such that the Grothendieck classes [X_λ] are independent of λ.
Let L_λ be a link with one component coloured by X_λ and all other components (if there are any) coloured in an arbitrary but fixed way. Then:
* =: By <Ref> (<ref>) the invariant [](L_λ) depends on the colouring
only through the class in the Grothendieck ring, and is therefore independent of λ.
* =:
We first note that the X_λ cannot all be projective. Indeed, otherwise they would all be direct sums of finitely many projective indecomposables, and there are only finitely many choices with fixed Grothendieck class. Thus, for [](L_λ) to be defined, L_λ must have at least two components, one of which is coloured by X_λ, and one by a projective object. But then by <Ref> again only the Grothendieck class of X_λ is relevant and so [](L_λ) is independent of λ.
Thus in order to have a chance to see the parameter λ, we need the X_λ to all be in an ideal with ⊊⊊. Case (3) above illustrates that then it can indeed be possible to distinguish such a continuum of objects with fixed Grothendieck class.
§ INVARIANTS OF CLOSED 3-MANIFOLDS
In this section we review the construction of invariants of closed connected oriented
3-manifolds with so-called bichrome graphs inside of them <cit.>, and provide
the results of an example computation for lens spaces.
The construction takes as input datum a twist non-degenerate and unimodular finite ribbon
category.
§.§ Twist non-degeneracy, unimodularity, and factorisability
Recall the definition of ends and coends from e.g. <cit.>.
In a finite tensor category , the coend
𝓛 = ∫^{X ∈ } X^∨ ⊗ X
with dinatural transformation j_X : X^∨ ⊗ X → 𝓛
exists (see <cit.> and <cit.>), and is a coalgebra in <cit.>.
Similarly, the end 𝒜 = ∫_{X ∈ } X ⊗ X^∨ exists and is an algebra;
denote its universal dinatural transformation by
j^{𝒜}_X : 𝒜 → X ⊗ X^∨.
If is braided, then both and can be given the structure of a
Hopf algebra,
but we will only use the structure maps of :
mult.: → unit: →
comult.: → counit: → .
The dual carries the natural structure of the dual Hopf algebra, and
is in fact isomorphic to as a Hopf algebra.
There is also a Hopf pairing
ω_𝓛 : 𝓛 ⊗ 𝓛 → 𝟏 ,
which means that the morphism 𝓛 → 𝓛^∨ built using ω_𝓛 is a morphism of Hopf algebras.
See <cit.> for a detailed review using the present conventions.
A finite tensor category is called unimodular if the projective cover of the tensor unit is self-dual, or equivalently if the Hom-space → is one-dimensional (rather than zero-dimensional).
Let be a unimodular braided finite tensor category. Then 𝓛 admits a
two-sided integral with trivial object of integrals. This means there is a unique (up to scalar) non-zero morphism Λ ∈ ( 𝟏, 𝓛 ) satisfying
μ_𝓛 ∘ (Λ ⊗ 𝕀_𝓛) = Λ ∘ ε_𝓛 = μ_𝓛 ∘ (𝕀_𝓛 ⊗ Λ) ,
see <cit.> or <cit.> for a proof, and
e.g. <cit.> for more on integrals.
Dually to its integral, the Hopf algebra 𝓛 also admits a cointegral Λ^{c} ∈ ( 𝓛, 𝟏 ), determined up to scalar by
(Λ^{c} ⊗ 𝕀_𝓛) ∘ Δ_𝓛
= η_𝓛 ∘ Λ^{c}
= (𝕀_𝓛 ⊗ Λ^{c}) ∘ Δ_𝓛 .
We can and will normalise Λ^{c} such that
Λ^{c} ∘ Λ = 𝕀_{𝟏} .
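As a toy analogue one level down, for an honest Hopf algebra rather than the coend 𝓛: for a group algebra k[G] the integral is the sum of all group elements and the cointegral is the delta function at the identity, normalised so that the cointegral evaluates to 1 on the integral. A minimal check (our own sketch, not from the source) for G = ℤ/4:

```python
from collections import Counter

n = 4                                    # toy Hopf algebra k[Z/4]
Lam = Counter({g: 1 for g in range(n)})  # integral: sum of all group elements

def act(h, x):
    """Left multiplication by the basis element h on k[Z/4]."""
    return Counter({(h + g) % n: c for g, c in x.items()})

# two-sided integral property: h . Lam = eps(h) Lam, with eps(h) = 1
assert all(act(h, Lam) == Lam for h in range(n))

# cointegral: the coefficient of the unit, normalised so that lam(Lam) = 1
lam = lambda x: x[0]
assert lam(Lam) == 1
```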
Define the morphism 𝒬 : 𝓛 ⊗ 𝓛 → 𝓛 ⊗ 𝓛 via the dinatural transformation
( j_X ⊗ j_Y ) ∘ ( 𝕀_{X^∨} ⊗ ( c_{Y^∨ ⊗ Y, X} ∘ c_{X, Y^∨ ⊗ Y} ) ⊗ 𝕀_Y ) : X^∨ ⊗ X ⊗ Y^∨ ⊗ Y ⟶ 𝓛 ⊗ 𝓛 .
In terms of 𝒬, the Hopf pairing is given by ω_𝓛 = (ε_𝓛 ⊗ ε_𝓛) ∘ 𝒬. We define the morphism 𝒮_𝓛 : 𝓛 → 𝓛 as
𝒮_𝓛 = (ε_𝓛 ⊗ 𝕀_𝓛) ∘ 𝒬 ∘ (Λ ⊗ 𝕀_𝓛) .
If 𝒮_𝓛 is invertible, the unimodular braided finite tensor category is called factorisable. Equivalently, is factorisable iff ω_𝓛 in (<ref>) is non-degenerate, i.e. if the induced morphism of Hopf algebras is an isomorphism 𝓛 ≅ 𝓛^∨.
To see that these two conditions are indeed equivalent, one can use the identity 𝒮_𝓛 = (ω_𝓛 ⊗ 𝕀_𝓛) ∘ (𝕀_𝓛 ⊗ Δ_𝓛) ∘ (𝕀_𝓛 ⊗ Λ), which is immediate from the definitions, together with non-degeneracy of the copairing Δ_𝓛 ∘ Λ
(see e.g. <cit.> for the latter).
Now let in addition be ribbon with ribbon twist θ.
Let 𝒯_𝓛 be the unique endomorphism of 𝓛 such that
𝒯_𝓛 ∘ j_X = j_X ∘ ( 𝕀_{X^∨} ⊗ θ_X )
for all X ∈ . 𝒯_𝓛 is invertible, and
it determines the stabilisation coefficients Δ_± ∈ k of
via
Λ^{c} ∘ 𝒯^{± 1}_𝓛 ∘ Λ
=
Δ_± · 𝕀_{𝟏} .
We call twist non-degenerate if the numbers Δ_± are non-zero.
By postcomposition with 𝒮_𝓛, 𝒯_𝓛 we obtain linear endomorphisms of ( 𝟏, 𝓛 ), which will be useful to express the lens space invariants in <Ref>. For the explicit quasi-Hopf algebra computations it will be easier to have linear endomorphisms of (𝕀_) instead. To relate the two,
recall from e.g. <cit.> that for a unimodular finite tensor
category , the linear maps
(𝕀_) \xrightarrow{\;ψ\;} ( 𝓛, 𝟏 ) \xrightarrow{\;ρ\;} ( 𝟏, 𝓛 )
,
given by
ψ(κ) ∘ j_X = \widetilde{ev}_X ∘ (𝕀_{X^∨} ⊗ κ_X) ,
ρ(g) = (g ⊗ 𝕀_𝓛) ∘ Δ_𝓛 ∘ Λ ,
are isomorphisms.
By pullback of 𝒮_𝓛 ∘ (-), 𝒯_𝓛 ∘ (-) along ρ ∘ ψ we get linear endomorphisms of (𝕀_), which we denote by 𝒮, 𝒯. Explicitly, for α ∈ (𝕀_),
ρ(ψ(𝒮(α))) = 𝒮_𝓛 ∘ ρ(ψ(α)) ,
and similarly for 𝒯.
A modular tensor category is a factorisable ribbon finite tensor category.
According to <cit.>,
a unimodular ribbon finite tensor category
is modular if and only if there
exists a non-zero number ζ ∈ k^{×} such that the
integral and cointegral of 𝓛 are related via the Hopf pairing as
ω_𝓛 ∘ ( Λ ⊗ 𝕀_𝓛 )
= ζ · Λ^{c} .
The number ζ is called the modularity parameter.
It is further shown in <cit.> that Δ_+ Δ_- = ζ
holds in a modular tensor category.
Conversely, by <cit.> a modular tensor category is automatically
unimodular and twist non-degenerate.
Let be a modular tensor category.
* One can constrain the normalisation freedom for Λ
by imposing
ω_𝓛 ∘ ( Λ ⊗ Λ ) = 𝕀_{𝟏} .
This fixes Λ up to a sign, and together with Λ^{c} ∘ Λ =
𝕀_{𝟏} implies ζ = 1.
Note, however, that this is not the usual normalisation used for modular fusion
categories as in <cit.>, c.f. <cit.>.
* Recall from e.g. <cit.> that the modular
group acts projectively on (, ), the S- and T-generators
acting by composition with _ and _ as given in
(<ref>) and (<ref>).
Correspondingly, _ and _ provide a projective
-action on (𝕀_).
* For any V ∈ , the internal character is the morphism χ_V =
j_V ∘ \widetilde{coev}_V in ( 𝟏, 𝓛 )
<cit.>, and we denote by ϕ_V = (ρ ∘ ψ)^{-1}(χ_V) the natural transformation corresponding to χ_V under the
isomorphism (<ref>).
For example, χ_{𝟏} = η_𝓛 is by definition the unit of the Hopf algebra
𝓛, and using the normalisation (<ref>) one finds
ρ(Λ^{c}) = χ_{𝟏}. Thus ϕ_{𝟏} = ψ^{-1}(Λ^{c}).
As shown in e.g. <cit.>, the S-transformation 𝒮 acting on
ϕ_V yields the natural transformation
𝒮(ϕ_V)_X = [string diagram: the open Hopf link, a V-coloured loop encircling the X-strand] ,
which is also called the open Hopf link operator <cit.>.
* By <cit.>, the non-degenerate modified trace on
can be chosen such that
_((ϕ_)_) = 1.
We remark that (ϕ_)_ factors through , see <cit.>.
§.§ Bichrome graphs and their invariants
We now recall the category of partially -coloured bichrome ribbon graphs from <cit.>.
This is the category whose objects are the same as those of
, see <Ref>, but morphisms
are now isotopy classes of bichrome graphs.
These are ribbon graphs which have
* two types of edges:
blue edges labelled by objects in ;
and red edges which carry no label
* two types of coupons:
blue coupons coloured by morphisms in , such that all incident edges
are blue and the labels of coupons and edges are compatible;
bichrome coupons, which are unlabelled and only come in two types [string diagrams: a coupon where red strands end, and a coupon where red strands begin].
It is clear that is a subcategory of
:
it has the same objects, and its morphisms are precisely the bichrome graphs which
are purely blue.
Moreover, there exists an extension → of the Reshetikhin-Turaev functor , called the
Lyubashenko-Reshetikhin-Turaev functor in <cit.>.
The subscript signifies that the integral is used in the definition of
:
what the functor does is, essentially, to colour the red parts of the graph by
.
We will now describe this functor algorithmically and in an example, more
details and proofs of the properties stated can all be found in <cit.>.
On objects, agrees with as given in (<ref>).
For its action on morphisms, consider as an example the bichrome graph
T = [picture of a bichrome graph with two red cycles]
A cycle of T is a (maximal) subset of red edges in T that belong to the same
component in the red graph obtained by replacing the bichrome coupons by cups and caps,
as appropriate, and throwing away the blue parts.
Let n be the number of cycles in T, and let s and t be the source and target
object of (T), respectively.
In our running example: n = 2, s =, t =.
The morphism (T) ∈( s, t ) is obtained using the following recipe.
* For each cycle c of T, pick an edge, then `bend' it down and `cut' it
open so that the result is a
so-called n-bottom graph T.
In our example one obtains a 2-bottom graph:
[picture: the 2-bottom graph obtained from T by cutting one red edge in each cycle and bending it down]
The new graph T has the property that when closing the cut red edges again with cups,
one recovers the original graph T.
* Change the colour of the red part of T to blue,
replacing a bichrome coupon by the appropriate universal dinatural transformation j or j^.
The formerly red part then gives rise to a transformation η_T
which is dinatural in each of the n pairs of consecutive red strands.
Namely (η_T)_{X,Y} = (T_{X,Y}), where T_{X,Y} is the purely blue graph with source X^∨ ⊗ X ⊗ Y^∨ ⊗ Y in which the former bichrome coupons are replaced by the dinatural maps j_Y and j^{𝒜}_Y [string diagram], for all X, Y ∈ .
* Use the universal property and the Fubini theorem for coends to obtain from
η_T
a morphism f_{T} ∈ ( 𝓛^{⊗ n} ⊗ s, t ), characterised by
f_{T} ∘ ( j_X ⊗ j_Y ⊗ 𝕀_s ) = (η_T)_{X,Y} [string diagrams].
* Finally, define (T) by inserting n copies of the integral,
(T) =
f_{T} ∘ ( Λ^{⊗ n} ⊗ 𝕀_s )
: s ⟶ t
.
In <cit.> it is shown that this is a well-defined construction
which yields a ribbon functor.
Fix now an ideal in and a modified trace on
.
It is straightforward to generalise the notions of closed and admissible
-coloured ribbon graphs to bichrome graphs, and the concept of a (non-unique)
cutting presentation of an admissible bichrome graph also easily translates.
This leads to what is called the renormalised invariant of admissible closed
bichrome graphs in <cit.>:
Let T be an -admissible
closed bichrome graph.
Then the number
(T) := _X ( (T_X) )
for some cutting presentation T_X of T
is well-defined, and in particular an invariant of the isotopy class of T.
The proofs of <cit.> do not rely on being an
integral for , and so one could have likewise defined invariants (of
bichrome graphs) using any morphism in (, ).
However, morphisms that lead to well-defined manifold invariants have to satisfy extra conditions arising from the Kirby moves.
The integral satisfies these conditions, and more general such morphisms were investigated in <cit.> under the name Kirby elements.
The invariants and are both defined on purely blue closed admissible bichrome
graphs, but they are in general different.
To see this, let = []
and let be the unique non-degenerate modified trace on from
<Ref> (3), with normalisation _
(∘) = 1.[
In case is in addition modular, one can choose to satisfy (ϕ_)_ = ∘ <cit.>, and the normalisation of the modified trace then agrees with the one given in <Ref> (4).
]
Then, unless is semisimple, (T) is different from
(T):
the bichrome graph T given by the trace (in
) of ∘ satisfies
(T) = _(∘) = 1
and (T) = ∘ =
0.
More generally, we record the following
prop:LRTfunctor_on_admissibles_is_zero for later use,
which is a direct consequence of <Ref>:
Let T be a closed bichrome graph containing an edge coloured by an object in a proper tensor ideal. Then (T) = 0.
§.§ Definition and properties of the manifold invariant
Throughout, all manifolds will be oriented.
Let be twist non-degenerate and unimodular.
In order to define invariants of 3-manifolds with embedded -coloured bichrome
graphs, we first recall some notation following <cit.>.
For a framed link L in the 3-sphere S^3 we denote the manifold resulting from
surgery on L by M_L.
The number of components of the link is ℓ(L), and its signature is σ(L).
Let M be a closed connected 3-manifold, containing a closed bichrome graph T.
If M ≅ M_L, then we say that L is a surgery link for M;
we denote by T ⊂ M_L also the pullback of T along the homeomorphism, and by
L ∪ T ⊂ S^3 the corresponding bichrome graph in S^3, regarding L as red.
Since is twist non-degenerate, we can choose invertible numbers 𝒟,
δ ∈ k with
𝒟^2 = Δ_+ Δ_- , δ = Δ_+ / 𝒟 = 𝒟 / Δ_- .
Let be a unimodular and twist non-degenerate finite ribbon
tensor category, and let
be an ideal of with modified trace .
If M ≅ M_L is a closed connected 3-manifold containing an -admissible
bichrome
ribbon graph T, then
(M,T)
= 𝒟^{-1-ℓ(L)} δ^{-σ(L)} ( L ∪ T )
is a topological invariant of the pair (M,T).
Following <cit.> call the renormalised Lyubashenko invariant
of admissible closed 3-manifolds.
* For H a ribbon Hopf algebra, = the category of finite-dimensional
left H-modules, and the projective ideal in , the invariant
(M,T) was first considered in <cit.>, see <cit.> for the equivalence between <cit.> and <cit.> in the Hopf algebra case.
* The notation is chosen because of the dependence of the invariant on both the integral Λ
and the modified trace t.
In fact, it also implicitly depends on a choice of sign for 𝒟.
For if we change Λ → Λ' = xΛ for some x ∈ k^{×},
then Δ'_± = x Δ_±, and consequently 𝒟' = ± x 𝒟.
If we impose the condition 𝒟' = x𝒟, i.e. Δ'_+/𝒟' = Δ_+/𝒟, then δ' = δ and we have
L'_{xΛ,\, y t + z t'}
= \tfrac{1}{x} ( y\, L'_{Λ, t} + z\, L'_{Λ, t'} )
for all x, y, z ∈ ℂ and modified traces t and t'.
The statement of <Ref> generalises to the renormalised Lyubashenko invariant, and for later reference let us restate it here in full:
Suppose the bichrome graph T^B_X in M has an edge labelled by X ∈ and a disjointly embedded knot coloured with B ∈ (which may link non-trivially with the remaining part of the graph T^B_X).
Then
(M, T^B_X)
= ∑_U ∈BU· (M, T^U_X)
.
Lyubashenko's original invariant <cit.>—or rather its extension allowing
for -coloured ribbon graphs inside the 3-manifolds—is the following special
case.
Let everything be as in <Ref>, but specialise to the ideal =.
By <Ref> (2), without loss of generality the modified
trace is the categorical trace .
We call
(M,T)
= (M,T)
the Lyubashenko invariant of the pair (M,T).
In case is semisimple, it is immediate from <cit.> that
coincides with the Reshetikhin-Turaev invariant
as presented for example in <cit.>.
For not semisimple, <Ref> implies that (M, T) vanishes whenever T has an edge coloured by
an object in a proper ideal.
From <Ref> we see that if T is a link (i.e. there are no coupons),
(M, T) factors through with respect to all components of T simultaneously.
The invariants from <Ref> are related to the Lyubashenko
invariant as in the following proposition, which is an immediate consequence of
<cit.>.[
There is a typo in <cit.>: the right hand side should have an extra factor of , i.e. the identity should read, in the present notation, (M # M',T')
= (M)
(M', T').
Also, <cit.> assumes that there is no ribbon graph in M, but the same proof works in the present situation where M contains T.
]
Let M be a closed connected 3-manifold, containing an admissible bichrome graph
T.
* Suppose T = T⊔ T_, where T_ is
admissible and contained within an open ball in M not intersecting
T.
Then
(M,T)
= (M, T)
(S^3, T_)
.
* Suppose that there is an admissible bichrome graph T_, normalised
such that (S^3, T_) = 1.
Then
(M,T)
= (M # S^3, T ⊔ T_)
.
Combining <Ref> (1) with <Ref> gives:
Suppose the bichrome graph T inside the 3-manifold M can be written as T_1
⊔ T_2, where T_1 contains an edge coloured by an object in any proper tensor ideal, and T_2 is admissible and
contained within an open ball in M that does not intersect T_1.
Then (M,T) = 0.
As another corollary to the above proposition, consider a situation where the ribbon
graph T in M can be written as T = T' ∪ T”, such that T' is disjoint from
T” but not necessarily contained in a 3-ball disjoint from T”.
Assume that T' is an embedded blue loop labelled by X ∈ with one coupon
f : X → X which factors through .
We have (M, T) = _X(f) (M, T”).
As f factors through , we can replace f by two coupons, one from X to and one from to X. This makes T' contractible, and we can disentangle it from T”. Replacing the two new coupons again by f, we end up with an unknot U_f with coupon f embedded in a 3-ball disjoint from T”.
One can now apply <Ref> (1) with T_ = U_f and T = T”.
We finish this section by showing that Lyubashenko's invariant detects semisimplicity
of in the sense that the invariant of S^2 × S^1 is non-zero if and only
if is semisimple and has non-vanishing global dimension.
Recall that the global dimension of a spherical fusion
category is
the sum of the squares of the quantum dimensions of the simple objects, i.e.
\operatorname{Dim} = ∑_{U} ( \dim U )^2 ,
where U runs over the simple objects.
Over an algebraically closed field of characteristic 0, the global dimension
is automatically non-zero <cit.>.
Let be a unimodular and twist non-degenerate finite ribbon tensor category.
Then (S^2 × S^1, ∅) ≠ 0 if and only if is semisimple
with non-zero global dimension.
The surgery presentation of S^2 × S^1 is a zero-framed unknot. From this one easily computes
(S^2 × S^1, ∅)
= 𝒟^{-2} · ( ε_𝓛 ∘ Λ ) ,
see e.g. <cit.>.
By <cit.>, is a cointegral (in the sense of
<cit.>) for the Hopf monad given by tensoring with .
We denote this monad by T.
The Maschke Theorem for Hopf monads <cit.> says that T is
semisimple iff ∘≠ 0.
In our setting, semisimplicity of T is equivalent to semisimplicity of the
category of modules of T, which is known to be isomorphic to the Drinfeld centre
[Z]() of <cit.>.
This means that ∘≠ 0 iff [Z]() is semisimple,
and therefore our desired result follows from the next
prop:c_FuCat_globDim_iff_center_ssi.
For a ribbon finite tensor category , the following are equivalent:
* is semisimple and ≠ 0.
* The centre [Z]() is semisimple.
(1)⇒ (2) is shown in <cit.>.
For (2)⇒ (1), we first show that is semisimple.
Since is braided, it embeds into [Z]() via a fully faithful monoidal
functor.
Explicitly, the embedding sends an object X ∈
to ( X, c_{X, -} ) with c the braiding in .
If [Z]() is semisimple, therefore, is equivalent to a
full (replete) abelian subcategory of a semisimple abelian category and hence itself
semisimple abelian.
In particular, is a spherical fusion category.
This, together with the assumption that [Z]() is semisimple, implies that ≠ 0 by <cit.>.
Suppose that is non-semisimple.
As a consequence of
<Ref> we get that
the Lyubashenko invariant of closed 3-manifolds can not be extended to a (rigid) TFT, see e.g. the introduction of <cit.>.
The reason is that in a rigid TFT, the invariant of S^2 × S^1 gives the dimension of the state space of S^2. If this is zero, the TFT functor is zero on all non-empty bordisms as one can always write such a bordism as a composition with a 3-ball, seen as a bordism ∅→ S^2. In particular, it would be zero on S^3, but by construction (S^3, ∅) = ^-1≠ 0.
§.§ Example: Lens spaces
For this section let us fix two coprime positive integers p,q.
The lens space 𝔏(p,q) is defined as the quotient of S^3 = { (z,w) ∈ ℂ^2 | |z|^2 + |w|^2 = 1 } by the free ℤ/pℤ action
k.(z,w) = ( e^{2π i k/p} z , e^{2π i k q/p} w ) .
Since S^3 is simply connected, it follows
that the fundamental group of 𝔏(p,q) is ℤ/pℤ, and so the parameter p is an invariant of 𝔏(p,q). The parameter q on the other hand is not, as the presentation above shows 𝔏(p,q) = 𝔏(p,q+p).
See e.g. <cit.> for more on lens spaces.
A surgery presentation of 𝔏(p,q) can be given as follows, see e.g. <cit.> or <cit.>.[
One obtains an isotopic surgery link if one exchanges over- and underbraidings, so
the surgery descriptions of 𝔏(p, q) in <cit.> and
<cit.> are the same.
Similarly, one can reverse the order of a_1,…,a_n.
]
Fix a (generalised) continued fraction expression
p/q
= a_n - \cfrac{1}{a_{n-1} - \cfrac{1}{\;\ddots\; - \cfrac{1}{a_2 - \cfrac{1}{a_1}}}}
=: [a_n; a_{n-1}, …, a_1]
, \qquad a_i ∈ ℤ
.
Then we have the surgery presentation
𝔏(p, q) =
[surgery diagram: a chain of n unknots in S^3 with framings a_1, a_2, …, a_{n-1}, a_n, consecutive components linked once]
in S^3.
From now on, we will require the continued fraction expression to satisfy the following
Assumption: \frac{p_i}{q_i} := [a_i; a_{i-1},
…, a_1] > 0 for i = 1, …, n .
This assumption can always be satisfied. For example, choose a_n to be the ceiling of p/q (the smallest integer ≥ p/q). If a_n = p/q one is done; otherwise, in the next step one starts from (a_n - p/q)^{-1} > 0, takes a_{n-1} to be its ceiling, and so on.
We will show in the appendix (<Ref>) that under the above assumption, the signature of the surgery link in (<ref>) is
σ = σ(𝔏(p, q)) = n .
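The following sketch is ours, not from the appendix; numpy is assumed. It computes the ceiling-based continued fraction expansion just described, builds the linking matrix of the chain (framings a_i on the diagonal, consecutive components linking once), and confirms numerically that |det| = p (consistent with |H_1(𝔏(p,q))| = p) and that the signature equals n, as claimed.

```python
import math
import numpy as np
from fractions import Fraction

def neg_cont_frac(p, q):
    """p/q = [a_n; a_{n-1}, ..., a_1] with every partial quotient a ceiling,
    which guarantees the Assumption p_i/q_i > 0."""
    a, r = [], Fraction(p, q)
    while True:
        c = math.ceil(r)
        a.append(c)                     # a_n is produced first
        if c == r:
            return a[::-1]              # return (a_1, ..., a_n)
        r = 1 / (c - r)

def chain_linking_matrix(a):
    """Linking matrix of the surgery chain: framings a_i on the diagonal,
    consecutive components linking once."""
    A = np.diag(np.array(a, dtype=float))
    for i in range(len(a) - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

p, q = 7, 3
a = neg_cont_frac(p, q)                 # [2, 2, 3], since 7/3 = [3; 2, 2]
A = chain_linking_matrix(a)
eigs = np.linalg.eigvalsh(A)
assert round(abs(np.linalg.det(A))) == p                     # |H_1| = p
assert int((eigs > 0).sum() - (eigs < 0).sum()) == len(a)    # sigma = n
```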
With these data fixed, we will consider two types of admissible bichrome graphs
embedded in the lens space 𝔏(p, q), corresponding to whether or not the
graph is contained within a ball.
More precisely, let α∈(𝕀_) and P ∈.
Define the bichrome graphs
𝔏_*(α, P)
= [the surgery chain with framings a_1, …, a_n, together with a blue P-coloured loop carrying a coupon α_P, contained in a ball disjoint from the chain]
and
𝔏_∘(α, P)
= [the surgery chain with framings a_1, …, a_n, with the blue P-coloured loop (carrying the coupon α_P) wrapping around one of the red chain components]
in S^3.
Then we consider as manifold the lens space with embedded blue ribbon graph obtained by
surgery on the red part of the respective link, and denote it by the same symbol as the
link.
Note that in 𝔏_*(α, P), the blue graph is embedded along a
contractible cycle, and that (𝔏_*(α, T)) = (𝔏_∘(α, T)) for any transparent object T.
Let be unimodular and twist-nondegenerate, and recall
the definition of _, _ : → from (<ref>) and (<ref>).
To state the resulting invariants of 𝔏_*/∘(α, P), let us define
the morphism
f(a) =
(
∏_{i=n}^{1}
𝒯_𝓛^{a_i} ∘ 𝒮_𝓛 ) (Λ)
=
(
𝒯_𝓛^{a_n} ∘ 𝒮_𝓛 ∘ … ∘ 𝒯_𝓛^{a_1} ∘ 𝒮_𝓛 ) (Λ)
∈ ( 𝟏, 𝓛 )
,
where we abbreviate a = (a_1, …, a_n).
We also need the natural transformation
Q : 𝓛 ⊗ 𝕀_ ⇒ 𝓛 ⊗ 𝕀_ whose components Q_V are determined by the dinatural transformation
Q_V ∘ ( j_X ⊗ 𝕀_V ) = ( j_X ⊗ 𝕀_V ) ∘ ( 𝕀_{X^∨} ⊗ ( c_{V, X} ∘ c_{X, V} ) )
: X^∨ ⊗ X ⊗ V ⟶ 𝓛 ⊗ V
for V ∈ .
Let be unimodular and twist-nondegenerate,
⊂ a tensor ideal with modified trace , P ∈ and α∈(𝕀_).
We have
(𝔏_*(α, P))
= δ^{-σ} 𝒟^{-1-n} · t_P(α_P)
· ( Λ^{c} ∘ f(a) ) ,
(𝔏_∘(α, P))
=
δ^{-σ} 𝒟^{-1-n} · t_P ( α_P ∘ Q_P ∘ ( f(a) ⊗ 𝕀_P ) )
.
Let us start with 𝔏_∘(α, P).
To compute ,
by the procedure in <Ref>
we need to evaluate the
n-dinatural transformation
[string diagram: the surgery chain with framings a_1, …, a_n, cut open, together with the P-strand]
and then compose with Λ^{⊗ n} ⊗ 𝕀_P.
Here we have disregarded the coupon α_P, as we may just reinsert it on top
once we are done.
Now one finds that this dinatural transformation corresponds to the morphism
[string diagram built from 𝒯^{a_1}, …, 𝒯^{a_n}, copies of 𝒬 acting on consecutive 𝓛-factors, and Q_P] : 𝓛^{⊗ n} ⊗ P → P ,
where 𝒬 was defined in (<ref>).
From the explicit expression for _ in (<ref>) it is easy to see that
_∘ =.
Thus, precomposing the above morphism with ^ n𝕀_P and replacing the first factor of Λ with _∘
yields Q_P ∘ (f(a) 𝕀_P).
This establishes the expression for (𝔏_∘(α, P)).
In order to obtain (𝔏_*(α, P)) one simply repeats the
above computation without the double braiding between and P in
(<ref>).
This amounts to replacing Q_P in (<ref>) by
⊗𝕀_P.
By <Ref> we have
(𝔏_*(α, P))
= t_P(α_P) · (𝔏(p, q))
.
Comparing to <Ref>, we read off
(𝔏(p, q))
= δ^{-σ} 𝒟^{-1-n} · Λ^{c} ∘ f(a)
,
in agreement with
<cit.>,
where this invariant was first computed. Strictly speaking, we still need to show that _P(α_P) ≠ 0 for some choice of , P and α. This is the case e.g. for =, P = and any α with α_ = 𝕀_, or for =, P = and the unique α∈(𝕀_) which sends the top of to its socle, and which is zero on the projective covers of the other simples.
We end this section by computing the invariants in <Ref> in the example = [], the representations of the symplectic fermion
quasi-Hopf algebra = (N, β) already mentioned in <Ref>, and which will be described in detail in <Ref>.
In this example, is modular (and hence twist-nondegenerate, see the sentence before <Ref>),
and we use the Lyubashenko integral with normalisation (<ref>);
this also fixes a canonical
normalisation for the modified trace,
see <Ref> (4).
§.§.§ Categorical trace
Let us start with the case where the ideal is = and we use the categorical trace . By <Ref>, in this case we get the original Lyubashenko invariant. We show in <Ref> that
(𝔏(p, q))
=
p^N
.
Including blue ribbons as in 𝔏_*/∘(α, X) does not add anything new. Indeed, by <Ref> it is enough to consider X ∈ simple. There are four simple objects in [], namely X_0^± and X_1^±. Of these, X_1^± are projective and so the invariant is zero by <Ref> (where we choose T_2 = ∅). The object X_0^+ is the tensor unit, and since the Grothendieck class of its projective cover is [P_0^+] = 2^{2N-1} ( [X_0^+]+[X_0^-] ) (see <Ref>), the invariant of X_0^- must be minus that of X_0^+. Altogether, for x = *,∘,
(𝔏_x(𝕀, X_0^±)) = ± p^N
, (𝔏_x(𝕀, X_1^±))= 0
.
In particular, cannot detect whether or not the blue ribbon wraps a nontrivial cycle in 𝔏(p, q).
The result (<ref>) suggests that for symplectic fermions, the Lyubashenko invariant of a 3-manifold is just a classical invariant, namely
the order of the 1^st homology group H_1(M) of the manifold M, in power of the “rank” of the category, N in our case.
We conjecture that for all closed 3-manifolds M,
(M) = \begin{cases} 0 & ; \; H_1(M) \text{ infinite} \\ |H_1(M)|^N & ; \text{ else} \end{cases}
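For surgery presentations the conjecture is easy to evaluate, since H_1(M_L) is presented by the linking matrix A of L: |H_1(M_L)| = |det A| when this is non-zero, and H_1 is infinite iff det A = 0. A sketch of the conjectured value (our own; numpy assumed):

```python
import numpy as np

def conjectured_L(linking_matrix, N):
    """Conjectured invariant for M = M_L: |H_1(M)|^N if finite, else 0,
    using |H_1(M_L)| = |det A| for the linking matrix A."""
    d = round(abs(np.linalg.det(np.asarray(linking_matrix, dtype=float))))
    return 0 if d == 0 else d ** N

print(conjectured_L([[0]], N=2))               # S^2 x S^1  ->  0
print(conjectured_L([[2, 1], [1, 2]], N=2))    # chain for L(3,2)  ->  3^2 = 9
```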
This fits into a series of results and conjectures on the relation between non-semisimple and semisimple invariants. The first instance was proven in <cit.>: the Hennings invariant for small quantum sl(2) for an odd order root of unity is proportional to the SO(3) Chern-Simons invariant at the corresponding level. The proportionality coefficient is |H_1(M)| if it is finite, and zero else. Related results and conjectures have been given in <cit.>.
While similar, our case of = [] is not directly covered by those previous observations. Indeed, it is the first higher rank example for manifolds M which are not integer homology spheres (i.e. for which |H^1(M)|>1).
The relevant semisimplification[
Let '_ss be the semisimplification obtained by dividing out negligible morphisms, see <cit.>, <cit.> and <cit.>. The projection →'_ss is a ribbon functor. In '_ss, indecomposable objects of non-zero dimension become simple objects <cit.>. This may lead to an uncountably infinite number of simples (this happens e.g. for [] with N ≥ 2), and we take _ss⊂'_ss to be the full subcategory tensor-generated by the images of the simple objects in under the projection.
]
of = [] is _ss = 𝖲𝖵𝖾𝖼𝗍. This category is not modular but still twist-nondegenerate. Hence it defines Reshetikhin-Turaev invariants
RT__ss(M)
of closed three-manifolds M.
With appropriate overall normalisation, for _ss = 𝖲𝖵𝖾𝖼𝗍 these are all equal to one.
Motivated by these examples,
it would be very interesting to see if the following general statement holds or not:
Let be a modular tensor category. Then its semisimplification _ss is twist-nondegenerate and the corresponding invariants are related as (M) = |H_1(M)|^n RT__ss(M)
for some positive integer number n if |H_1(M)| is finite. Otherwise (M) is zero.
§.§.§ Modified trace on projective ideal
Next we turn to the projective ideal =
[[]]. There is no need to consider 𝔏_*(α, P) as by (<ref>) the corresponding invariant is given by (𝔏(p, q)) times a topology-independent prefactor.
Recall from <Ref> that P_0^± denotes the projective cover of X_0^±.
In the next theorem, by |t| for t ∈ ℤ_2^N we mean the sum of the components of t.
We have
(𝔏_∘(𝕀, P_0^±))
= ± \tfrac{1}{2} ( c^0 + β^2 q^N )
,
where c^0 ∈ {0} ∪ { β^m | m ∈ ℤ }.
Furthermore, for t ∈ ℤ_2^N with |t| ≥ 1, there are α_t ∈ (𝕀_)
(independent of p,q)
such that
(𝔏_∘(α_t, P_0^±))
=
± (-1)^{|t|} β^2 \, 2^{-|t|-1} \, p^{|t|} \, q^{N-|t|} .
The arduous proof will be presented in <Ref> below, which in fact contains some additional invariants to those presented here.
We see that (𝔏_∘(𝕀, P_0^±)) and (𝔏_∘(α_t, P_0^±)) (for N ≥ 2 and 1 ≤ |t| ≤ N-1) are not invariant
under q ↦ q+p, and so are not invariants of the lens space 𝔏(p,q) alone but only of the lens space with
the embedded ribbon loop.
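To make the last point concrete, the following Python sketch (function name ours; we take the + sign and one admissible value of β²) evaluates the closed formula above and confirms that for 1 ≤ |t| ≤ N-1 it separates (p,q) from (p,q+p), even though 𝔏(p,q) ≅ 𝔏(p,q+p).

# Sketch (names ours): evaluate (-1)^{|t|} beta^2 2^{-|t|-1} p^{|t|} q^{N-|t|}
# and check its sensitivity to q -> q + p for 1 <= |t| <= N-1.
def inv_alpha_t(p, q, N, w, beta2=1j):
    # w = |t|; beta2 = beta^2, with beta^4 = (-1)^N (here N = 3, beta2 = i)
    return (-1)**w * beta2 * 2**(-w - 1) * p**w * q**(N - w)

p, q, N = 5, 2, 3
for w in range(1, N):
    assert inv_alpha_t(p, q, N, w) != inv_alpha_t(p, q + p, N, w)
print("the invariant distinguishes (p, q) from (p, q + p)")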
§.§.§ Pullback trace on intermediate ideal
Finally, we also consider the intermediate ideal
AH with pullback modified trace [^A],
as in the computation of link
invariants in <Ref>.
Let H = (2, β), and consider the Mat_2([C])-indexed
H-modules P_ already described in <Ref>.
Then
[^A]
(𝔏_∘(𝕀, P_))
= - 2
p q (1 + )
,
and there are α_j,l∈(𝕀_) (independent of p,q) with j, l = 1,2 such that
[^A] (𝔏_∘(
α_j, l, P_))
=
p^2
_j, l .
The above theorem is part of the more general statement in <Ref>
where also explicit expressions for the α_j, l are given.
Note that it is possible to recover the matrix M uniquely by computing all four
values of [^A] (𝔏_∘(
α_j, l, P_)),
thereby distinguishing all modules P_.
This ends the first part of the paper, in which we set out general properties of the non-semisimple link and three-manifold invariants, and presented some explicit results in the example of the symplectic fermion quasi-Hopf algebra.
In the second part of the paper we review the necessary background on quasi-Hopf algebras and provide the technical computations of the various invariants in the symplectic fermion example.
§ QUASI-HOPF ALGEBRAS, TENSOR IDEALS AND MODIFIED TRACES
This section contains our conventions for quasi-Hopf
algebras, as well as background material on pivotal structures, braidings, and ribbon structures. We discuss modified traces and explain in detail how to define modified traces on tensor ideals described by quasi-Hopf subalgebras.
§.§ Quasi-Hopf algebras and their modules
Our conventions for ribbon quasi-Hopf algebras follow <cit.>.
§.§.§ Quasi-Hopf algebras
An algebra H is a quasi-Hopf algebra if it comes equipped with elements ,
∈ H, ∈ H^ 3, and linear maps Δ: H → H
H, : H →, and S:H → H satisfying the axioms described in this
subsection.
We denote the unit of H by , or also by _H if it can be confused with the tensor unit.
Throughout <Ref> we will assume that H is a finite-dimensional k-vector space.
We employ the sumless Sweedler notation to write
Δ(h) = h1 h2
(Δ𝕀) (Δ(h))
= h1,1 h1,2 h2 ,
etc., for h ∈ H.
Similarly, for w∈
H^ 2 we write w = w_1 w_2,
again with implicit summation,
and w_21 = τ(w), where τ is the flip map in vector spaces.
This notation is extended to higher tensor powers in the obvious way.
The coproduct Δ and the counit are required to be algebra maps.
Furthermore, the counit makes the coproduct counital.
In Sweedler notation, these conditions read
(hg)1 (hg)2
= h1 g1 h2 g2, (hg) =
(h)(g) and ( h1 ) h2 = h = h1( h2 ) for h,g ∈ H.
The coassociator is invertible and we write =.
Further, it is normalised, i.e. (𝕀𝕀)() =, it makes the coproduct coassociative in the sense that
(Δ𝕀) (Δ(h)) · = · (𝕀Δ )
(Δ(h))
for all h∈ H, and it satisfies a cocycle condition
(Δ⊗𝕀⊗𝕀)()
· (𝕀⊗𝕀⊗Δ)()
= (⊗)
· (𝕀⊗Δ⊗𝕀)()
· (⊗) .
Note that our conventions for differ from those of e.g. <cit.> in that we use the inverse coassociator.
Finally, the antipode S is an anti-algebra map, and together with the evaluation and
coevaluation elements and it satisfies
S(h1) h2
= (h)
,
h1 S(h2)
= (h) ,
h∈ H,
and
S([1]_1) [1]_2 S([1]_3) =
, [1]_1 S([1]_2) [1]_3 = .
From the last two axioms it is easy to see that one may always rescale and
such that () = 1 = ().
We will also use the opposite multiplication μ^op = μ∘τ and
the coopposite comultiplication Δ^cop = τ∘Δ, where τ is again the flip map on vector spaces.
§.§.§ Finite tensor category structure of H-modules
A left H-module is a left module V over the algebra H, and the action of h ∈
H on v ∈ V is denoted by h.v.
We write for the category of finite-dimensional left H-modules.
It is a (non-strict) finite tensor category:
the monoidal product of two modules V, W is the vector space V W with
the diagonal action h.(v w) = h1.v h2.w, and
the associator of this monoidal structure is
U (V W) → (U V) W,
u v w ↦[1]_1 . u [1]_2 . v [1]_3 . w
.
Next we describe a rigid structure on .
To this end, denote by
V^* × V →,
(f,v) ↦⟨ f | v ⟩ = f(v)
the canonical pairing between a vector space V and its linear dual V^*.
An H-module V has both a left and a right dual.
They are given by the dual vector space V^*, and h∈ H acts on the
left dual V resp. the right dual V by
(h.f)(v) = ⟨ f | S(h) . v ⟩resp.
(h.f)(v) = ⟨ f | S(h) . v ⟩ ,
for v∈ V, f∈ V^*.
The corresponding evaluation is given by[
The notation for the (co)evaluation morphisms given here differs from <Ref>, as we will later fix a convention adapted to a given pivotal structure, see (<ref>) and (<ref>) below.
]
^L_V(f v)
=
⟨ f | . v ⟩resp.^R_V(v f)
= ⟨ f | S() . v ⟩ ,
and the coevaluation by
^L_V(1) = ∑_i=1^ V . v_i v^i
resp.^R_V(1) = ∑_i=1^ V v^i S() . v_i
,
where {v_i} is a basis of V with corresponding dual basis {v^i}.
That these four maps are indeed morphisms in the category is guaranteed by
(<ref>), and the zig-zag axioms for follow from the zig-zag
axioms (<ref>) for H.
§.§.§ Special elements and (co)integrals
A left integral for a quasi-Hopf algebra is an element c ∈ H satisfying hc
= (h) c for all h ∈ H.
In other words, c ∈(, H), regarding H as the left regular
representation.
One similarly defines right integrals.
It is well-known that, in the finite-dimensional case, left (right) integrals exist and
span a one-dimensional space.
A priori, left and right integrals are different;
by dimension considerations, if c is a left integral, then ch = (h) c for
all h ∈ H, where ∈ H^* is an algebra morphism called the
modulus of H.
Then a left integral is a right integral if and only if =, in which
case one says that H is unimodular.
The modulus may be identified with the one-dimensional socle of
<cit.>.
Thus H is unimodular if and only if is unimodular.
The dual notion of left/right cointegrals for H is more involved
<cit.>, since the (linear) dual of a quasi-Hopf algebra is not a quasi-Hopf
algebra in general.
Following <cit.>, define the elements
= [1]_1 S([1]_3) [1]_2,
= [1]_1 [1]_2 S([1]_3),
= S([1]_1)[1]_2 [1]_3,
= [1]_2 S([1]_1 ) [1]_3
,
which satisfy a number of useful identities, see e.g. <cit.> for a
list in our conventions and <cit.> for a representation in graphical
calculus.
With these, one may describe the Drinfeld twist of H, i.e. the
invertible element in H H implementing the natural isomorphism WV≅(V W) in , see
e.g. <cit.> for an explicit expression.
With the application to link and three-manifold invariants in mind, we will in the
following restrict ourselves to the case that H is unimodular.
In that case, a linear form ^r on H is a right cointegral, resp. ^l
a left cointegral, if and only if it satisfies
(^r 𝕀) ( VΔ(h) U) = ^r(h) resp.
(𝕀^l) ( VΔ(h) U ) = ^l(h)
for all h ∈ H, see <cit.> and <cit.> for a
definition in our conventions.
The elements U^, V^∈ H
H are defined as
U = (S S)(_21),
V = (SS)(_21 _21),
U = (SS)(_21 _21),
V = (S S)(_21) _21
.
§.§.§ Pivotal structure, braiding, and ribbon structure on
Here, we briefly review results on pivotal structures, braidings and ribbon twists. For more details we refer to <cit.> for the categorical structures, and to <cit.> for their quasi-Hopf algebra counterparts.
Recall that a pivotal structure on a rigid monoidal category is a monoidal
natural isomorphism 𝕀_().
A pivot for H is an invertible element ∈ H satisfying
S^2(h) = h
and
Δ() = · (S S)(_21) · ().
A pivotal quasi-Hopf algebra is a quasi-Hopf algebra together with chosen pivot.
There is a bijective correspondence
{pivotal structures
on }{pivots
for H
}
, ⟼
(^_H)(_H())
,
where ^_V is the canonical isomorphism V ≅ V^** of
vector spaces.
The inverse maps to the pivotal structure _V(v) = ^_V( . v).
A pivot satisfies () = 1 and S() =.
In the pivotal case we will choose the (co)evaluation maps in such that they are related by the pivotal structure. Our convention is to define the two-sided dual to be V. Accordingly we take the left (co)evaluation map from (<ref>), (<ref>) and we write
_V := ^L_V
, _V := ^L_V .
The right-duality on V is given by _V := ^L_V∘ (_V ⊗𝕀_V) and similar for _V. Explicitly,
_V(v f)
= ⟨ f | S() . v ⟩ , _V(1) = ∑_i=1^ V v^i S() . v_i
.
An R-matrix for H is an invertible element ∈ H H
satisfying
(Δ𝕀) ()
= _231_13_132_23,
(𝕀Δ) ()
= _312_13_213 R_12
and Δ(h) = Δ(h).
A quasi-triangular quasi-Hopf algebra is a quasi-Hopf algebra together with a
chosen R-matrix.
There is a bijective correspondence
{braidings
on }{
R-matrices
for H
}
, ⟼τ∘_H, H()
where again τ is the flip map in .
The inverse maps to the braiding
_X, Y(x y)
= _2 . y _2 . x.
Associated to an R-matrix is also its monodromy = _21, which encodes the double braiding in .
Lastly, since is rigid, there is a canonical (but in general non-monoidal)
natural isomorphism 𝕀_() built using left
dualities and a braiding;
it is completely determined by an element ∈ H, called the
Drinfeld element of H, see e.g. <cit.> for an explicit
expression in the current conventions.
Finally, recall that a ribbon twist on a braided rigid monoidal category is a
natural automorphism of the identity functor, satisfying certain axioms.
A ribbon element for H is a non-zero central element ∈ H
satisfying
Δ() = ·
and S() =.
A ribbon quasi-Hopf algebra is a quasi-Hopf algebra together with a chosen
ribbon element.
A ribbon element is automatically invertible and satisfies () = 1,
and there is a bijective correspondence
{ribbon twists
on }{ribbon elements
for H
}
, ↦ (_H)() .
The inverse maps to the ribbon twist _V(v) = . v.
Since a ribbon category is pivotal, a ribbon Hopf algebra admits a canonical pivot
which is compatible with the ribbon structure.
In terms of the ribbon and Drinfeld element, it is explicitly given by =.
§.§ Modified traces on hM and
Proj(hM)
Let (H, ) be a pivotal quasi-Hopf algebra.
§.§.§ The categorical trace
From (<ref>) and (<ref>) one finds that the right
resp. left categorical trace of = is given by
[r, ]_V(f) = _V ( S() ∘ f )
resp.[l, ]_V(f)
= _V ( S() ∘ f )
for f ∈_H(V), where by h = S() we mean the linear
endomorphism of V given by acting with h, and on the right hand sides is the
usual trace of linear maps.
By definition, H is called spherical if both categorical traces agree. This is
in particular the case if H is ribbon.
§.§.§ The modified trace on the projective ideal
Next, consider = [], the subcategory of projective
H-modules.
Since non-degenerate modified traces do not exist in the non-unimodular
setting <cit.>, we now restrict to the case that H is unimodular.
Building on <cit.>, in <cit.> it was shown that non-degenerate right (resp.left) modified traces on are in bijection with linear forms
∈ H^* (resp. ∈ H^*) satisfying
(𝕀) ( Δ(h) ) = (h)
resp.
(𝕀) ( Δ(h) ) = (h)
for all h ∈ H.
Moreover, the unique (up to scalar)
solutions to these equations are given by
(h) = ^r ( h )
resp.(h) = ^l ( h )
, h ∈ H
,
where ^r (resp. ^l) is a non-zero right (resp. left) cointegral of
H defined via (<ref>).
Such a solution is then called a right (resp. left) symmetrised cointegral for
H, the name reflecting the fact that it is
a symmetric form associated to a
cointegral.
Let us describe explicitly the right modified trace associated to the right symmetrised
cointegral .
By linearity of the modified trace it is enough to consider endomorphisms of
indecomposable projective modules, or more generally, any direct summand of the regular H-module H.
So let P ∈ be a direct summand in H via the split idempotent H
P H.
The associated right modified trace of f ∈_H(P) is given by
^r_P(f)
= ((i ∘ f ∘ p)())
= ^r(· (i ∘ f ∘ p)())
,
and is independent of the choice of i and p with p ∘ i = 𝕀_P, see <cit.>.
Replacing by the left symmetrised cointegral
yields the left modified trace.
Note that if H is ribbon, then there is no difference between left and right modified
traces, and so the left and right symmetrised cointegrals agree.
The corresponding left and right cointegrals do not have to agree (but they do e.g. if
has order 2).
A special case of (<ref>), which will often appear
later, is when f is a component of a natural transformation ξ∈(𝕀_).
It is well-known that (𝕀_) is
isomorphic to the centre
Z(H) of H;
in fact, ξ is given by the action with ξ̂ = ξ_H() ∈ Z(H).
Then we have ^r_P(ξ_P) = ^r(ξ̂ e) where e = (i ∘
p)() is the idempotent of the direct summand P in H.
§.§ Lyubashenko coend and integral
Let H be a quasi-triangular quasi-Hopf algebra, so that is braided.
The Hopf algebra (as recalled in <Ref>) exists
and is given by the coadjoint representation — i.e. ≅ H^* as vector spaces,
with action given by (h.f)(v) = f(S(h1) v h2) for f ∈ H^* and
h,v ∈ H — together with the universal dinatural transformation with components
_X(x^* y)(h) = x^*(h y)
for X ∈, x^* ∈X, y ∈ X, and h ∈ H.
See <cit.> for more details.
Let now H be unimodular.
This is equivalent to being unimodular, and so admits a two-sided integral : →, which in the present case translates into ∈ H^*.
It was shown in <cit.> that given a non-zero right cointegral ^r for H,
a non-zero integral for
is obtained as (h) = ^r(S() h).
This is in fact a bijection with inverse
^r(h) = (S(_1) h _2) .
The condition (<ref>) for twist-nongedeneracy of can be expressed in terms of quasi-Hopf data as follows. By <cit.>, the endomorphism _ : → from (<ref>) acts on f ∈ = H^* as (_(f))(a) = f( a), where a ∈ H. The counit ε_ : → is given by ε_(f) = f() for f ∈ H^*, see <cit.>. Combining this with the above expression for the integral , we obtain
twist-nondegenerate ⇔ =
^r(S() ^∓1) ≠ 0 .
We call quasi-Hopf algebras satisfying the non-vanishing condition on the right hand side of (<ref>) twist-nondegenerate.
Recall from <Ref> the condition for a unimodular braided finite tensor category to be factorisable. We call the unimodular quasi-triangular quasi-Hopf algebra H factorisable if is factorisable. A characterisation in terms of quasi-Hopf data was given in <cit.> and agrees with the original definition in <cit.> as shown in <cit.>.
Next we explain that two a priori different looking ways of normalising the modified trace on the projective ideal in terms of the integral actually agree.
We start by fixing an arbitrary non-zero integral : →. In case H is modular, one can further fix up to a sign by demanding <Ref> (1), but here we do not assume this.
From (<ref>) and (<ref>), one obtains a (two-sided) symmetrised cointegral by setting
(h) = (S(_1) h _2). Via (<ref>) this in turn fixes a modified trace ^Hopf on the projective ideal. Altogether, we have determined ^Hopf from the initial choice of .
This Hopf algebraic normalisation condition has a categorical counterpart:
Assume for this remark that is modular.
Given , there is a unique cointegral for such that ∘ = 𝕀_, cf. (<ref>).
Fix a surjection (unique up to scalar)
→.
By <cit.>, this yields a unique morphism → such that
∘_
= _∘( 𝕀_ (∘))
.
Note that the composition ∘ is independent of the choice of . From <cit.> it follows that for a non-zero modified trace on one has _(∘) ≠ 0. We may thus normalise such that
_(∘) = 1 .
This is the same condition as in <Ref> (4), again by <cit.>. Altogether, starting from we obtained a unique choice of modified trace on .
Let us return to the unimodular quasi-triangular quasi-Hopf algebra H. We have:
^Hopf satisfies (<ref>).
Note that this holds without assuming that is modular.
Denote by i and π
a choice of morphisms in which give
the injection and projection of in
H. The map i is unique up to scalar as is also
an injective hull of
and the space of H-intertwiners from to _HH is the space of left integrals and hence one-dimensional.
Fix the
morphism = ∘ i → and let be determined by (<ref>). To describe more explicitly, first note that by <cit.> there is a unique choice of integral c ∈ H of H such that the cointegral ∈ H^** of satisfies = ⟨ - | c ⟩.
A short calculation using the normalisation () = 1
shows that
= π(c)
(viewing π(c) ∈ as a linear map k →)
satisfies (<ref>).
Note further that (c) = (c) = ∘ = 1.
Altogether we have (here we write _H ∈ H for the unit of H to distinguish it from the tensor unit of )
^Hopf_
(∘)
=
(
(i ∘∘∘π)
(_H)
)
=
(e_)
(c)
= 1 ,
where e_ = (i ∘π)(_H).
The last equality follows from the fact that the counit is only non-zero on the
idempotent corresponding to the tensor unit in an isotypic decomposition of the regular
module, which implies 1 = (_H)
= (e_).
§.§ Pullback ideals from quasi-Hopf subalgebras
For any quasi-Hopf algebra H, a linear subspace A ⊂ H is a quasi-Hopf
subalgebra if it is a (unital) subalgebra,
such that Δ(A) ⊂ A A and S(A)
⊂ A, and which moreover contains the coassociator in the sense that ∈
A^ 3.
Denote by
=
HA→[A]
the canonical (forgetful) restriction functor, and note that it is strict monoidal.
We say that an A-module M lifts to the H-module N if
N ≅ M as A-modules.
Given a quasi-Hopf subalgebra A ⊂ H, we may now consider the category of
A-projective H-modules.
This is the full subcategory AH of consisting of
H-modules which are projective as A-modules, i.e. its objects are precisely the
lifts of projective A-modules.
In other words, it is a pullback in the sense of <Ref>,
AH
=
^∗( [[A]] )
,
and so in particular, AH is a tensor ideal in .
Since every ideal contains the projective ideal (cf. <Ref>), we
have the following chain of ideals
[]
⊂ AH ⊂ .
We also remark that the two extremes in (<ref>) are precisely the
H-projective and the -projective H-modules.
To see this without <Ref>, recall that H is free as an
A-module by the Nichols-Zoeller theorem for quasi-Hopf algebras
<cit.>.
In particular, every projective indecomposable H-module restricts to a direct
summand of a free A-module;
since [] is generated by indecomposable projectives, we get the
inclusion of ideals as in (<ref>).
Let now A ⊂ H be a unimodular quasi-Hopf subalgebra of the pivotal quasi-Hopf
algebra (H,).
If ∈ A, then (A,) is a unimodular pivotal quasi-Hopf algebra,
and thus admits a non-degenerate right modified trace ^A on
[[A]], constructed via its right symmetrised cointegral.
The same holds for the left version.
As a further application of <Ref> we now get a modified trace on
AH by pulling back ^A along the restriction functor.
Together with (<ref>), we obtain
a quasi-Hopf generalisation of <cit.>:
Abbreviate I(A) = AH, and let
^A ∈ A^*
be a right
(left) symmetrised cointegral for A, with corresponding right (left) modified trace
^A on [[A]].
For M∈ I(A),
the family of linear maps
([^A])_M : _I(A)(M) →
,
([^A])_M (f)
=
^A_ M(f)
,
defines a right (left) modified trace on I(A).
Explicitly,
with _A the unit of A,
([^A])_M (f)
= ∑_i=1^n ^A ( (p_i ∘ f ∘ j_i)(_A)
)
,
where n is a natural number and p_i M→ A, j_i
A →M are A-module morphisms, for i=1,...,n, such
that ∑_i=1^n j_i ∘ p_i = 𝕀_M and p_l ∘ j_m = δ_l,m p_l ∘
j_l.
Suppose H is ribbon.
In light of <Ref>, the pullback modified trace
from <Ref> is then always two-sided,
even if [[A]] has distinct right and left modified traces
^A,r and ^A,l.
In the example we consider below, ^A,r and ^A,l are not proportional, but the two pullback traces still turn out to agree up to a scalar (<Ref> and <Ref>).
§ THE SYMPLECTIC FERMION QUASI-HOPF ALGEBRAS
After presenting some general aspects of the theory of quasi-Hopf algebras, we now turn to the family of examples which we will use to evaluate link and three-manifold invariants. These are the symplectic fermion quasi-Hopf algebras, which in fact are ribbon and factorisable. They were constructed in
<cit.>
to reproduce the ribbon finite tensor categories obtained in <cit.> from the study of the chiral two-dimensional logarithmic
conformal field theory of symplectic fermions
<cit.>.
§.§ Quasi-Hopf algebra and ribbon structure
Here we define the family (N, β) of symplectic fermion quasi-Hopf algebras,
give the ribbon structure on (N, β) and
compute its (co)integrals.
The two parameters are a natural number N ≥1 and a complex number β s.t.
β^4 = (-1)^N. We fix such an N and β and abbreviate = (N, β).
§.§.§ Quasi-Hopf algebra structure
As an algebra, is the (unital associative) [C]-algebra generated by
{ , _i^ϵ | 1≤ i ≤ N, ϵ= ± }.
With the elements _0 = 12( + ^2) and _1 =
12( - ^2), the relations of are
{^±_i,} = 0 ,
{^+_i, ^-_j } = δ_i,j_1 ,
{^±_i, ^±_j } = 0 ,
^4 = ,
where {-,-} is the anticommutator.
Then _0, _1 satisfy _0 + _1 =, and are central orthogonal
idempotents.
A basis is given by
{𝔟(m, s, t) =
^m ∏_j=1^N (^+_j)^s_j (^-_j)^t_j| m ∈[Z]_4, s, t ∈[Z]_2^N
} ,
and the dimension of is 2^2N+2.
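This dimension count is easily checked by enumeration; the following Python sketch (labels ours) lists the basis labels (m, s, t) and verifies that there are 2^2N+2 of them.

from itertools import product

# Sketch: enumerate the basis labels b(m, s, t) and confirm
# dim = 4 * 2^N * 2^N = 2^(2N+2).
def basis_labels(N):
    return [(m, s, t)
            for m in range(4)                    # power of the group-like generator
            for s in product((0, 1), repeat=N)   # exponents of the "+" generators
            for t in product((0, 1), repeat=N)]  # exponents of the "-" generators

for N in (1, 2, 3):
    assert len(basis_labels(N)) == 2**(2*N + 2)
print("dimension check passed")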
On generators, the coproduct and counit are
Δ()
= ⊗- (1+(-1)^N) _1 ⊗_1 ,
() = 1 ,
Δ(^±_i)
= ^±_i ⊗+ ω_±⊗^±_i ,
(^±_i) = 0 ,
where ω_± = (_0 ± i _1).
With _± = _0 + β^2 (± i)^N _1, the coassociator and its
inverse are
^± 1 =
⊗⊗
+ _1 ⊗_1 ⊗{_0(^N - )
+ _1(_± - )
} .
Finally, the antipode S and the evaluation and coevaluation elements
and are given by
S() = ^(-1)^N = (_0 + (-1)^N _1) ,
= ,
S(^±_i) = ^±_i (_0 ±(-1)^N i_1) ,
= _+ .
The inverse of the antipode is found to be
S() = ^(-1)^N ,
S(^±_i) = ω_±^±_i .
Note that S(_±)=S(_±)=_∓, and _+_-=.
The algebra is the direct sum of two ideals _i = _i, i=0, 1.
As algebras, _0 (resp. _1) is the semidirect product of a Grassmann algebra (resp. Clifford algebra) in 2N generators with the group algebra of [Z]_2. In particular, _1 is semisimple.
This decomposition imparts a [Z]_2-grading on the category of -modules:
[] = ([])_0 ⊕ ([])_1.
Since _1 is semisimple, ([])_1 is semisimple as an abelian (sub)category.
The monoidal structure on [] respects the grading, see <cit.> for details.
§.§.§ Further elements and structure
In <cit.>, the canonical elements from (<ref>) were computed to be
= + _1 ( _1 ( - ) ) ,
= + _0 ( _1 ( - ) ) ,
= + _1 {_0 (^N - ) + _1 ( - )
} ,
= _- + _1 _- {_0 (^N - ) + _1 (_- - )
} .
The Drinfeld twist and its inverse are given by
^± 1 = _0 + _1 _0 ^N
+ _1 _∓_1
.
is quasi-triangular.
The R-matrix and its inverse are
= ( ∑_n,m = 0^1 β^nmρ_n,m_n _m )
·∏_j=1^N ( - 2 ^-_j ω_- ^+_j)
and
=
∏_j=1^N ( + 2 ^-_j ω_- ^+_j)
·( ∑_n,m = 0^1 β^-nmρ_n,m_n _m )
,
where
ρ_n,m
= 1/2∑_k,l=0^1 (-1)^kl i^-kn + lm^k ^l
.
Its monodromy may be described as
= _21 = ∑_I g_I f_I , for I = (a, b, c, d)
with a, b ∈[Z]_2 and c, d ∈[Z]_2^N, and where
f_I =
^a _b ∏_k=1^N (^-_k)^d_k (^+_k)^c_k .
g_I =
(- β^2)^ab 2^|c| + |d|
(-1)^|c| + b |d|^b _a ∏_k=1^N (^+_k ω_-)^d_k (^-_k ω_-)^c_k ,
see <cit.>.
This is used there to further show that
the Hopf pairing in (<ref>) is non-degenerate,
i.e. that [] is factorisable.
Finally, is ribbon with (inverse) ribbon element
^± 1 = (_0 - β^± 1 i _1)
∏_j=1^N (∓ 2 (_0 ±_1) ^+_j ^-_j )
and so []
is a modular tensor category.
The pivot compatible with the ribbon structure is
= (_0 + (-i)^N+1_1 ^N) .
Note that =.
§.§.§ Integrals and cointegrals
In the basis (<ref>), any left
integral of is a scalar multiple of ∑_m = 0^3 𝔟(m,
1, 1), where by 1 ∈[Z]_2^N we mean the vector with 1 everywhere.
Since is unimodular[
This follows from the explicit form of given in <Ref> below. Or one can use that is factorisable which implies unimodularity, see <cit.> for the quasi-Hopf algebra result, and <cit.> for the general categorical statement.],
all left integrals of are also right integrals.
In <cit.> it was shown that the left (and, since H is ribbon, also
right) symmetrised cointegral yielding the non-degenerate modified trace on
[] satisfies
(𝔟(m, 1, 1))
= (𝔟(m, 1, 1))
= m (β^2 + i^m) ,
and is zero on the other basis elements. The normalisation coefficient
, where ∈[C], will be determined shortly.
By m we mean the Kronecker delta modulo 2, i.e. m = 1
if m is odd and 0 otherwise.
The corresponding left (and, since has order 2, also right) cointegral for
H is given by
^r (𝔟(m, 1, 1))
= ^l (𝔟(m, 1, 1))
=
(
m (β^2 + N i^m) +
mN
i^m
)
and zero elsewhere.
We now determine the coefficient . The integral of in the normalisation ∘ () = 𝕀_ was computed in <cit.> to be
( 𝔟(m, 1, 1) )
= ν (-1)^N β^2 2^-(N-1)δ_m, 0 ,
where ν∈{± 1} is the remaining sign freedom. We will for simplicity from now on choose ν = 1.
As in <Ref> we normalise the cointegral ^r of by (h) = ^r(S() h ).
One computes ^r(S() 𝔟(m, 1, 1)) = 2 β^2 δ_m, 0, whence
=
(-1)^N 2^-N .
Evaluating (<ref>) as in (<ref>) gives
= ∘^± 1∘
= ^r( S() ^∓ 1)
[(<ref>)] β^∓ 2 ,
which implies ^2 = = 1, and we choose = 1.
Then we have = β^-2 for the anomaly of [].
§.§ The modular tensor category qM
Here we briefly discuss the simple and projective modules of [] and give their internal characters.
Then we show that [] does not factorise into a product of
modular tensor categories. While we will not use this fact later on, we still think it is a noteworthy observation.
§.§.§ Simple and projective modules
The simple and projective modules of have been computed in <cit.>.
One finds that has four simple modules which we denote by X_0^±∈ ([])_0 and X_1^±∈ ([])_1.
The simple modules X_0^± are one-dimensional.
They are distinguished by the action of , which acts as ±1 on X_0^±. The tensor unit of [] is given by X_0^+.
As submodules of , the projective covers P_0^± of X_0^± are
generated by the primitive (non-central) idempotents
_0^± = 12(±)
_0
.
They are both 2^2N-dimensional, and the spaces of intertwiners from
P_0^ε to P_0^δ have dimension 2^2N-1 for ε, δ∈{+, -}.
The simple modules X_1^± are 2^N-dimensional and projective, and acts on their highest weight state as ± i. As X_1^± is projective, it can be realised as a direct summand of . The corresponding central (non-primitive) idempotents give 2^N copies of X_1^±. Namely, 2^N X_1^±⊂ is the image of
_1^±
= 1/2_1
(
∓ i ∏_j=1^N ( - 2 ^+_j ^-_j)
)
.
Since [] is factorisable, it has no transparent objects other than direct sums of the tensor unit (see <cit.> for equivalent characterisations of factorisability). The grade-0 component ([])_0 is also a braided finite tensor category, but it is not factorisable. The transparent objects in ([])_0 are precisely all finite direct sums of X_0^+ and X_0^-, see <cit.>. That X_0^- is transparent in ([])_0 can also be easily read off from the monodromy matrix in (<ref>).
Namely, when acting on V ⊗ X_0^- for any
V ∈ ([])_0, in the sum over I in , only the summand with I=0 ∈[Z]_2 ×[Z]_2 ×[Z]_2^N ×[Z]_2^N contributes, and this summand acts as the identity on V ⊗ X_0^-.
The tensor product closes on the set of simples and projectives, explicitly we have (see <cit.>)
X_1^ε⊗ X_1^δ≅ P_0^εδ ,
X_0^ε⊗ Y^δ≅ Y^εδ where
Y ∈{ X_0, X_1, P_0 } .
In the Grothendieck ring [[]] of [], we have P_0^ε = 2^2N - 1 ( X_0^+ + X_0^-), so that the above tensor products result in the following product for the generators of [[]]:
X_0^δ ·X_0^ε
= X_0^δε ,
X_0^δ ·X_1^ε
= X_1^δε ,
X_1^δ ·X_1^ε
= 2^2N - 1 ( X_0^+ + X_0^- )
.
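Since the Grothendieck ring is determined by these products, it can be modelled directly; the following Python sketch (encoding ours) realises the multiplication on integer coefficient vectors over the ordered basis (X_0^+, X_0^-, X_1^+, X_1^-) and checks two relations that mirror the character identities verified below.

from itertools import product

# Sketch: Grothendieck ring on the ordered basis (X0+, X0-, X1+, X1-),
# indices 0..3, with the generator products listed above.
def gr_mult(u, v, N):
    out, c = [0, 0, 0, 0], 2**(2*N - 1)
    table = {
        (0, 0): {0: 1}, (0, 1): {1: 1}, (1, 1): {0: 1},
        (0, 2): {2: 1}, (0, 3): {3: 1}, (1, 2): {3: 1}, (1, 3): {2: 1},
        (2, 2): {0: c, 1: c}, (2, 3): {0: c, 1: c}, (3, 3): {0: c, 1: c},
    }
    for i, j in product(range(4), repeat=2):
        entry = table.get((i, j)) or table.get((j, i))   # the product is commutative
        for k, coeff in entry.items():
            out[k] += u[i] * v[j] * coeff
    return out

N = 2
X0p, X0m, X1p, X1m = [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]
assert gr_mult(X0m, X1p, N) == X1m                              # [X0-][X1+] = [X1-]
assert gr_mult(X1p, X1p, N) == [2**(2*N-1), 2**(2*N-1), 0, 0]   # [X1+]^2
print("Grothendieck ring relations verified")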
§.§.§ Internal characters and their S-transformation
Recall from <Ref> that (𝕀_) is equipped with an
action of , and that there are special natural transformations ϕ_V and
_(ϕ_V)
for any V ∈.
It is known from <cit.> (see <cit.> for a summary) that the map
[] →(𝕀_) , [V] ↦_(ϕ_V)
is an injective ring homomorphism.
This fact has been used in <cit.> to give a non-semisimple variant of the Verlinde formula.
For any algebra A, (𝕀_[A]) is canonically isomorphic to Z(A), the
centre of A. In the case of , the central elements χ_V and ϕ_V
corresponding to _(ϕ_V) and ϕ_V, respectively, have been worked
out in <cit.>.
Since χ_V, ϕ_V depend on V only through the class in [[]], knowing them on simple modules is
enough:
ϕ_X_0^±
=
2^N+1β^2 _0^±∏_j=1^N ^+_j ^-_j
, ϕ_X_1^± = ±
2^N+1_1^±
and
χ_X_0^± = _1 ±_0
, χ_X_1^±
= ±β^2 4^N _0 ∏_j=1^N ^+_j ^-_j
+ 2^N (_1^+ - _1^-)
.
As an immediate consequence of (<ref>), the map [[]] → Z(), [V] ↦χ_V is an injective ring homomorphism, and thus has to agree with the product given in (<ref>). Indeed, we have, for example
χ_X_0^-χ_X_1^+=χ_X_1^- ,
(χ_X_1^+)^2 = 2^2N (_1^+ + _1^-) = 2^2N-1(χ_X_0^++χ_X_0^-) .
§.§.§ Primality of qM
Recall from e.g. <cit.> that a topologising subcategory of an
abelian category [A] is a full subcategory closed under finite direct sums and
subquotients in [A].
Recall further that the Müger centralizer M_[A]([X]) of a class of
objects [X] in the braided monoidal category [A] is the full subcategory
consisting of those objects in [A] which have trivial monodromy (i.e. double
braiding) with every object in [X].
Note that for a braided tensor category , we have M_() ≅, and
if is factorisable also M_() ≅ (see <cit.>).
In <cit.>, the following remarkable theorem was shown, extending
Müger's result to the non-semisimple case.
Before restating it, let us remark that a tensor subcategory
of a tensor category is a
subcategory closed under tensor products and containing the
tensor unit.
Let be a modular tensor category, and let [D] be a topologising tensor subcategory of .
If [D] is factorisable
with respect to the braiding inherited from ,
then ≅[D] ⊠ M_([D])
as ribbon categories.
This leads to the notion of prime modular tensor categories as those modular
tensor categories which do not admit a proper non-trivial (i.e. not
equivalent to or ) factorisable topologising
tensor subcategory.
We can now show that symplectic fermions provide a prime example of such a category.
The modular tensor category [] is prime.
Let [D] be a factorisable topologising tensor subcategory of [] which is not . We will show that then already [D]=[].
We start by showing that [D] contains the four simple objects of []. It contains = X_0^+ by definition. Since we assume [D] ≠,
and there are no self-extensions of the tensor unit <cit.>,
it must contain at least one more simple object.
Since [D] is factorisable, it cannot be wholly contained in ([])_0.
Otherwise, it would necessarily contain X_0^-,
the only simple object of ([])_0 that is not the tensor unit. But
we have seen in <Ref> that X_0^- is transparent in ([])_0.
Therefore, [D] must contain at least one of X_1^±. Suppose it contains X_1^+. Since X_1^+ ⊗ X_1^+ ≅ P_0^+ contains X_0^+ and X_0^- as subquotients,
we have by the topologising property that X_0^-∈[D].
Since X_1^+ ⊗ X_0^- = X_1^-, [D] then also contains X_1^-. Finally, X_1^+ ⊗ X_1^- ≅ P_0^-, and so [D] contains all projectives and hence is equal to []. The argument starting from X_1^- instead of X_1^+ is the same.
One consequence of <Ref> is that [] is not a product of more elementary modular tensor categories,
and in particular one cannot directly reduce the study of general N to the N=1 case, see however an interesting observation in <cit.>.
§.§ Pullback ideals
In this section we consider an example of a pullback ideal and compute the resulting
modified trace.
We start with some general comments on quasi-Hopf subalgebras of (N,β) and
then specialise to a specific subalgebra A in H := (2,β).
§.§.§ Quasi-Hopf subalgebras in
Write W^+ ⊂ = (N,β) for the N-dimensional vector space linearly spanned by the ^+_i, and analogously for W^-.
The total space W^+ ⊕ W^- carries a symplectic form given by (^+_i,^-_j) = δ_i,j.
Before turning to the specific example we will study in more detail, let us look more generally at quasi-Hopf subalgebras A generated by
some subspace of
W^+ ⊕ W^- and by powers of .
First note that for w = w^+ + w^- ∈ W^+ ⊕ W^-, the coproduct (<ref>) gives Δ(w) = w ⊗ + _0 ⊗ w + i _1 ⊗ (w^+-w^-).
By assumption, _0, _1 ∈ A, and so if w^++w^- ∈ A, closure under the coproduct requires that also w^+-w^- ∈ A.
We may therefore pick two subspaces V^±⊂ W^± and define A = A(V^+,V^-) ⊂ to be the unital subalgebra generated by elements in V^+ ⊕ V^- and by . A quick look at the explicit expressions in <Ref> confirms that the coproduct and antipode restrict to A, and that ,∈ A and
^± 1∈ A^⊗ 3. The pivot from (<ref>) is also contained in A.
However, the R-matrix does not restrict to A (unless A=).
If the restricted symplectic form on V^+ ⊕ V^- is non-degenerate,
A is isomorphic as an algebra to (N',β') with N' = V^± and β' arbitrary.
The isomorphism maps ↦ and takes a symplectic basis of V^+ ⊕ V^- to the generators ^±_i of (N',β'). From the expressions in <Ref> one sees that this defines an isomorphism of quasi-Hopf algebras iff N' ≡ N (mod 2) and β^2 = β'^2.
The projective cover P_0(A) of the trivial A-module [C] (with acting as 1) is given by A _0^+, cf. (<ref>). The socle of P_0(A) is one-dimensional and acts as (-1)^ V^+ + V^-. Thus A is unimodular iff V^+ + V^- is even.
Altogether we see that each such choice of a unimodular quasi-Hopf subalgebra A gives a tensor ideal with modified trace via pull-back,
recall <Ref>.
We have thus a continuum of intermediate tensor ideals
in [],
but here we will study just one of them.
Namely, the example we study in detail now is
V^± = [C] _1^± ,
A = A(V^+,V^-) ⊂ (2,β) =: H
where β is any choice of 4th root of unity.
§.§.§ Lifts of projective modules of A
As an algebra (but not as a quasi-Hopf algebra), A agrees with one pair of symplectic fermions (1,β);
in particular, we know its projective modules.
Let [A] be the projective cover of the tensor unit of [A], and similarly let
P_0^-(A), X_1^+(A), X_1^-(A)
be the remaining indecomposable projectives.
If we refer to the projectives in [H], we will write P_0^±(H) and X_1^±(H) instead.
The A-module [A] is four-dimensional and
is generated by a vector v_0 which has
-eigenvalue 1.
We set v_1^± = _1^± . v_0 and v_2 = _1^+ _1^- . v_0.
The -eigenvalue of v_2 is 1, while those of v_1^± are -1.
Let = [ a^- a^+; b^- b^+ ]∈Mat_2([C]).
On [A], we define the actions of _2^±∈ H as
_2^ε
= a^ε _1^- + b^ε _1^+
∈_( [A] )
, ε = ± .
It is not hard to verify that with this we indeed obtain an H-module structure on
[A], and we shall denote it by P_∈[H].
By construction, P_ is a lift of [A]
and we will now show that these are in fact all lifts:
Every lift of [A] to [H] is of the form P_ for
some ∈Mat_2([C]).
Moreover, P_≅ P_' in [H] iff = '.
To lift the A-module structure, we classify all possible _2^±-actions
on [A].
This is a linear algebra problem:
the algebra relations (in particular the anticommutators) of H have to
be satisfied by the matrices representing the actions of generators.
In the basis {v_0, v_1^-, v_1^+, v_2} of [A], the
action of the A-generators , _1^-, and _1^+ is given by
=
[ 1 0 0 0; 0 -1 0 0; 0 0 -1 0; 0 0 0 1 ]
,
_1^- =
[ 0 0 0 0; 1 0 0 0; 0 0 0 0; 0 0 1 0 ],
and _1^+ =
[ 0 0 0 0; 0 0 0 0; 1 0 0 0; 0 -1 0 0 ]
,
respectively.
The only matrices that anticommute with these are of the form
E_a,b =
[ 0 0 0 0; a 0 0 0; b 0 0 0; 0 -b a 0 ]
=
a _1^- + b _1^+
,
a,b ∈[C]
.
The matrices
E_a,b anticommute with E_c,d for any four
complex numbers a, b, c, d.
In particular, given (a^-, a^+, b^-, b^+) ∈[C]^4, we get a
H-module on which the action of _2^± is represented by
a^±, b^±
as in (<ref>).
By construction these are all possible lifts.
Now we show that these are pairwise non-isomorphic.
Let = [ a^- a^+; b^- b^+ ] and ' = [ c^- c^+; d^- d^+ ], and
suppose P_≅ P_'.
This isomorphism must come from an invertible endomorphism of
[A] intertwining the A-action, so is necessarily given
by the action with ξ_x,y = x
+ y _1^- _1^+, where x,y ∈[C] with x ≠ 0.
Requiring ξ_x,y to intertwine the _2^ε-actions on
P_ and P_'
forces a^ε = c^ε and
b^ε = d^ε, for ε = +, -.
Hence, ='.
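The linear algebra in this proof is straightforward to verify numerically. The following numpy sketch (matrix names ours) checks the required anticommutation relations for the matrices displayed above; real parameters suffice, since the relations are bilinear.

import numpy as np

# Sketch: E_{a,b} = a*f1m + b*f1p must anticommute with the matrices
# representing the A-generators on the basis (v0, v1^-, v1^+, v2), and any
# two such E's must anticommute -- the relations defining the lifts P_M.
k   = np.diag([1, -1, -1, 1])
f1m = np.array([[0,0,0,0],[1,0,0,0],[0,0,0,0],[0,0,1,0]])
f1p = np.array([[0,0,0,0],[0,0,0,0],[1,0,0,0],[0,-1,0,0]])

def E(a, b):
    return a * f1m + b * f1p

def anticommute(X, Y):
    return np.allclose(X @ Y + Y @ X, 0)

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4)
assert all(anticommute(E(a, b), X) for X in (k, f1m, f1p))
assert anticommute(E(a, b), E(c, d))
print("anticommutation relations verified")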
The following simple corollary to the proof of the above proposition will be useful later.
The element ^+_2 ^-_2 ∈ H acts on P_ as ·^+_1 ^-_1, and consequently ∏_j=1^2 ( - x ^+_j
^-_j) acts as - x (1 + ) ^+_1 ^-_1 for any x ∈[C].
By contrast, the simple projective modules of A do not admit lifts, because their dimensions are too small:
the grade 1 part of AH has to coincide with the grade 1 part of [H] by <Ref>.
Instead, a suitable direct sum of simple A-projectives lifts to each of the two simple projective of H, and so no continuum of parameters is involved.
§.§.§ Symmetrised cointegrals of A
Next, we describe the left and the right symmetrised cointegral of A, which
by <cit.> correspond precisely to the left and the right modified trace
on [[A]].
A symmetrised right resp. left cointegral for A is given by
( _1^+ _1^- ^m )
=
δ_m,1 - i β^2 δ_m,3resp.( _1^+ _1^- ^m )
=
δ_m,1 + i β^2 δ_m,3 ,
and by 0 on all other basis vectors.
The left and the right modified trace ^A,l and ^A,r on
[[A]] corresponding to the cointegrals
* are proportional on the grade 0 part and the grade 1 part of
[[A]] separately, but
* are not proportional on all of [[A]].
The final statement implies that the pivotal quasi-Hopf algebra A cannot be made ribbon, or equivalently,
that the pivotal structure on [A] cannot be extended to a ribbon structure.
We start by computing the right symmetrised cointegral.
Since A is pivotal and unimodular, this can be done by solving
(<ref>).
Using the explicit formulas for , from (<ref>),
(<ref>) becomes
(h) =
(h1)
_0 h2
+
(_0 h1)
_1 h2
+
(_1 h1)
_1 h2 .
Abbreviate h_m = ^+_1 ^-_1 ^m, and note that
Δ(h_m)
≈
h_m _0 ^m
+ h_m ^2 m_1 ^m
,
where ≈ means that we disregard terms that have fewer than all s in
the first tensor factor.
We then make the ansatz (^+_1 ^-_1 ^m) = c_m ∈[C] and zero otherwise.
Clearly this satisfies (<ref>) for any h not of the form
h_m.
Using the explicit expression (<ref>) for and the fact that
[, ] = 0, the cointegral equation (<ref>) for h
= h_m becomes
c_m · =
((h_m)1) ·_0 (h_m)2
- i
β^2
((h_m)1) ·_1 (h_m)2
=
c_m _0 ^m+1
- i β^2 c_m + 2 m_1 ^m+1 .
Immediately this implies c_0 = c_2 = 0, so we are only left with the two equations
c_1 · = c_1 _0 + i β^2 c_3_1
c_3 ·
=
c_3 _0 - i β^2 c_1_1
,
which imply c_3 = - i β^2 c_1, so that can be chosen as in the
statement of this proposition.
A left symmetrised cointegral is given by ∘ S, see
<cit.>.
That is, up to a scalar is completely determined as
(∘ S) (^+_1 ^-_1 ^m)
= δ_m, 3 - i β^2 δ_m, 1
= - i β^2 (δ_m, 1 + i β^2 δ_m, 3)
,
and we choose - i β^2 = ∘ S, which yields
as in the statement.
We will now show that the associated modified traces are proportional on the degree
0 and the degree 1 component separately, but not simultaneously.
Let us write ^A,- := and ^A,+ :=, so that ^A,±(^m ^+_1 ^-_1) =
δ_m, 1± i β^2 δ_m, 3.
The endomorphism space of P_0^±(A) has as basis the identity and the map given
by acting
with _1 = ^+_1 ^-_1, and hence we need to
evaluate ^A,± on _0^± and _0^±_1.
Clearly, the value on _0^± is zero.
One easily computes
^A,±(_0^ε_1)
= (1/4) ∑_m=0^3 ε^m ^±,A(^m _1)
= ε (1 ± i β^2)/4 .
In the degree 1 component, indecomposable projectives only have the identity
endomorphism.
Using additivity of the modified trace, it is enough to compute its value on the
identity endomorphism
(-) ·_1^± of 2 X_1^±(A) ⊂ A (the image of _1^± is given above (<ref>)).
We have
^±,A (_1^ε)
= ε i ^±,A(_1 _1)
= ε i (1 ∓ i β^2)/2 .
Since β^4 = 1, (1 - i β^2) i β^2 = 1 + i β^2.
Thus for all P ∈[[A]]
^A,r__0 P = - i β^2 ^A,l__0 P and ^A,r__1 P = i β^2 ^A,l__1 P ,
and so the two traces are not proportional.
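The arithmetic here is elementary and can be double-checked in a few lines of Python (with β² = ±1, as forced by β⁴ = 1):

# Sketch: check (1 - i beta^2) * i beta^2 = 1 + i beta^2 for beta^2 = +-1,
# and that the degree 0 factor -i beta^2 differs from the degree 1 factor.
for beta2 in (1, -1):
    assert (1 - 1j*beta2) * 1j*beta2 == 1 + 1j*beta2
    assert -1j*beta2 != 1j*beta2
print("sector-wise proportionality factors differ")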
§.§.§ Pullback modified traces from A
The modified traces on AH obtained via
pullback (<Ref> and <Ref>)
from the
modified traces in <Ref> are not non-degenerate.
Namely, we have:
The pullback modified traces vanish on projective H-modules:
[^A,r] |_[[H]]≡ 0
and [^A,l] |_[[H]]≡ 0
.
We use the suggestive notation [C]^1 | 0 and [C]^0 | 1
for the one-dimensional simple modules of both A and H,
on which acts as 1 and -1, respectively, and on
which the actions of the ^±_j vanish.
Similarly we write [C]^n | m = n [C]^1 | 0⊕ m
[C]^0 | 1.
We have [A, r]_V [C]^n | n(f 𝕀) = 0 for any V
∈[A] and f ∈_A(V), where [A,r] is the right partial trace
of [A].
Indeed, this follows from
[A]_[C]^1 | 0 (𝕀)
= []_[C]( )
= 1 and
[A]_[C]^0 | 1 (𝕀) = - 1
,
see (<ref>).
It is not hard to verify that
(X_1^±(H)) ≅ X_1^+(A) [C]^1 | 1 (P_0^±(H)) ≅ P_0^+(A) [C]^2 | 2
as A-modules.
Thus for any projective indecomposable P ∈[[H]] and any f ∈_H(P) we have e.g.
[^A,r]_P(f)
= ^A_P'(
[A,r]_P' [C]^n | n
(f')
)
,
where P' is X_1^+(A) or P_0^+(A), n is 1 or 2, and
f' is f conjugated by one of the isomorphisms in (<ref>).
Our goal is now to show that in all situations f' will be a sum of terms of
the form g h, with h [C]^n | n→[C]^n | n of trace zero.
For the odd sector, i.e. P = X_1^±(H), this is obvious since
_H(X_1^±(H)) is one-dimensional,
and so h ∝𝕀.
For P = P_0^±(H), we note first that an explicit basis of the 8-dimensional
space _H(P_0^±(H)) is given by acting with the elements
, ^+_1 ^-_1
, ^+_1 ^+_2
, ^+_1 ^-_2
, ^-_1 ^+_2
, ^-_1 ^-_2
, ^+_2 ^-_2
, ^+_1 ^-_1 ^+_2 ^-_2
,
which are central on the even sector.
Let us denote the second isomorphism in (<ref>) by ϕ.
Explicitly, a basis of P_0^±(H) is given by
{ (_1^+)^a (_1^-)^b (_2^+)^c (_2^-)^d _0^± | a,b,c,d ∈[Z]_2 } ,
see <cit.>. A basis of P_0^±(A) is obtained by just using _1^±. If we denote a basis of [C]^2|2 by { (_2^+)^c (_2^-)^d | c,d ∈[Z]_2 }, then ϕ is simply given by
(_1^+)^a (_1^-)^b (_2^+)^c (_2^-)^d _0^± ⟼ (_1^+)^a (_1^-)^b _0^±⊗ (_2^+)^c (_2^-)^d ,
and it commutes with the action of _1^±.
One can now verify
that the transported actions ϕ∘ (^±_2.(-)) ∘ϕ^-1
and ϕ∘ (^+_2^-_2.(-)) ∘ϕ^-1
are of the form g ⊗ h with nilpotent h (which hence has trace zero).
The pullback traces [^A,r] and
[^A,l] are proportional.
By <Ref>, the degree 1 component of
AH equals the degree 1 component of
[[H]].
Thus both pullback traces vanish on ( AH )_1 by
<Ref>.
By <Ref>,
^A,r and ^A,l are proportional on the degree 0 component of
[[A]].
To make things more readable we choose a specific normalisation and set
[^A]
:=
4/1 - i β^2[^A,r]
=
4/1 + i β^2[^A,l]
,
which is well-defined since β^2 = ± 1.
§ EXPLICIT COMPUTATIONS OF LINK INVARIANTS
Throughout we will often write ε, δ, etc. for multiple
disjoint occurrences of ±.
To facilitate computations, we also use `approximate equalities' h ≈ g. The
exact meaning of the symbol ≈ will depend on the context, but in general it
means `with regards to our goal, h and g are the same'.
For example, if we are only interested in how h and g act on a fixed module, and
their actions coincide, then h ≈ g.
As another example, suppose h ∈ H H and we want to compute h_1 h_2, then h + _0 ^+_1 ^+_1 ≈ h, since
the generators ^±_j anticommute with and square to zero on the even sector.
§.§ Framed unknot
Recall the ribbon twist from (<ref>).
One shows via induction that
^± m
= (_0 + (- β^± 1 i )^m _1)
∏_j=1^N
(
∓ 2(m _0 ±m_1) ^+_j ^-_j
)
holds for natural m ≥ 0.
By convention, the ribbon twist is given by acting with ^-1, and so the n-framed unknot corresponds to the trace of ^-n, for n ∈[Z].
§.§.§ Categorical trace
The categorical trace of the projective module X_1^± vanishes, and it is clear that
acts as the identity on X_0^± since the 's act
trivially.
Thus, for n ∈[Z],
[[]]_X_0^±(^n)
= _X_0^±( S())
= _X_0^±()
= ± 1
,
as claimed in Table <ref>.
§.§.§ Modified trace on projective ideal
For the modified trace on the projective ideal, we find on the even sector that the invariant of the n-framed unknot (n ∈[Z]) coloured with P_0^ε is given by
_P_0^ε (^-n)
= (^-n_0^ε)
=
(2 n)^N
(
_0^ε∏_j=1^N ^+_j ^-_j
)
(*)=
(1/2) n^N ε β^-2 .
In step (*) we used
(
_0^ε∏_j=1^N ^+_j ^-_j
)
(<ref>)=
1/4∑_m = 0^3 ε^m(^m
∏_j=1^N ^+_j ^-_j
)
(<ref>)(<ref>)= (1/2) ε β^2 (-1)^N 2^-N
(<ref>)= 2^-N-1εβ^-2 .
On the odd sector, indecomposable projectives are simple and therefore have
one-dimensional endomorphism spaces.
It was shown in <cit.> that ^± 1_1^ε = εβ^± 1_1^ε (where _1^ε was given in (<ref>)),
so that for integer powers we have ^-n_1^ε =
(εβ)^-n_1^ε.
The invariant of the n-framed unknot coloured by X_1^± is then given by
_X_1^ε(^-n)
= 2^-N_2^N X_1^ε(^-n)
= 2^-N (εβ)^-n_2^N X_1^ε
(𝕀) = 2^-N (εβ)^-n(_1^ε)
.
We find
(_1^ε)
=
- (ε/2) i
(-2)^N
(
_1 ∏_j=1^N ^+_j ^-_j
)
=
- (ε/2) i
(-2)^N
1/2
c_ (β^2 + i - β^2 - i^3)
=
ε/2 ,
where c_ is the constant from (<ref>).
§.§.§ Pullback trace
For the pullback trace, we restrict ourselves to the situation worked out in <Ref>, i.e. to the pullback trace obtained from A ⊂ H = (2,β).
We first use <Ref> to
rewrite the action of ^-n on P_ as
^-n≈∏_j=1^2
(
+2n ^+_j ^-_j
)
= +2n (1 + ) ^+_1 ^-_1
.
Thus for n ∈[Z] the invariant of the n-framed unknot coloured by P_ is given by
[^A]_P_ (^-n)
= 2n (1 + )
4/1 - i β^2^A,r_P_0^+(A)(^+_1 ^-_1)
= 2 n (1 + ) ,
where we used (<ref>) and (<ref>) to see that in terms of the symmetrised cointegral for A,
^A,r_P_0^+(A)(^+_1 ^-_1) =
(_0^+^+_1 ^-_1) = (1-i β^2)/4 .
§.§ Framed Hopf link
We compute the Hopf link invariants
X,nU,m coloured with X ∈ and U ∈, and with n-fold twist on X and m-fold twist on U.
Since we can take U to be irreducible, the twist amounts to an overall factor times the invariant of X,nU,0. For the latter we need to compute the trace _X(^-n_(ϕ_U)_X) of the (-n)'th power of the ribbon element (<ref>) composed with the open Hopf link operator given in (<ref>).
We saw in Section <ref> that _(ϕ_U)_X is given by multiplication with the central element χ_U from (<ref>).
§.§.§ Categorical trace
For =,
the only thing to compute is the invariant of X_0^ε,nX_0^δ,m.
But with respect to the grade 0 part of [], X_0^± is transparent (see <Ref>)
and has trivial twist, therefore we just obtain the product of the categorical dimensions of the simple objects.
§.§.§ Modified trace on projective ideal
We first note that since X_1^δ is simple, the m-fold twist is given by multiplying with an overall factor, which in
this case is given by (δβ^-1)^m as explained below (<ref>).
Alternatively, one can directly use the twist eigenvalue given in <cit.>.
We start by computing P,nU,m for U = X_0^±.
If P is in the grade 0 part of [], we use that X_0^± is transparent relative to P and has trivial twist, resulting in the quantum dimension ±1 of X_0^± times the invariant of the n-framed P-coloured unknot already computed in <Ref>:
P_0^ε,nX_0^δ,m: (1/2) εδ n^N β^-2 .
On the other hand, for P = 2^N X_1^ε we first compute the zero-framed invariant,
2^N X_1^ε,0X_0^δ,0: _2^N X_1^ε ( _(ϕ_X_0^δ)_2^N X_1^ε )
= (_1^ε)
(<ref>)= 12ε ,
and by linearity the invariant of X_1^ε, 0X_0^δ, 0 is then equal to ε 2^-N-1. Using the twist eigenvalue for X_1^ε and the fact that X_0^± has trivial twist, we get
X_1^ε,nX_0^δ,m: ε 2^-N-1 (εβ^-1)^n .
Next we compute P,nU,m for U = X_1^δ.
For the zero-framed Hopf link we have
2^N X_1^ε,0X_1^δ,0: _2^N X_1^ε ( _(ϕ_X_1^δ)_2^N X_1^ε )
= 2^N ( (_1^+ - _1^-) _1^ε )
= 2^N ε( _1^ε ) = 2^N-1 ,
whence the invariant of X_1^ε,nX_1^δ,m is 2^-1 (εβ^-1)^n (δβ^-1)^m.
Finally,
P_0^ε,nX_1^δ,0: _P_0^ε (^-n_(ϕ_X_1^δ)_P_0^ε )
(<ref>)=
δβ^2 4^N
( ^-n_0^ε∏_j=1^N ^+_j ^-_j )
(*)= δ 2^N - 1 ,
where the calculation for (*) is as in (<ref>), up to an overall factor of ε, together with the observation that from the ribbon element (<ref>) only _0 contributes, which acts as one on _0^ε. Thus we arrive at
P_0^ε,nX_1^δ,m: δ 2^N - 1 (δβ^-1)^m .
§.§.§ Pullback trace
The next remark shows that the invariant of P_,nX_0^±,m is ± 1 times that of the framed unknot nP_, while the invariant of P_,nX_1^±,m is zero.
Consider any link L with (at least) one of its components coloured by P_. In this situation the symplectic fermion example is somewhat special in the following sense.
By <Ref> it is enough to consider the case where the other components are labelled by simple objects. If any component is labelled by X_1^±, we can use that
X_1^± is in the pullback ideal. So we may as well compute the invariant via the
pullback trace on the appropriate endomorphism of X_1^±.
But by <Ref>, the pullback trace vanishes on
projective modules of the larger algebra. Hence to get a non-zero invariant, all other link components have to be decorated by X_0^±. But these objects are transparent in the degree 0 (see <Ref>),
and so just result in overall factors of ±1. Altogether, the invariant of L is given by the invariant of the P_-coloured component times the product of the quantum dimensions of the colours of the other components.
§.§ Torus knot
The torus knot mX for m ∈ 2 [Z]+1 is the braid closure of m
braidings.
Cutting open one strand results in the endomorphism (ξ_m)_X of X, defined as (omitting all tensor products):
(ξ_m)_X := [ X
X
(X X) X
X (X X)_=:F
X (X X)
(X X) X
X
X_=:G] .
Note that ξ_m defines a natural isomorphism of the identity functor and is therefore given by the action of a central element of the quasi-Hopf algebra, which by abuse of notation we also call ξ_m, i.e. (ξ_m)_X = ξ_m.(-). To compute ξ_m in terms of the quasi-Hopf algebra data in (<ref>),
(<ref>) and
(<ref>), first note that the maps F and G in (<ref>) read
F : u ↦∑_i=1^ X v^i _1 . v_i ⊗_2.u
,
G : w^* ⊗ u ⊗ v ↦⟨ w^* |_1 . u ⟩ _2.v ,
where { v_i } is a basis of X and { v^i } the dual basis of X.
For the explicit computation it will be convenient to distinguish m>0 and m<0. Namely,
write |m| = 2n+1 and
for m>0 set ξ^+_n := ξ_2n+1 and for m<0 set ξ^-_n := ξ_-(2n+1).
Then, explicitly,
ξ_n^+
= _2 _1 (^n)_1 _1
_1 _2 (^n)_2 _2
,
ξ_n^-
= _2 (^n)_2 _2 _1
_1 (^n)_1 _1 _2
,
where and are the multiplicative inverses
of and .
§.§.§ Categorical trace
Let us start with =.
Using the explicit formulas for the braiding and its inverse, one immediately sees that
and act as ± on X_0^± X_0^±.
Since on X_0^± also acts as ±, we find that ξ_n^± acts as
, and so [[]]_X_0^±(ξ_n^±) = [[]]_X_0^±(𝕀) = ± 1 as claimed in Table <ref>.
§.§.§ Modified trace on projective ideal
On the projective ideal, we used Mathematica to compute ξ^±_n for small values of
n and N, and then interpolated a (conjectural)
general formula from that.
Explicitly, we did the following, splitting the problem into the even and odd sectors.
On the even sector, the endomorphism spaces of P_0^± have basis ∏_j=1^N
(^+_j)^s_j (^-_j)^t_j with s, t ∈[Z]_2^N such that |s| +
|t| is even.
After multiplying with _0 (which acts as 𝕀 on P_0^±), these elements are central <cit.> and the endomorphism is given by acting with them.
The modified trace only sees the top component, i.e. the coefficient of the basis
vector
with all s_j and t_j equal to 1.
Thus, to compute the link invariant of mP_0^ε, it is
enough to extract the top component of the element ξ_m _0^ε.
Small values – more precisely, N = 1 for 0 ≤ n ≤ 10, N=2 for 0 ≤ n
≤ 5, and N = 3 for n = 0, 1 – suggest the following general formula
ξ_n^±|_Top component,
acting on
P_0^ε
= (± 2 (2n+1))^N _0 ∏_j=1^N ^+_j ^-_j
,
and from this one computes
_P_0^ε ( ξ_m )
= (2m)^N
(_0^ε∏_j=1^N ^+_j ^-_j)
(<ref>)=ε m^N 12β^-2 .
On the odd sector, projective modules are simple and thus ξ_n^± act as scalars.
Small values (N = 1 for 0 ≤ n ≤ 10, and N=2 for 0 ≤ n
≤ 5) suggest the following general formula
ξ^±_n |_acting on X_1^ε
= ε (2n + 1)^N β^± (2 n - 1)
= ε m^N β^m-2 ,
where we used m = ±(2n+1) and (±1)^N = β^±2 - 2.
From this one computes
_X_1^ε ( ξ_m )
= 2^-Nε m^N β^m-2(_1^ε)
(<ref>)=
2^-N-1 m^N β^m-2 .
§.§.§ Pullback trace
Let P_ be the lift of P_0^+(A) defined in Section <ref>.
We compute first how the braiding and the monodromy act on P_ P_.
A helpful lemma, which is not hard to verify, is the following.
Under the assumption that the tensor factors E ⊗ F below eventually act on u ∈ P_ as EXF.u or FXE.u for some X ∈, we have
∏_j=1^2 (x + y ^-_j ω_±^+_j)
≈
x^2
+ xy (1 + a^- b^+) ^-_1 ^+_1
+ xy a^+ b^- ^+_1 ^-_1
Let us first compute ξ_n^+.
Using the formula for the R-matrix from <Ref>, one
finds:
Under the assumption in <Ref>
we have
≈∑_r, s ∈[Z]_2
(-1)^rs(
12^r ^s
+ (1 + a^- b^+)
^r+1^-_1 ^s ^+_1
+ a^+ b^-
^r+1^+_1 ^s ^-_1
)
and for n ∈[N]
^n
≈
+ 2 n (1 + )
^-_1 ^+_1
- 2 n (1 + )
^+_1 ^-_1
.
We have
= 12(
∑_n, m, r, s ∈[Z]_2β^2nm (-1)^rs i^-rn+sm^r _n ^s _m
)
∏_j=1^2 ( - 2 ^-_j ω_- ^+_j)
Lem. <ref>≈12(
∑_r, s ∈[Z]_2
(-1)^rs^r ^s
)
(
- 2 (1 + a^- b^+) ^-_1 ^+_1
- 2 a^+ b^- ^+_1 ^-_1
)
= ∑_r, s ∈[Z]_2
(-1)^rs(
12^r ^s
+ (1 + a^- b^+)
^r+1^-_1 ^s ^+_1
+ a^+ b^-
^r+1^+_1 ^s ^-_1
)
Thus
= _21
≈∑_r, s ∈[Z]_2
(-1)^rs(
12^s
^r
+ (1 + a^- b^+)
^s ^+_1
^r+1^-_1
+ a^+ b^-
^s ^-_1
^r+1^+_1
)
× ∑_m, n ∈[Z]_2
(-1)^mn(
12^m ^n
+ (1 + a^- b^+)
^m+1^-_1 ^n ^+_1
+ a^+ b^-
^m+1^+_1 ^n ^-_1
)
=
12∑_r, s, m, n∈[Z]_2
(-1)^rs + mn[
^m+s^n+r(
12
+ (1 + a^- b^+)
^-_1 ^+_1
+ a^+ b^-
^+_1 ^-_1
)
+
(-1)^m+n^s+m^n+r(
(1 + a^- b^+)
^+_1 ^-_1
+ a^+ b^-
^-_1 ^+_1
)
]
One can show that
_0 _0 ·∑_r, s, m, n∈[Z]_2
(-1)^rs + mn^m+s^n+r
= 4 _0 _0
and
_0 _0 ·∑_r, s, m, n∈[Z]_2
(-1)^rs + mn + m + n^m+s^n+r
= -4 _0 _0 ,
whence
≈
2
[
(
12
+ (1 + a^- b^+)
^-_1 ^+_1
+ a^+ b^-
^+_1 ^-_1
)
-
(
(1 + a^- b^+)
^+_1 ^-_1
+ a^+ b^-
^-_1 ^+_1
)
]
=
+ 2 (1 + )
^-_1 ^+_1
- 2 (1 + )
^+_1 ^-_1
The expression for ^n from the statement now follows by induction.
Then we get
For n ∈[N], and under the assumption that we
evaluate with the modified trace on the module P_, we have
ξ^+_n
≈
2 (2n + 1) (1 + ) ^+_1 ^-_1
By definition of ξ^+_n, we will first compute ^n.
Using the preceding result,
^n
≈∑_r, s ∈[Z]_2
(-1)^rs(
12^r ^s
+ (1 + a^- b^+)
^r+1^-_1 ^s ^+_1
+ a^+ b^-
^r+1^+_1 ^s ^-_1
)
×(
+ 2 n (1 + )
^-_1 ^+_1
- 2 n (1 + )
^+_1 ^-_1
)
≈∑_r, s ∈[Z]_2
(-1)^rs×(
12^r ^s
+ (n (1 + ) + 1 + a^- b^+)
^r+1^-_1 ^s ^+_1
- (n (1 + ) - a^+ b^-)
^r+1^+_1 ^s ^-_1
)
.
By definition,
ξ_n^+
= _2 _1 (^n)_1 _1
_1 _2 (^n)_2 _2
,
and since we will evaluate the right symmetrised cointegral of the smaller algebra
A on _0^+ ξ_n^+, we see that we can also disregard the ^r ^s terms in the above expression for ^n.
Indeed, for these terms, no 's will appear in ξ^+_n _0^+, and so these
terms don't contribute to the final result.
Note now that on the tensor product of even sectors, and from
(<ref>) act as the identity, and also _0 = _0.
Thus
ξ_n^+
≈_1 (^n)_1 _2 (^n)_2
≈∑_r, s ∈[Z]_2
(-1)^rs×(
(n (1 + ) + 1 + a^- b^+)
^r+1^-_1 ^s ^+_1
- (n (1 + ) - a^+ b^-)
^r+1^+_1 ^s ^-_1
)
≈∑_r, s ∈[Z]_2
(-1)^rs + s
(2n + 1) (1 + )
^r+s^+_1 ^-_1
.
But now
_0 ∑_r, s ∈[Z]_2
(-1)^rs + s^r+s
= 2 _0
,
and so the claim follows.
As a result of all of the computations, we finally find
that the A-based invariant of the P_-coloured
torus knot 2n+1P_ is
[^A]_P_ (_0^+ ξ^+_n)
(<ref>)=
2 (2n + 1) (1 + )
4/1 - i β^2 ^A,r_P_0^+(A)(^+_1 ^-_1)
(<ref>)= 2 (2n + 1) (1 + ) .
Completely analogous computations for the inverse braiding and the associated monodromy
show that ξ_n^- ≈ - ξ_n^+, where ≈ again means equality up to terms that do not contribute when evaluating with the modified trace.
§ LENS SPACE INVARIANTS FOR SYMPLECTIC FERMIONS
To set up the computation, we will first make some observations
which simplify the problem and do not depend on the specific case of symplectic
fermions, and then we proceed with the actual computation.
§.§ Lens space invariants for quasi-Hopf algebras
Here we will rewrite the categorical expression in <Ref> in the case = [H] for a unimodular and twist-nondegenerate (recall (<ref>)) ribbon quasi-Hopf algebra H.
We start with the morphism f(a) : → from (<ref>). Recall from
(<ref>)
the isomorphism ρ∘ψ(𝕀_) →(, ). The natural endomorphisms of the identity functor are in bijection (as an algebra) with the centre of H.
If we call this last bijection ξ, then altogether
Z(H) (𝕀_) (, ) .
In (<ref>) we transported the endomorphisms _∘ (-), _∘ (-) of (, ) to endomorphisms _, _ of (𝕀_).
We denote their further transport to Z(H) by _Z and _Z. Explicit expressions for _Z,_Z :Z(H) → Z(H) in terms of quasi-Hopf data can be found in <cit.>.[
The formulas in <cit.> were computed in the factorisable case.
However, the same computation also works without assuming factorisability, resulting in the same expressions for the endomorphisms _Z and _Z of Z(H).
But they do not give a projective -action unless H is factorisable.
]
For example, _Z(z) = .z, while the expression for _Z is more involved. As noted in <Ref> (3), for an H-module V we set
ϕ_V = (ρ∘ψ)(χ_V), and we have χ_ =.
We denote V := ξ^-1(ϕ_V) ∈ Z(H), so that (<ref>) maps
V⟼ϕ_V ⟼χ_V .
See <cit.> for an explicit quasi-Hopf algebra formula for V.
Let us write
f(a) := (ρ∘ψ∘ξ)^-1(f(a)) ∈ Z(H).
Using the above notation, we obtain
f(a) =
(
∏_i=n^1
_Z^a_i∘_Z)
()
.
Given a tensor ideal ⊂, a modified trace on and an object P ∈, we obtain a (possibly degenerate) symmetric and invariant bilinear form on Z(H) via
(z, z')_P
= _P(z z') .
We can now reformulate the categorical expressions from <Ref> as follows:
In terms of quasi-Hopf data, the invariants of 𝔏_x(α, P), x ∈{*,∘} from <Ref> read
(𝔏_x(α, P))
= ^-σ^-1 -n
(α,ζ_x)_P
where α∈ Z(H) represents the natural transformation α of the identity functor and
ζ_* = ⟨ | f(a)⟩· ,
ζ_∘ =
⟨ |
S(_1 _1)
f(a)_2 _1 _2
⟩·_3 _2 _3
.
Here, ∈ H^* is the integral of , see <Ref>.
Let z ∈ Z(H) and define E(z) → by E(z) ∘_X = _X ∘ (𝕀ξ(z)_X).
The dinatural transformation _X X^* ⊗ X → of is explicitly given in (<ref>).
From this it is easy to check that for f ∈ = H^* we have
E(z)(f) = f((-) · z).
Using the definition of the coproduct Δ_ (see <cit.>), one can verify the identity (E(z) ⊗𝕀) ∘Δ_ = (𝕀⊗ E(z)) ∘Δ_.
The morphism g := ψ(ξ(z)) ∈(,) = H^** is defined by the dinatural transformation
g ∘_X = _X ∘ (𝕀⊗ξ(z)_X). Comparing to the definition of the counit and to that of E(z) above, we get ψ(ξ(z)) = ∘ E(z).
The isomorphism ρ : (,) →(,) acts on g as <cit.>
ρ(g) = [ ] .
But (g ⊗𝕀) ∘Δ_ = ( ⊗𝕀) ∘ (E(z) ⊗𝕀) ∘Δ_
= ( ⊗ E(z)) ∘Δ_. Hence altogether
ρ(ψ(ξ(z))) = E(z) ∘Λ = ⟨ | (-) · z ⟩ .
After these preparations, we can compute ζ_* and ζ_∘:
ζ_*: We have
∘ f(a)
(<ref>)=
∘ E(f(a)) ∘Λ
=
⟨ | f(a)⟩ ,
where the last equality follows from the explicit form of
∈(,)=H^**,
which acts by evaluating an element of H^* on (see <cit.>).
ζ_∘:
We need to compute _P(α_P ∘ Q_P ∘ (f(a) 𝕀_P)), where Q 𝕀_𝕀_ is the natural transformation defined in
(<ref>).
It is not hard to see that the action of Q_V : V → V on f v ∈ V is given by
Q_V (f v)
=
⟨
f | S(_1 _1) _2 _1 _2
⟩·_3 _2 _3 . v
.
Substituting f = f(a) = E(f(a)) ∘Λ = ⟨ | (-) ·f(a)⟩ and using that f(a) is central gives the expressions as in the statement of the proposition.
§.§ The central element f(a) for symplectic fermions
After rewriting the lens space invariants for a general
unimodular and twist-nondegenerate
ribbon quasi-Hopf algebra in <Ref>, in this section we specialise to the symplectic fermion quasi-Hopf algebra H = (N, β), and we will work out the elements f(a)
introduced in (<ref>).
Recall from (<ref>) our normalisation of the integral :
→ in =,
= β^6 2^-(N-1) ^*
,
where we abbreviated = ∏_j=1^N ^+_j ^-_j.
We will need the internal characters V from (<ref>) for (N, β).
For V = X_0^±, X_1^±, they were given in (<ref>). And since V only depends on the class of V in the Grothendieck ring, for V = P_0^± we have
P_0^± = 2^2N-1X_0^+ + 2^2N-1X_0^- .
Recall from <cit.> that the centre Z(H) of H can be decomposed as
Z(H) = Z_P ⊕ Z_Λ
where the two summands have bases
Z_P : {P_0^+, X_1^+, X_1^-} ,
Z_Λ : {_0 ∏_j=1^N (^+_j)^s_j (^-_j)^t_j | s, t ∈[Z]_2^N, |s| + |t| ∈ 2[Z]
} .
The dimension of the centre is Z(H) = Z_P + Z_Λ = 3 + 2^2N-1.
We fix
pq = [a_n; a_n-1, …, a_1]
subject to the positivity condition (<ref>).
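For orientation, this continued fraction convention is easy to evaluate explicitly. The following Python sketch is our illustration (the helper name cf is ours, not from the original); it evaluates [a_n; a_n-1, …, a_1] in exact rational arithmetic, with the coefficients listed from the innermost entry a_1 outwards.

from fractions import Fraction

def cf(coeffs):
    # Negative continued fraction [a_n; a_{n-1}, ..., a_1] with the
    # convention [b; rest] = b - 1/[rest]; coeffs = (a_1, ..., a_n).
    val = Fraction(coeffs[0])
    for b in coeffs[1:]:
        val = Fraction(b) - 1 / val
    return val

# Example: (a_1, a_2, a_3) = (2, 3, 2) gives
# [2; 3, 2] = 2 - 1/(3 - 1/2) = 8/5, i.e. p = 8 and q = 5.
print(cf((2, 3, 2)))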
The central element f(a) = ( ∏_i=n^1 _Z^a_i∘_Z) () is given by
f(a) =
f(a)|_Z_P +
f(a)|_Z_Λ ,
where
f(a)|_Z_P =
2^-2N(
c_n^0 P_0^+ + c_n^+ X_1^+ + c_n^- X_1^-) ,
f(a)|_Z_Λ =
2^N β^6n + 2∑_s ∈[Z]_2^N
p^|s|( q/2)^N - |s|_0 ∏_j=1^N ( ^+_j ^-_j )^s_j .
The coefficients c_i^0,± are determined recursively by
c_i^0 = 2^-N ( c_i-1^+ - c_i-1^- ) ,
c_i^± = 1/2 (± 1)^a_iβ^-a_i
(c_i-1^+ + c_i-1^- ± 2^N c_i-1^0) ,
for 1 ≤ i ≤ n, with initial values c_0^0 = 1 and c_0^± = 0.
The proof of this proposition will be given in the remainder of this subsection.
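Before turning to the proof, we note that the recursion is immediate to evaluate numerically. The following Python sketch is our illustration: the function name coeffs_c is ours, and β is chosen as exp(iπN/4), one particular solution of β^4 = (-1)^N.

import cmath

def coeffs_c(a, N):
    # Iterate the recursion for (c_i^0, c_i^+, c_i^-), starting from
    # c_0^0 = 1 and c_0^+ = c_0^- = 0; 'a' lists (a_1, ..., a_n).
    beta = cmath.exp(1j * cmath.pi * N / 4)   # one root of beta^4 = (-1)^N
    c0, cp, cm = 1.0, 0.0, 0.0
    for ai in a:
        c0, cp, cm = (
            2.0**(-N) * (cp - cm),
            0.5 * beta**(-ai) * (cp + cm + 2.0**N * c0),
            0.5 * (-1)**ai * beta**(-ai) * (cp + cm - 2.0**N * c0),
        )
    return c0, cp, cm

print(coeffs_c([2, 3, 2], N=1))   # (c_n^0, c_n^+, c_n^-) for n = 3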
The decomposition (<ref>) respects the
-action on Z(H), i.e. Z_P and Z_Λ are subrepresentations.
The central element
=X_0^+
from (<ref>) splits according to the direct sum decomposition (<ref>) as
= 2^-2NP_0^+ + 2^N β^2 _0 .
In the ordered basis of Z_P given above, the S- and T-generators act via the
matrices (see <cit.>)
_Z_P =
[ 0 2^-N -2^-N; 2^N-1 1/2 1/2; -2^N-1 1/2 1/2 ] , _Z_P =
[ 1 0 0; 0 β 0; 0 0 -β ] ,
and it is not too hard to see that the coefficients in
(
∏_i=n^1 _Z_P^a_i∘_Z_P)
(P_0^+)
= c_n^0 P_0^+ + c_n^+ X_1^+ + c_n^- X_1^- ,
satisfy the recursive relation
given in (<ref>).
For later use we note that the recursion relation (<ref>) implies that c_i^0 ∈{0}∪{β^j | j ∈[Z] } for all i. To see this, set g_i = c_i^0 and h_i = 2^-N(c_i^+ + c_i^-). Then g_0 = 1, g_1=0, h_0 = 0, and the recursion now reads:
g_i = (c^+_i-1 - c^-_i-1)/2^N = β^-a_i-1·{ g_i-2 if a_i-1 is even ; h_i-2 if a_i-1 is odd } ,
h_i = β^-a_i·{ h_i-1 if a_i is even ; g_i-1 if a_i is odd } .
Thus the condition g_i, h_i ∈{0}∪{β^j | j ∈[Z] } is satisfied by the initial conditions and preserved by the recursion.
To compute the action of ∏_i=n^1 _Z^a_i∘_Z
on _0, let 1 ≤ j ≤ N and let Z_,
j be the vector space with basis _0 ^+_j ^-_j and _0.
We consider it as an algebra in the obvious way.
Denote by Z_ the subspace of Z_Λ generated (as an
algebra with unit _0) by _0 ^+_j ^-_j, 1 ≤ j ≤ N.
There is an algebra isomorphism
κ⊗_j=1^N Z_, j→ Z_
, x_1 … x_N ↦ x_1 ·…· x_N
.
Then by <cit.>, the -action on Z_Λ restricts
to an action on Z_, which can be described as
_Z_Λ|_Z_
= κ∘( β^2 ·⊗_j=1^N S ) ∘κ_Z_Λ|_Z_
= κ∘( ⊗_j=1^N T ) ∘κ ,
where S and T are the matrices
S = [ 0 2; -1/2 0 ] and T = [ 1 2; 0 1 ]
in our chosen basis {_0 ^+_j ^-_j, _0} of Z_, j for
each j.
Let b_1, …, b_n be non-zero integers, n ≥ 1.
Then
(
∏_i=n^1
_Z_Λ^b_i∘_Z_Λ)
(_0 )
=
β^6n(
1/2∏_j=1^n-1
[b_j; b_j-1, …, b_1]
)^N
∑_s ∈[Z]_2^N(
2 [b_n; b_n-1, …, b_1]
)^|s|_0
∏_j=1^N
( ^+_j ^-_j )^s_j .
We claim first that the left hand side of (<ref>) equals
(-1)^Nnβ^2n
2^-N(
∏_j=1^n-1
[b_j; b_j-1, …, b_1]
)^N
∏_j=1^N
(
_0 + 2 [b_n; b_n-1, …, b_1] _0 ^+_j ^-_j
)
,
and we will proceed now to prove this via induction.
Clearly _0 ∈ Z_.
For any m ∈[Z], we have
T^m S =
[ -m 2; -12 0 ] .
This implies the base case for induction on n, i.e. we have
( ^b_1_Z_Λ∘_Z_Λ)
(_0 )
=
(-1)^N
β^2
2^-N∏_j=1^N ( _0 + 2 b_1 _0 ^+_j ^-_j )
.
Assume now that it is true up to n.
Then
(
(-1)^Nnβ^2n
2^-N(
∏_j=1^n-1
[b_j; b_j-1, …, b_1]
)^N
)(
∏_i=n+1^1
_Z_Λ^b_i∘_Z_Λ)
(_0 )
=
(
_Z_Λ^b_n+1∘_Z_Λ)
∏_j=1^N
(
_0 + 2 [b_n; b_n-1, …, b_1] _0 ^+_j ^-_j
)
=
β^2
∏_j=1^N
(
2 _0 ^+_j ^-_j
+ 2 [b_n; b_n-1, …, b_1]
(
- b_n+1_0 ^+_j ^-_j
- 1/2_0
)
)
(*)=
(-1)^N β^2
( [b_n; b_n-1, …, b_1] )^N
∏_j=1^N
(
_0 + 2 [b_n+1; b_n, …, b_1] _0 ^+_j ^-_j
)
,
as claimed.
In step (*) we used that by definition of the continued fraction in (<ref>) we have
[b_n+1; b_n, …, b_1]
= b_n+1 - 1/[b_n; b_n-1, …, b_1] .
Multiplying out the product
and using β^4 = (-1)^N
yields (<ref>).
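As a cross-check of the lemma, the per-tensor-factor content of the claim can be verified in exact rational arithmetic. The following Python sketch is our illustration (it reuses the cf helper from above and assumes the per-step operator ordering T^b_i ∘ S used in the proof): starting from the basis vector e_1, i.e. one tensor factor of _0 , it compares the iterated matrix action with the per-factor prediction, up to the global factor β^2n.

from fractions import Fraction as F

S2 = [[F(0), F(2)], [F(-1, 2), F(0)]]        # the matrix S above

def Tpow(m):
    return [[F(1), F(2 * m)], [F(0), F(1)]]  # T^m

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

b = (3, -2, 5)                  # any non-zero framings b_1, ..., b_n
v = [F(1), F(0)]                # e_1: one tensor factor of the start vector
for bi in b:
    v = apply(Tpow(bi), apply(S2, v))   # one step: S first, then T^{b_i}

n = len(b)
pref = F((-1)**n, 2)
for j in range(1, n):
    pref *= cf(b[:j])           # product of [b_j; b_{j-1}, ..., b_1], j < n
expected = [pref * 2 * cf(b), pref]
print(v == expected)            # -> True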
Because of the assumptions on the continued fraction,
<Ref> gives
∏_j=1^n-1 [a_j; a_j-1, …, a_1]
= q/p∏_j=1^n [a_j; a_j-1, …, a_1] = q .
The formula then follows from <Ref>.
§.§ Categorical trace: Lyubashenko invariant
By (<ref>), it remains to compute ∘ f(a) which we know from (<ref>) to be equal to ⟨ | f(a)⟩ (recall that = for (N, β)).
Since by (<ref>), is proportional to ^*, it can only see part of
f(a) which lives in Z_Λ.
Using <Ref> we can simply read off
⟨ | f(a)⟩ =
2
β^6n∑_s ∈[Z]_2^N
p^|s|( q/2)^N - |s|⟨^* |
_0 ∏_j=1^N ( ^+_j ^-_j )^s_j⟩
= β^6n p^N
Combining this with = 1 and = β^-2 from <Ref>, altogether (<ref>) gives
(𝔏(p, q))
= p^N
.
§.§ Modified trace: Renormalised Lyubashenko invariant
Here we compute the renormalised Lyubashenko invariant (𝔏_x(α, P)), x ∈{ *,∘} with the modified trace on the projective ideal.
To do so, according to <Ref> we need to compute the central elements ζ_x and the pairings (-,-)_U.
We start with the explicit expressions for ζ_x for which we will need two preliminary lemmas. For z ∈ Z() write
F(z) = ⟨ |
S(_1 _1)
z _2 _1 _2
⟩·_3 _2 _3 .
Note that F(f(a)) = ζ_∘.
F(z) = ⟨ | z _1 ⟩·_2
From the expression (<ref>) for the coassociator of , we
immediately see that the claim is true if z is in the even sector, and
therefore we must only show it for z = _1 z.
The first and second tensor factor of ^±1 in F(z) are then multiplied by _1, and we have
(_1 ⊗_1 ⊗) ·^±1
= _1 ⊗_1 ⊗(
_0 ^N
+ _1 _±) .
Inserting this simplified expression in F(z) gives
F(z) =
⟨ | z _1 ⟩(
_0 ^N
+ _1 _+
)
_2
(
_0 ^N
+ _1 _-
)
=
⟨ | z _1 ⟩(
_0 ^N _2 ^N
+ _1 _+ _2 _-
)
,
where in the second step we used that _0, _1 are central.
Since z is central, it in particular commutes with and hence contains an even number of s. Furthermore, ∝^* and so the amount of s in the monodromy factors that contribute to the above expression is even as well.
It follows that the non-zero summands _2 commute with and therefore also with ^N and _±.
Since ^2N_0 = _0 and _+_- =, the claim follows.
Next we compute F(z) for some special values of z:
We have
F(X_1^±)
=
± 2^-NP_0^+
+ 1/2 (X_1^+ + X_1^-) ,
F(P_0^+)
= 2^N - 1 (X_1^+ - X_1^-) ,
F(_0 ∏_j=1^N (^+_j ^-_j)^s_j)
= β^2 2^N - 2 |s| (-1)^|s|_0 ∏_k=1^N (^+_k ^-_k)^1 - s_k for s ∈[Z]_2^N .
Recall from
<Ref> that the monodromy matrix of can be written
as = ∑_I ∈ X g_I f_I, where the index set X is
[Z]_2 ×[Z]_2 ×[Z]_2^N ×[Z]_2^N, and for
I = (a,b,c,d) ∈ X, f_I and g_I are as in
(<ref>).
Substituting (<ref>) and (<ref>), one computes
F(X_1^±)
=
±β^6 2^2∑_I ∈ X f_I ×⟨^* | _1^± g_I ⟩
(a)=
±β^6 2^2∑_b,c
(- β^2)^b 2^2 |c| (-1)^b |c|·_b ∏_k=1^N (^-_k ^+_k)^c_k×⟨^* |
_1^±^b ∏_k=1^N (^+_k ^-_k)^c_k⟩
(b)=
±β^6 2
∑_b,c
(- β^2)^b 2^2 |c| (-1)^b |c|·_b ∏_k=1^N (^-_k ^+_k)^c_k
×(
δ_b, 0δ_c, 1⟨^* | _1 ⟩± i
δ_b, 1⟨^* |
_1
∏_k=1^N
(
δ_c_k, 0
- (2 δ_c_k, 0 + δ_c_k, 1) ^+_k ^-_k
)
⟩)
(c)=
±β^6
(
2^2 N_0 ∏_k=1^N ^-_k ^+_k
∓ i (-1)^N β^2
·_1
∏_k=1^N
∑_c_k = 0^1
(-4)^c_k( 2 δ_c_k, 0 + δ_c_k, 1)
(^-_k ^+_k)^c_k)
(d)=
±β^6 (-1)^N
2^2 N_0
- i 2^N
_1 ∏_k=1^N ( - 2 ^+_k ^-_k ) .
In step (a) we used that _1^±_a = δ_a,1_1^±, that
the evaluation against ^* can be non-zero only if each ^+_k is paired with ^-_k, resulting in d=c,
and that _1 ^+_k ω_- ^-_k ω_- = -_1 ^+_k^-_k, giving an additional factor (-1)^|c|.
In step (b) we substituted the explicit form of _1^± in (<ref>) and used
_1( - 2 ^+_k ^-_k) (^+_k ^-_k)^c_k =
_1 (δ_c_k, 0
- (2 δ_c_k, 0 + δ_c_k, 1) ^+_k ^-_k).
For step (c) note that only the product with all ^+_k ^-_k can be non-zero against ^*, resulting in a factor
∏_k=1^N ( 2 δ_c_k, 0 + δ_c_k, 1).
In addition, we replaced ∑_c ∏_k=1^N by ∏_k=1^N ∑_c_k=0^1. The extra factor of (-1)^N in both summands in step (d) arises from reordering the s.
To write this in our preferred basis of the centre, note from (<ref>) and (<ref>) that
P_0^+ = 2^3Nβ^2 _0 ,
X_1^+ + X_1^- = 2^N+1( _1^+ - _1^- )
= -2^N+1 i _1 ∏_j=1^N ( - 2 ^+_j ^-_j)
,
X_1^+ - X_1^- = 2^N+1( _1^+ + _1^- )= 2^N+1_1
.
From this we read off
F( X_1^± )
=
± 2^-NP_0^+
+ 1/2 (X_1^+ + X_1^-)
,
as claimed.
For the remaining two equalities, first note that for z ∈ Z() in the even sector, i.e. z = _0 z, we have
F(z)
=
β^6 2^-N+1∑_I ∈ X f_I ⟨^* | z g_I ⟩
=
β^6 2^-N+1∑_b, c
2^2 |c|
(-1)^b |c|⟨^*
| z
^b ∏_k=1^N (^+_k ^-_k)^c_k⟩_b ∏_k=1^N (^-_k ^+_k)^c_k .
This immediately yields
F( P_0^+ )
= 2^2N_1
= 2^N - 1 (X_1^+ - X_1^-)
and
F(_0 ∏_j=1^N (^+_j ^-_j)^s_j)
= β^6 2^-N∑_b, c
2^2 |c|⟨^*
|
∏_j=1^N (^+_j ^-_j)^s_j + c_j⟩_0 ∏_k=1^N (^-_k ^+_k)^c_k
=
β^6
2^N - 2 |s|
(-1)^N - |s|_0
∏_k=1^N (^+_k ^-_k)^1 - s_k .
With these preparations, we can compute ζ_*, ζ_∘ from <Ref>:
We have
ζ_* = ⟨ | f(a)⟩·
= β^6n p^N ·
ζ_∘ = F(f(a))
=
2^-2N(
c_n+1^0 P_0^+
+ c_n+1^+ X_1^+
+ c_n+1^- X_1^-)
+
β^6n
p^N
∑_s ∈[Z]_2^N(- 2 qp)^|s|_0
∏_j=1^N
(^+_j ^-_j)^s_j .
In the expression for ζ_∘ we have extended the recursive definition of c_i^0,± in (<ref>) to n+1 by setting a_n+1 = 0.
The expression for ζ_* was already computed in <Ref>. To get the result for ζ_∘, recall from <Ref> the decomposition of f(a) into summands in Z() = Z_P ⊕ Z_Λ.
Using the formulas from the preceding
prop:central_element_of_natural_transformation_SF, we get
F( f(a)|_Z_Λ)
Prop. <ref>=
2^N β^6n + 2∑_s ∈[Z]_2^N
p^|s|( q/2)^N - |s|
F(_0 ∏_j=1^N ( ^+_j ^-_j )^s_j)
Lem. <ref>=
2^N β^6n + 4_0
∏_j=1^N
∑_s_j = 0^1
p^s_j( q/2)^1 - s_j
2^1 - 2 s_j (-1)^s_j( ^+_j ^-_j )^1 - s_j
Lem. <ref>=
2^N β^6n + 4_0
∏_j=1^N
(
q ^+_j ^-_j
- p 2^-1)
Lem. <ref>=
β^6n
p^N
∑_s ∈[Z]_2^N(- 2 q/p)^|s|_0
∏_j=1^N
(^+_j ^-_j)^s_j .
On the projective centre Z_P, we find
F( f(a)|_Z_P)
Prop. <ref>=
2^-2N(
c_n^0 F(P_0^+) + c_n^+ F(X_1^+) + c_n^- F(X_1^-)
)
Lem. <ref>=
2^-2N(
2^-N( c_n^+ - c_n^- )
P_0^+
+
(
c_n^0 2^N-1
+ 1/2 (c_n^+ + c_n^-)
)
X_1^+
+
(
- c_n^0 2^N-1
+ 1/2 (c_n^+ + c_n^-)
)
X_1^-)
.
The coefficients match the recursive definition of c^0_n+1, c_n+1^± from
(<ref>) under the convention that a_n+1=0. Adding the two expressions gives the claimed result for ζ_∘.
This completes the computation of ζ_* and ζ_∘ and we now turn to the pairings (-,-)_P from (<ref>).
Let be the modified trace associated with the symmetrised cointegral
in (<ref>), which was explicitly given by
(^m ) = m (β^2 +
i^m) where = (-1)^N 2^-N .
Fix an isotypic decomposition = ⊕_U n_U U with
idempotents e_U and where the sum runs over distinct irreducibles U.
We abbreviate
{z, z'}_U:=(z, z')_n_U P_U.
In terms of the symmetrised cointegral, the pairing is given by
{z, z'}_U
= ⟨ | e_U z z' ⟩ .
Explicitly, n_X_0^±=1 and n_X_1^±=2^N, so that
{z, z'}_X_0^± = (z, z')_P_0^± , {z, z'}_X_1^± = 2^N (z, z')_X_1^± .
The next lemma gives the value of the pairing on the basis (<ref>) of Z().
* For U = X_0^± we have
{_0, P_0^+}_X_0^±
= 2^2N - 1
and
{_0 ∏_j=1^N (^+_j)^r_j (^-_j)^s_j
, _0 ∏_j=1^N (^+_j)^t_j (^-_j)^u_j}_X_0^± =
±εδ_r + t, 1δ_s + u, 1
2^-N - 1β^-2 ,
where ε = ε(r,s,t,u) is a sign, which
is +1 if r = s or t = u.
For all other combinations of basis elements in (<ref>), the pairing is zero.
* For U = X_1^±, the only non-zero values of the pairing are
{X_1^±, X_1^±}_X_1^± = ± 2^2N+1
Note first of all that the map
z ↦{z, -}_U
is zero on the odd part of z if
U is from the even sector.
The same is true when swapping even and odd.
(1)
For U = X_0^± we have e_U = _0^± as in (<ref>).
From (<ref>) we see that P_0^+ = 2^3Nβ^2 _0 already contains as a factor and so the only non-zero pairing can be with the basis element _0. We compute
{_0, P_0^+}_X_0^± =
2^3Nβ^2
⟨ | _0^±⟩
= 2^2N - 1 .
Let now r, s, t, u ∈[Z]_2^N with |r| + |s| and |t| + |u| even.
We have
(
_0 ∏_j=1^N (^+_j)^r_j (^-_j)^s_j)
·(
_0 ∏_j=1^N (^+_j)^t_j (^-_j)^u_j)
=
ε_0 ∏_j=1^N (^+_j)^r_j + t_j (^-_j)^s_j + u_j ,
where ε is a sign depending on r, s, t, u, which we do not compute
explicitly.
If r = s or t = u, this sign is +1 as a pair ^+_j ^-_j commutes with everything in the even sector.
This shows that
{_0 ∏_j=1^N (^+_j)^r_j (^-_j)^s_j
, _0 ∏_j=1^N (^+_j)^t_j (^-_j)^u_j}_X_0^±
=
εδ_r + t, 1δ_s + u, 1⟨ | _0^±⟩
(<ref>)=
±εδ_r + t, 1δ_s + u, 1
2^-N - 1β^-2 .
(2)
For the odd sector, recall from (<ref>) that X_1^± = ± 2^N + 1_1^±, and since
_1^± are orthogonal, we only need to compute:
{X_1^±, X_1^±}_X_1^± =
2^2N+2⟨ | _1^±⟩(<ref>)=
±
2^2N+1
as claimed.
Now we are ready to provide the invariants for all natural endomorphisms of the identity functor,
acting on the non-contractible loop in L(p, q) as given in (<ref>).
We have, for the basis elements in (<ref>) and t ∈[Z]_2^N,
(𝔏_∘(_0 ∏_j=1^N (^+_j ^-_j)^t_j,
P_0^±))
=
1/2 c_n+1^0 β^2n±1/2β^2 q^N
, t = 0,
± (-1)^|t|β^2
2^- |t| - 1 q^N - |t| p^|t|
, |t| > 0
(𝔏_∘(P_0^+, P_0^±))
= 2^2N-1 p^N
(𝔏_∘(X_1^±, X_1^±))
= ± 2^1-Nβ^2 n c^±_n+1 .
Note that combining the first formula in the theorem above for t=0 with <Ref> results in the invariant (𝔏_∘(𝕀, P_0^±)) as stated in <Ref>.
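For concrete data these invariants are finite numerical expressions. The following Python sketch is our illustration (reusing coeffs_c and cmath from above, and implementing the stated convention a_n+1 = 0): it evaluates c_n+1^± and the invariants for the loop coloured by X_1^±.

def invariants_X1(a, N):
    # Extend the framings by a_{n+1} = 0 and read off c_{n+1}^{0,+,-};
    # then (L_o(X_1^pm, X_1^pm)) = pm 2^{1-N} beta^{2n} c_{n+1}^pm.
    c0, cp, cm = coeffs_c(list(a) + [0], N)
    n = len(a)
    beta = cmath.exp(1j * cmath.pi * N / 4)
    return (+2.0**(1 - N) * beta**(2 * n) * cp,
            -2.0**(1 - N) * beta**(2 * n) * cm)

print(invariants_X1([2, 3, 2], N=1))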
Recall from <Ref> that
(𝔏_∘(α, P)) = ^-σ^-1 -n
(α,ζ_∘)_P,
with
σ=n by (<ref>) and
ζ_∘ is given in <Ref>.
From <Ref> we have = 1 and = β^-2 so the surgery coefficient of 𝔏_∘ is β^2σ.
Let t ∈[Z]_2^N with |t| ≥ 1.
Using <Ref>, we compute
{_0 ∏_j=1^N (^+_j ^-_j)^t_j, ζ_∘}_X_0^±
=
β^6n p^N
∑_s ∈[Z]_2^N(- 2 qp)^|s|{_0 ∏_j=1^N (^+_j ^-_j)^t_j , _0 ∏_j=1^N (^+_j ^-_j)^s_j}_X_0^±
=
±β^6n + 6 p^N
2^-N-1∑_s ∈[Z]_2^N(- 2 qp)^|s|δ_s,1 - t
=
±β^6n + 6 p^N
2^-N-1(- 2 qp)^N - |t| = ±
(-1)^|t|β^6n + 2
2^- |t| - 1
q^N - |t|
p^|t| .
If |t| = 0, then
{_0,ζ_∘}_X_0^± =
2^-2N c_n+1^0
{_0,
P_0^+}_X_0^±
+
β^6n
(-2q)^N
{_0,
_0 }_X_0^±
=
2^-2N c_n+1^0
2^2N - 1±β^6n
(-2q)^N
2^-N - 1β^2 (-1)^N
= 1/2 c_n+1^0 ± 1/2 β^6n+2 q^N .
Finally, we have
{P_0^±,ζ_∘}_X_0^± =
β^6n
p^N
∑_s ∈[Z]_2^N(- 2 qp)^|s|(
P_0^±,
_0
∏_j=1^N
(^+_j ^-_j)^s_j}_X_0^±
=
β^6n
p^N
{P_0^±,
_0
}_X_0^± = 2^2N-1β^6n p^N
, and
{X_1^±,ζ_∘}_X_1^± =
± 2 c_n+1^± .
Combining with (<ref>) to translate {-,-}_U into (-,-)_P_U results in the statement of the theorem.
Let us discuss a quick sanity check of the formulas above.
In the category of modules over a factorisable (or, more generally, unimodular)
ribbon quasi-Hopf algebra H, there is a natural transformation α𝕀_𝕀_
such that α_P_ is non-zero and factors through , cf. <Ref>.
Indeed, if c is an integral in H, then it is central by unimodularity, and thus
acting by it yields a natural transformation.
In the notation of <Ref>, we must then necessarily have
(𝔏_*(c, )) =
(𝔏_∘(c, )).
Indeed, this can be checked directly.
Let c be the integral of corresponding to with
normalisation such that ∘ = 𝕀_.
By <cit.>, it is explicitly given by
c = 2^N+1β^2 _0^+ (<ref>)= 2^-2NP_0^+ + 2^N β^2 _0 .
We saw in the proof of <Ref> that _P_0^+(c) = 1.
On the one hand, combining (<ref>) and (<ref>) gives
(𝔏_*(c, P_0^+))
= p^N
.
On the other hand, using <Ref> and the expansion of c in the basis of the centre we get
(𝔏_∘(c, P_0^+))
=
2^-2N (𝔏_∘(P_0^+, P_0^+))
+ 2^N β^2
(𝔏_∘(_0 , P_0^+))
=
1/2 p^N
+ 1/2 β^2 (-1)^N β^2 p^N
= p^N
,
as expected.
§.§ Pullback trace
Finally, we consider H = (2,β) and A as in <Ref>, and treat the
invariants for lens spaces with one loop coloured by the object P_ from the pullback
ideal (cf. <Ref>).
We start by computing the values of
[^A]_P_(z) for z from the
11-dimensional centre of H
(recall (<ref>)),
and where = [ a^- a^+; b^- b^+ ].
The only non-zero values of [^A]_P_(z) for z an element from the
canonical basis (<ref>) of Z(H) are
z ∈ Z(H) | _0 _1^+ _1^- | _0 _2^+ _2^- | _0 _1^γ_2^ε
[^A]_P_(z) | 1 |  | δ_γ, + a^ε - δ_γ, - b^ε ,
where γ, ε∈{+, -}.
Using <Ref>, we compute
^A,r(_0^+ _1^- _1^+)
= 1/4∑_m=0^3 ^A,r(^m _1^- _1^+)
= 1/4 (1 - i β^2)
,
so that in our chosen normalisation (<ref>), the pullback
trace at P_ equals 1 on _0^+ ^+_1 ^-_1.
From <Ref>, we know that _2^+ _2^- acts
on P_ as ·_1^+ _1^-.
Similarly, from (<ref>) one finds for γ, ε∈{+, -} and p ∈ P_ that
_1^γ_2^ε p
=
(δ_γ, + a^ε - δ_γ, - b^ε) ·_1^+ _1^- p.
Likewise, _1^+ _1^- _2^+ _2^- is seen to act as 0, whence the
trace vanishes for P_0^+.
Finally, since P_ is purely even, X_1^± act as zero as well.
As above, we have a symmetric bilinear pairing (z, z')_P_ := [^A]_P_
(zz') for z, z' ∈ Z(H).
Then <Ref> shows that the only non-zero values
are
(_0, _0 _1^+ _1^-)_P_
= 1,
(_0, _0 _2^+ _2^-)_P_
= ,
and
(_0, _0 _1^γ_2^ε)_P_
= δ_γ, + a^ε - δ_γ, - b^ε .
The invariants
[^A] (𝔏_∘(z, P_))
for basis elements
z
of Z(H) as in (<ref>) are:
z | [^A] (𝔏_∘(z, P_))
P_0^+, X_1^± | 0
_0 | - 2 p q (1 + )
_0 _j^+ _j^- | p^2 (δ_j, 1 + δ_j,2)
_0 _1^γ_2^ε | p^2 (δ_γ, + a^ε - δ_γ, - b^ε)
_0  | 0
The surgery coefficient is β^2σ as in the proof of <Ref>.
By <Ref> we need to compute (z,ζ_∘)_P_. The zero entries in the table are immediate from <Ref>.
Using <Ref>, we first compute
(
_0, ζ_∘)_P_ =
β^6n
p^2
∑_s ∈[Z]_2^2(- 2 q/p)^|s|(
_0,
_0 ∏_j=1^2 (^+_j ^-_j)^s_j)_P_
=
-
β^6n
2 p q
(
_0,
_0
^+_1 ^-_1
+
_0
^+_2 ^-_2
)_P_
=
-
2
β^6n
p q
(1 + )
.
Since σ = n, β^2 σ + 6n = 1.
Similarly, we have
(
_0 _j^+ _j^-,
ζ_∘)_P_ =
β^6n
p^2
∑_s ∈[Z]_2^2(- 2 q/p)^|s|(
_0 _j^+ _j^-,
_0 ∏_j=1^2 (^+_j ^-_j)^s_j)_P_
=
β^6n
p^2
(
_0 _j^+ _j^-,
_0
)_P_
=
β^6n
p^2
(δ_j, 1 + δ_j,2)
.
A completely analogous computation shows
(
_0 _1^γ_2^ε,
ζ_∘)_P_ =
β^6n
p^2
(
_0 _1^γ_2^ε,
_0
)_P_
=
β^6n
p^2
(δ_γ, + a^ε - δ_γ, -
b^ε)
.
<Ref> shows the result stated in <Ref>.
In particular,
the invariants in the above theorem manage to distinguish
all P_, by evaluating on all _0 _1^γ_2^ε.
Explicitly, the choices for α_j,l in <Ref> are
α_1,1 = _0 _1^+ _2^- , α_1,2 = _0 _1^+ _2^+ , α_2,1 = -_0 _1^- _2^- , α_2,2 = -_0 _1^- _2^+ .
§ MORE ON CONTINUED FRACTIONS
Consider the (n × n)-matrix M_a with entries
(M_a)_ij =
a_i i = j ,
1 (i < n j = i + 1) (i > 1 j = i - 1) .
This is precisely the linking matrix of the surgery link of 𝔏(p, q) as given in (<ref>), see e.g. <cit.>.
By definition, the signature σ(𝔏(p, q)) of the surgery link is the signature of the linking matrix, i.e. the number of positive minus negative eigenvalues.
Let a_1, …, a_n ∈[Z] be non-zero such that there are coprime positive
integers p_i, q_i with p_iq_i = [a_i; a_i+1, …, a_n]
for i = 1,…,n.
Then we have:
* M_a = ∏_j=1^n [a_j; a_j+1, …, a_n] .
* M_a is positive definite, and so its signature is n.
* p_1 = ∏_j=1^n [a_j; a_j+1, …, a_n]
= ∏_j=1^n [a_n-j+1; a_n-j, …, a_1] .
Part 1:
We proceed by induction on n.
For n = 1 this is obvious, and to see what is happening note that for n = 2 we
have
[a_1; a_2] · a_2
= (a_1 - 1a_2) a_2
= a_1 a_2 - 1
= [ a_1 1; 1 a_2 ] .
Assume now that the statement holds up to n-1 for a given n ≥ 2.
Setting d(a_1, …, a_n) = ∏_j=1^n [a_j; a_j+1, …, a_n], we note that
d(a_i, …, a_n) = [a_i; a_i+1, …, a_n] d(a_i+1, …, a_n).
Using the Laplace expansion for the first row, we obtain
M_a
= a_1 · M_a ∖ (a_1) - 1 ·[ 1 1 0 ⋯ 0; 0 ; ⋮ M_a ∖ (a_1, a_2) ; 0 ]
= a_1 · M_a ∖ (a_1) - M_a ∖ (a_1, a_2) ,
where by M_a ∖ b we mean the matrix associated to the list obtained by
removing entries of b from a.
By hypothesis, we know the values of the two determinants on the right hand side, and so we obtain
M_a
= a_1 · d(a_2, …, a_n) - d(a_3, …, a_n)
= d(a_2, …, a_n) ·( a_1 - 1/[a_2; …, a_n])
= d(a_2, …, a_n) · [a_1; a_2, …, a_n]
,
which equals d(a_1, …, a_n), as claimed.
Part 2: Positive definiteness now follows from the Sylvester criterion.
Namely, consider the lower right submatrices N_d, d=1,…,n of M.
That is, (N_d)_st = (M_a)_s+n-d,t+n-d and
s,t = 1,…,d.
Using the above notation, we see that N_d = M_(a_n-d+1,a_n-d+2,…,a_n).
Thus by part 1 we get
N_d = ∏_j=n-d+1^n [a_j; a_j+1, …, a_n]
= ∏_j=n-d+1^n p_jq_j,
and all factors are positive by assumption.
Part 3:
By part 1, the second
equality is simply the statement that the similar matrices
M_(a_1, …, a_n) and M_(a_n, …, a_1) have the same determinant.
To see the first equality, note first that
∏_j=1^n [a_j; a_j+1, …, a_n]
= ∏_j=1^n p_j/q_j
= p_1/q_n∏_j=1^n-1p_j+1/q_j .
Since clearly q_n = 1, this equals p_1 if q_j = p_j+1 for all 1 ≤ j < n.
Let us show that this holds.
For j < n, we have
p_j/q_j = a_j - q_j+1/p_j+1 = (a_j p_j+1 - q_j+1)/p_j+1 .
Now, for any c, d, n ∈[Z] with d | n, we have gcd(n ±
c, d) ≤gcd(c, d).[
If x divides gcd(d, n ± c), then since d divides n also
x divides n, and hence x divides c.
]
Applying this to our situation, we find gcd(a_j p_j+1 - q_j+1,
p_j+1) ≤gcd(q_j+1, p_j+1) = 1, and so the above is in fact an
equality of reduced fractions.
Thus we even obtain equality of numerators and denominators up to sign.
But the sign vanishes in light of our positivity assumption; in particular, we obtain
q_j = p_j+1.
The proposition above in particular implies (<ref>) for the signature of the surgery link. The same expression (with the stronger assumption that all a_i are at least 2) was obtained e.g. in <cit.>.
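Part 1 of the proposition is also easy to confirm numerically. The following Python sketch is our illustration (reusing the cf helper from above; the name det_tridiag is ours): it evaluates det M_a by the Laplace recursion used in the proof and compares it with the product of continued fractions.

def det_tridiag(a):
    # det M_a via det = a_1 * det(M_{a minus a_1}) - det(M_{a minus a_1, a_2})
    from fractions import Fraction
    d2, d1 = Fraction(1), Fraction(a[-1])
    for ai in reversed(a[:-1]):
        d1, d2 = Fraction(ai) * d1 - d2, d1
    return d1

a = (2, 3, 2)
prod = 1
for j in range(len(a)):
    # [a_j; a_{j+1}, ..., a_n]: innermost entry is a_n, so reverse the tail
    prod *= cf(tuple(reversed(a[j:])))
print(det_tridiag(a), prod)   # both equal 8 here, so p_1 = 8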
DGGPR
[Ab]Abe:2005
T. Abe,
A [Z]_2-orbifold model of the symplectic fermionic vertex operator superalgebra,
10.1007/s00209-006-0048-5Mathematische Zeitschrift 255 (2007) 755–792
math/0503472math.QA.
[BBG]BBGa
A. Beliakova, C. Blanchet, A.M. Gainutdinov,
Modified trace is a symmetrised integral,
10.1007/s00029-021-00626-5Select. Math. New Ser. 27 (2021),
1801.00321math.QA.
[BC1]BC1
D. Bulacu, S. Caenepeel,
Integrals for (dual) quasi-Hopf algebras. Applications,
10.1016/S0021-8693(03)00175-3J. Algebra 266 (2003), 552–583,
math/0110063math.QA.
[BC2]BC2
D. Bulacu, S. Caenepeel,
On integrals and cointegrals for quasi-Hopf algebras,
10.1016/j.jalgebra.2011.11.006J. Algebra 351 (2012), 390–425,
1103.2263math.QA.
[BCGP]Blanchet:2014ova
C. Blanchet, F. Costantino, N. Geer, B. Patureau-Mirand,
Non semi-simple TQFTs, Reidemeister torsion and Kashaev's invariants,
10.1016/j.aim.2016.06.003Adv. Math. 301 (2016) 1–78,
1404.7289math.GT.
[BCPO]Bulacu_book
D. Bulacu, S. Caenepeel, F. Panaite, F. v.Ostaeyen,
Quasi-Hopf algebras: a categorical approach,
10.1017/9781108582780Encyclopedia Math. Appl. (2019).
[BGR1]BGR1
J. Berger, A.M. Gainutdinov, I. Runkel,
Modified traces for quasi-Hopf algebras,
10.1016/j.jalgebra.2019.12.006J. Algebra 548 (2020) 96–119,
1812.10445math.QA.
[BGR2]BGR2
Monadic cointegrals and applications to quasi-Hopf algebras,
10.1016/j.jpaa.2021.106678J. Pure Appl. Algebra 225 (2021),
2003.13307math.QA
[BK]BK
B. Bakalov, A. Kirillov Jr.,
Lectures on tensor categories and modular functors
10.1090/ulect/021Univ. Lect. Ser. 21 (2001).
[Br]Br-Css
A. Bruguières,
Tresses et structure entière sur la catégorie des représentations de sl_N quantique,
10.1080/00927870008826941Commun. in Algebra 28 (2000) 1989–2028.
[BT]BT
D. Bulacu,
B. Torrecillas,
Factorizable quasi-Hopf algebras – applications,
10.1016/j.jpaa.2004.04.010J. Pure Appl. Alg. 194 (2004) 39–84
math/0312076[math.QA/0312076].
[BV1]BV-hopfmonads
A. Bruguières, A. Virelizier,
Hopf monads,
10.1016/j.aim.2007.04.011Adv. in Math. 215 (2007), 679–733,
math/0604180math.QA.
[BV2]BV-centers
On the center of fusion categories,
10.2140/PJM.2013.264.1Pacific J. Math. 264 (2013),
1203.4180math.QA
[CDGS]CDGS
W. Cheong, A. Doser, M. Gray, S.F. Sawin,
Relationship of the Hennings and Chern-Simons Invariants For Higher Rank Quantum Groups, 1701.01423math.GT
[CG]Creutzig:2016fms
T. Creutzig, T. Gannon,
Logarithmic conformal field theory, log-modular tensor categories and modular forms,
10.1088/1751-8121/aa8538J. Phys. A: Math. Theor. 50 (2017) 404004,
1605.04630math.QA.
[CGP1]CGP
F. Costantino, N. Geer, B. Patureau-Mirand,
Quantum invariants of 3-manifolds via link surgery presentations and non-semi-simple categories,
10.1112/jtopol/jtu006Journal of Topology 7 (2014) 1005–1053,
1202.3553math.GT.
[CGP2]CGP2
F. Costantino, N. Geer, B. Patureau-Mirand,
Relations between Witten-reshetikhin-turaev and non semi-simple sl(2) 3-manifold invariants,
agt.2015.15.1363Algebr. Geom. Topol. 15 (2015) 1363–1386,
1310.2735math.GT.
[CGR]Creutzig:2017khq
T. Creutzig, A. M. Gainutdinov and I. Runkel,
A quasi-Hopf algebra for the triplet vertex operator algebra,
10.1142/S021919971950024XCommun. Contemp. Math. 22 (2020) 1950024,
1712.07260math.QA.
[CKS]CKS
Q. Chen, S. Kuppum, P. Srinivasan,
On the relation between the WRT invariant and the Hennings invariant,
10.1017/S030500410800193XMath. Proc. of the Cambridge Phil. Soc. 146 (2009) 151–163, 0709.2318math.QA
[CLR]Creutzig:2021cpu
T. Creutzig, S. Lentner and M. Rupert,
Characterizing braided tensor categories associated to logarithmic vertex operator algebras,
2104.13262math.QA.
[CYZ]CYZ12
Q. Chen, C-C. Yu, Y. Zhang, Three-manifold invariants associated with restricted quantum groups, 10.1007/s00209-011-0969-5Math. Z. 272 (2012) 987–999.
[DGGPR1]DGGPR
M. De Renzi, A.M. Gainutdinov, N. Geer, B. Patureau-Mirand, I. Runkel,
3-dimensional TQFTs from non-semisimple modular categories,
10.1007/s00029-021-00737-zSelecta Math. 28 (2022) 42,
1912.02063math.GT.
[DGGPR2]DGGPR2
Mapping Class Group Representations From Non-Semisimple TQFTs,
10.1142/S0219199721500917Commun. Contemp. Math. 25 (2023) 2150091,
2010.14852math.GT.
[DGP]DGP-renormalized-Hennings
M. De Renzi, N. Geer, B. Patureau-Mirand,
Renormalized Hennings invariants and 2+1-TQFTs,
10.1007/s00220-018-3187-8Commun. Math. Phys. 362 (2018), 855–907,
1707.08044math.GT.
[Dr]Drinfeld
V. Drinfeld,
Quasi-Hopf algebras,
Leningrad Math. J. 1 (1990), 1419–1457.
[DR]Davydov:2012xg
A. Davydov, I. Runkel,
[Z]/2[Z]-extensions of Hopf algebra module categories by their base categories,
10.1016/j.aim.2013.06.024Adv. Math. 247 (2013) 192–265,
1207.3611math.QA.
[EGNO]EGNO
P. Etingof, S. Gelaki, D. Nikshych, V. Ostrik,
Tensor categories,
10.1090/surv/205/04Math. Surveys Monogr. 205 (2015).
[EO]EO
P. Etingof, V. Ostrik,
On Semisimplification of Tensor Categories,
10.1007/978-3-030-82007-7_1in: V. Baranovsky et al. (eds) Representation Theory and Algebraic Geometry, Trends in Mathematics (2022), Birkhäuser (2022) 3–35,
1801.04409math.RT.
[Et]Etingof:2002
P. Etingof,
On Vafa’s theorem for tensor categories,
10.4310/MRL.2002.v9.n5.a8Math. Res. Let. 9 (2002) 651–657,
math/0207007math.QA.
[FGR1]FGR1
V. Farsad, A.M. Gainutdinov, I. Runkel,
SL(2,ℤ)-action for ribbon quasi-Hopf algebras,
10.1016/j.jalgebra.2018.12.012J. Algebra 522 (2019), 243–308,
1702.01086math.QA.
[FGR2]FGR2
The symplectic fermion ribbon quasi-Hopf algebra and the SL(2,[Z])-action
on its centre, 10.1016/j.aim.2022.108247Adv. Math. 400 (2022) 108247,
1706.08164math.QA.
[FGST]FGST
B.L. Feigin, A.M. Gainutdinov, A.M. Semikhatov, I.Y. Tipunin,
Modular group representations and fusion in logarithmic conformal field theories and in the quantum group center,
10.1007/s00220-006-1551-6Commun. Math. Phys. 265 (2006) 47–93,
hep-th/0504093hep-th.
[FL]Flandoli:2017xwu
I. Flandoli, S. Lentner,
Logarithmic conformal field theories of type B_n,ℓ=4 and symplectic fermions,
10.1063/1.5010904J. Math. Phys. 59 (2018) 071701,
1706.07994math.RT.
[FOG]FOG
A. Fontalvo Orozco, A.M. Gainutdinov,
Module traces and Hopf group-coalgebras,
1809.01122math.QA.
[FS]Fuchs:2010mw
J. Fuchs, C. Schweigert,
Hopf algebras and finite tensor categories in conformal field theory,
inmabb.criba.edu.ar/revuma/revuma.php?p=toc/vol51Rev. Union Mat. Argentina 51 (2010) 43–90
1004.3405hep-th].
[GK]Gaberdiel:1996np
M.R. Gaberdiel, H.G. Kausch,
A rational logarithmic conformal field theory,
10.1016/0370-2693(96)00949-5Phys. Lett. B 386 (1996) 131–137
hep-th/9606050hep-th.
[GKP1]GKP_generalized-trace-and-mod-dim-rib-cat
N. Geer, J. Kujawa, B. Patureau-Mirand,
Generalized trace and modified dimension functions on ribbon categories,
10.1007/s00029-010-0046-7B. Sel. Math. New Ser. 17 (2011), 453–404,
1001.0985math.RT.
[GKP2]GKP_ambi-objects-and-trace-functions-for-nonssi-cats
Ambidextrous objects and trace functions for nonsemisimple categories,
10.1090/S0002-9939-2013-11563-7Proc. Amer. Math. Soc. 141 (2013),
2963–2978,
1106.4477math.RT.
[GKP3]GKP_m-traces
M-traces in (non-unimodular) pivotal categories,
10.1007/s10468-021-10044-yAlgebr. Represent. Theory (2021),
1809.00499math.RT.
[GN]GN
T. Gannon, C. Negron
Quantum SL(2) and logarithmic vertex operator algebras at (p,1)-central charge,
2104.12821math.QA.
[GPT]GPT
N. Geer, B. Patureau-Mirand, V.G. Turaev,
Modified quantum dimensions and re-normalized link invariants,
10.1112/S0010437X08003795Compositio Math. 145 (2009), 196–212,
0711.4229math.QA.
[GPV]GPV_traces-on-ideals-in-pivotal-categories
N. Geer, B. Patureau-Mirand, A. Virelizier
Traces on ideals in pivotal categories,
10.4171/QT/36Quantum Topol. 4 (2013), 91–124,
1103.1660math.QA.
[GR1]Gainutdinov:2015lja
A.M. Gainutdinov, I. Runkel,
Symplectic fermions and a quasi-Hopf algebra structure on U_i sℓ(2),
10.1016/j.jalgebra.2016.11.026J. Algebra 476 (2017) 415–458,
1503.07695math.QA.
[GR2]GR-nonssi-Verlinde
The non-semisimple Verlinde formula and pseudo-trace functions,
10.1016/j.jpaa.2018.04.014J. Pure Appl. Algebra 223 (2019), 660–690,
1605.04448math.QA.
[GR3]GR-proj
Projective objects and the modified trace in factorisable finite tensor
categories,
10.1112/S0010437X20007034Compositio Math. 156 (2020) 770–821,
1703.00150math.QA.
[He]H96
M. Hennings,
Invariants of Links and 3-Manifolds Obtained from Hopf Algebras,
10.1112/jlms/54.3.594J. London Math. Soc. (2) 54 (1996), No. 3, 594–624.
[HN]HN-integrals
F. Hausser, F. Nill,
Integral theory for quasi-Hopf algebras,
math/9904164math.QA.
[LW]Laugwitz-Walton
R. Laugwitz, C. Walton,
Constructing non-semisimple modular categories with relative monoidal centers,
10.1093/imrn/rnab097Int. Math. Res. Not. IMRN (2021),
2010.11872math.QA.
[Lyu]Lyu-inv-MCG
V.V. Lyubashenko,
Invariants of 3-manifolds and projective representations of mapping class groups
via quantum groups at roots of unity
10.1007/BF02101805Commun. Math. Phys. 172 (1995), 467–516,
hep-th/9405167hep-th.
[Ka]Kausch:1995py
H.G. Kausch,
Curiosities at c = -2,
hep-th/9510149hep-th/9510149.
[Ke]Kerler-connectivity
T. Kerler,
On the connectivity of cobordisms and half-projective TQFTs,
10.1007/s002200050487Commun. Math. Phys. 198 (1998) 535–590,
q-alg/9603017math.QA.
[KL]KerlerLyubashenko
T. Kerler, V. V. Lyubashenko,
Non-semisimple topological quantum field theories for 3-manifolds with corners,
10.1007/b82618Lecture Notes in Math. 1765 (2001).
[ML]catWorkMath
S. Mac Lane,
Categories for the working mathematician,
10.1007/978-1-4757-4721-8Grad. Texts in Math. 5 (1998).
[MR]McRae
R. McRae,
Deligne tensor products of categories of modules for vertex operator algebras,
2304.14023math.QA.
[PS]Prasolov-Sossinsky
V.V. Prasolov, A.B. Sossinsky,
Knots, links, braids and 3-manifolds,
10.1090/mmono/154Transl. Math. Monogr. 154 (1997).
[RT]RT90
N. Reshetikhin, V. Turaev,
Ribbon Graphs and Their Invariants Derived From Quantum Groups,
10.1007/BF02096491Commun. Math. Phys. 127 (1990) 1–26.
[Ru]Runkel:2012cf
I. Runkel,
A braided monoidal category for free super-bosons,
10.1063/1.4868467J. Math. Phys. 55 041702 (2014)
1209.5554math.QA.
[Sch]Schauenburg_freeness
P. Schauenburg,
A quasi-Hopf algebra freeness theorem,
Proc. Amer. Math. Soc. 132 (2004) 965–972,
math/0204141math.QA.
[Sh1]Shimizu:2014
K. Shimizu,
On unimodular finite tensor categories,
10.1093/imrn/rnv394Int. Math. Res. Notices 2017 (2017) 277–322,
1402.3482math.QA.
[Sh2]Shimizu:2015
K. Shimizu,
The monoidal center and the character algebra,
10.1016/j.jpaa.2016.12.037J. Pure and Appl. Algebra 221 (2017) 2338–2371,
1504.01178math.QA.
[Sh3]Shimizu:2016
K. Shimizu,
Non-degeneracy conditions for braided finite tensor categories,
10.1016/j.aim.2019.106778Adv. Math. 355 (2019) 106778
1602.06534math.QA.
[Sh4]Sh-integrals
K. Shimizu,
Integrals for finite tensor categories,
10.1007/s10468-018-9777-5Algebr. Represent. Theory 22 (2019) 459–493,
1702.02425math.CT.
[So]Sommerhaeuser
Y. Sommerhäuser,
On the notion of a ribbon quasi-Hopf algebra,
Rev. Unión Mat. Argent. 51 (2010), 177–192,
0910.1638math.RA.
[Tu]Turaev-book
V.G. Turaev,
Quantum Invariants of Knots and 3-Manifolds,
10.1515/9783110435221de Gruyter Stud. Math. 18 (2016).
[Vi]Virelizier-Kirby
A. Virelizier,
Kirby elements and quantum invariants,
10.1112/S0024611506015905Proc. Lond. Math. Soc. 93 (2006) 474–514,
math/0312337v2math.GT
|
http://arxiv.org/abs/2307.04444v1 | 20230710095313 | Weak gravitational lensing by an ESTGB black hole in the presence of a plasma | [ "Qian Li", "Yu Zhang", "Zhi-Wen Lin", "Qi-Quan Li", "Qi Sun" ] | gr-qc | [ "gr-qc" ] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] (Corresponding author)
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
This paper is devoted to studying the weak-field gravitational lensing properties of a 4D ESTGB black hole surrounded by a plasma medium. The effects of the magnetic charge and of three plasma distribution models on the deflection of light around a 4D ESTGB black hole are investigated in detail. We find that a uniform plasma leads to a larger deflection of light rays in comparison with the singular isothermal sphere (SIS) and non-singular isothermal sphere (NSIS) models. Moreover, the deflection angle increases slightly as the absolute value of the magnetic charge decreases. Finally, we analyze the total magnification of the image due to weak gravitational lensing around the black hole. The result shows that the presence of a uniform plasma medium remarkably enhances the total magnification, whereas a non-uniform plasma reduces it.
Weak gravitational lensing by an ESTGB black hole in the presence of a plasma
Qi Sun
==============================================================================
Keywords: Black hole, Weak graviatational lensing, Plasma
PACS numbers: 04.70.Dy, 04.50.Kd, 03.65.Xp
§ INTRODUCTION
As one of the predictions of Einstein's general relativity, black holes are among the most mysterious objects in the present universe. Because light rays cannot escape the event horizon, which is a one-way causal boundary, black holes are not visible objects, and their existence can only be proven indirectly. However, with the development of astronomical instrumentation, the EHT collaboration <cit.> published the image of the shadow of a supermassive black hole in 2019. This may be regarded as further powerful evidence for the existence of black holes, after LIGO-Virgo detected the gravitational-wave signals generated by the merger of binary black holes <cit.>. In addition to standard general relativity, many modified gravity theories have been proposed, since fundamental general relativity may not hold in high- or low-curvature regimes; one such theory is the extended scalar-tensor-Gauss-Bonnet (ESTGB) theory <cit.>. It is obtained through the coupling of the Gauss-Bonnet invariant with a scalar field, constructed so as to avoid the Ostrogradski instability, and is a special and interesting extension. This modified theory is a natural modification of general relativity and an extension of the standard scalar-tensor theory. Doneva and Yazadjiev indicated that below a certain critical mass, the Schwarzschild spacetime becomes unstable in ESTGB gravity <cit.>. The ESTGB theory can also explain the present stage of cosmic acceleration in cosmology <cit.>. Shortly thereafter, Cañate and Perez Bergliaffa <cit.> proposed the first exact magnetic black hole solution based on the extended scalar-tensor-Gauss-Bonnet (ESTGB) theory with a special type of nonlinear electrodynamics. The ESTGB black hole solution is characterized by its Arnowitt-Deser-Misner (ADM) mass and magnetic charge. When m>0 and q<0, the black hole solution is similar to the Reissner-Nordström solution. The grey-body factor and absorption cross section of the massless Dirac field for this black hole were studied in Ref. <cit.>. Ma et al. <cit.> investigated the quasinormal modes and absorption cross section of the massless scalar field for this black hole. Besides, the thermodynamic properties of this black hole under the generalized uncertainty principle (GUP) have been studied in Ref. <cit.>.
Because the spacetime around compact massive objects is curved, one of the remarkable consequences of general relativity is the deflection of light and the associated lens effect. This phenomenon is called gravitational lensing. Light deflection is also one of the three well-known classical tests of general relativity. Therefore, gravitational lensing is used as a special tool to test whether theories of gravity are correct and to probe the properties of matter surrounding black holes. Besides, one can obtain characteristic information about the gravitating object from gravitational lensing. It is particularly important that different black hole lenses can be distinguished by their gravitational lensing signatures <cit.>. Gravitational lensing therefore remains a very active research area in both the weak- and strong-field limits. The weak deflection angle of the Schwarzschild spacetime in vacuum can be expressed in the form α̂=2R_s/b, where R_s =2M and b is the impact parameter. Virbhadra et al. studied strong gravitational lensing in the context of the Schwarzschild black hole <cit.>. The variation of the tangential, radial, and total magnification of the images with respect to the angular source position was investigated by modelling the supermassive black hole M87* as a Schwarzschild lens <cit.>. Sereno <cit.> obtained the time delay and deflection angle expressions for Reissner-Nordström black holes under the weak-field approximation. In addition, many attempts have been made to compute the weak deflection angle in different modified gravity theories by using different methods <cit.>. Generally, the deflection angle or the relevant optical scalars can be expressed in terms of derivatives of the different components of the black hole metric. In the strong gravity field, the study of gravitational lensing is likewise a trending topic, and a number of articles have examined gravitational lensing in the strong field <cit.>.
On the other hand, it is believed that compact astrophysical objects are immersed in a complicated environment, such as plasma. In this paper, we focus on the plasma environment. Plasma is a dispersive medium whose refractive index depends on the frequency of the photons. The plasma around compact astrophysical objects affects the trajectories of light rays since it interacts with electromagnetic waves. Synge <cit.> first proposed a self-consistent approach to the propagation of light rays in a gravitational field in the context of a plasma medium. Forty years later, Perlick <cit.> proposed a different type of method to obtain the integral expression of the deflection angle when plasma surrounds the Schwarzschild and Kerr black holes. Later, Bisnovatyi-Kogan and Tsupko <cit.> found that the deflection angle depends on the photon frequency in a uniform dispersive medium, a phenomenon qualitatively different from the vacuum case. The same authors <cit.> also considered the case in which the gravitating object is surrounded by inhomogeneous plasma and obtained the expressions for the deflection angle for different plasma models. Schee et al. <cit.> studied the gravitational lensing of a regular black hole immersed in plasma. The weak deflection angle of the wormhole solution described by the exponential metric was obtained in Ref. <cit.>. The influences of a uniform plasma on the shadow and weak deflection angle for a rotating and regular black hole in a non-minimally coupled Einstein-Yang-Mills (EYM) theory have been studied <cit.>. Zhang et al. <cit.> studied the influence of plasma with a power-law distribution and a logarithmic normal distribution on the shadow of the Kerr black hole. In addition, Atamurotov and his coworkers have been devoted to studying the weak gravitational lensing effect in plasma for various kinds of spacetimes, such as the Lorentzian wormhole spacetime <cit.>, the Schwarzschild-MOG black hole <cit.>, 4D Einstein-Gauss-Bonnet gravity <cit.>, and the rotating Einstein-Born-Infeld black hole <cit.>.
In this study, we focus on the exact expression of the deflection angle for the (3+1)-dimensional ESTGB black hole, assuming that the black hole is immersed in a plasma medium. As an application, we also study the magnification of the image in the weak field. The structure of this paper is as follows. Section <ref> presents a brief review of the procedure for obtaining the deflection angle under the weak-field approximation and calculates the deflection angle for the 4-dimensional ESTGB black hole surrounded by three different plasma density distributions. In Section <ref>, as an application, we study the magnification of the image for the three different plasma density distributions, i.e., the uniform plasma, SIS, and NSIS media. Finally, we give our concluding remarks in Section <ref>.
Throughout, our choice of spacetime signature is {-,+,+,+}, and we use natural units c = G = ħ = 1. Latin indices run from 1 to 3, while Greek indices run from 0 to 3.
§ WEAK-FIELD LENSING IN THE PRESENCE OF PLASMA
In this section, we study the optical properties, namely gravitational lensing, of a 4D ESTGB black hole surrounded by a plasma medium under the weak-field approximation.
The 4D ESTGB gravity with an extra matter field, namely a model of non-linear electrodynamics (NLED), has the following action <cit.>
S = ∫ d^4x √(-g){1/4π(1/4(R - 1/2∂_μϕ∂^μϕ + f(ϕ) R_GB^2 -2 U (ϕ))-ℒ_ matter) }.
Here the gravitational part consists of the Einstein-Hilbert term given by the Ricci scalar R, the kinetic term of the scalar field 1/2∂_μϕ∂^μϕ, the non-minimal coupling f(ϕ) R_GB^2 between the Gauss-Bonnet invariant R_GB^2 and the scalar field, and the scalar field potential U(ϕ). The Lagrangian density ℒ_ matter denotes any additional matter field in the action. Concretely, the Gauss-Bonnet invariant takes the form R_GB^2=R_αβμνR^αβμν - 4 R_αβR^αβ+R^2. The function f(ϕ) and the scalar field potential U(ϕ) can be expressed as
f=-ℓ^2σ/32{√(2σ)tan^-1( √(2)ϕ/√(σ)) +1/2ϕln[( 2β/σϕ^2+β)^2] - 2/ϕ},
𝒰(ϕ)=2^9/2/105ℓ^2σ^7/2[π/2-tan^-1(√(2)ϕ/√(σ))]ϕ^5-1/4ℓ^2(3/10σ+5ϕ^2/7+7σϕ^4/24) ln[( 2β/σϕ^2+β)^2]
-ϕ/3ℓ^2(16/35σ^3-8ϕ^2/105σ^2+31ϕ^4/70σ+11ϕ^6/28).
The NLED Lagrangian term that reduces to Maxwell's electrodynamics in the weak field regime has the following form
ℒ_𝒩ℒℰ𝒟=ℱ/8-s^1/2( 1 +37/210σ_∗+2/525σ_∗)ℱ^5/4 - σ_∗ s ℱ^3/2/16
+𝒪(ℱ^7/4),
with the electromagnetic invariant ℱ=q^2/r^4. And the above parameters have the relations σ=σ_*, l=s=q, β=β_* and ϕ(r)=q/r.
The metric describing the 4D ESTGB black hole can be written as
ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2 sin^2θ dϕ^2,
with
f(r)=1-R_s/r-q^3/r^3,
where R_s=2M, M is ADM mass and q is magnetic charge.
Since the weak energy condition (WEC) should be satisfied by both the corresponding effective energy-momentum tensor and that of the nonlinear electrodynamics, the value q<0 is permitted. Without loss of generality, we consider the non-extremal case. This means that the value of the magnetic charge is restricted to the range -2^5/3/3 < q < 0 when M is set to 1.
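As an illustration (ours, not part of the original derivation), the horizon radii follow from the cubic r^3 - 2Mr^2 - q^3 = 0 obtained from f(r)=0, which can be solved numerically; the following Python sketch returns the positive real roots.

import numpy as np

def horizons(M=1.0, q=-0.5):
    # f(r) = 1 - 2M/r - q^3/r^3 = 0  <=>  r^3 - 2M r^2 - q^3 = 0
    roots = np.roots([1.0, -2.0 * M, 0.0, -q**3])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-6 and r.real > 0)

print(horizons(q=-0.5))            # inner and outer horizon radii
print(horizons(q=-2**(5/3) / 3))   # extremal case: degenerate horizon at r = 4M/3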
We know that photons will follow the null geodesics of the effective spacetime metric in the presence of NLED instead of the original spacetime metric. However, we need to state that the metric describing the 4D ESTGB-NLED spacetime is obtained in the weak field where the NLED reduces to Maxwell's theory (see Ref. <cit.> for more detail). Therefore, photons still follow the null geodesics of the original spacetime metric in the weak field.
Now, a general approach <cit.> is introduced to derive the deflection angle in the uniform or non-uniform plasma. We have the metric coefficients under the weak field approximation, which are given by
g_αβ=η_αβ + h_αβ,
where η_αβ is the Minkowski metric, i.e., (-1,1,1,1), h_αβ is perturbation metric. Note that
h_αβ≪ 1, h_αβ→ 0 where x^α→∞,
g^αβ=η^αβ-h^αβ, h^αβ=h_αβ.
The refractive index of the static inhomogeneous plasma that relies on the photon frequency ω(x^i) and space location x^α has the following form
n^2=1-ω^2_e/ω^2(x^i), ω^2_e=4π e^2 N(r)/m=K_e N(r),
where ω_e is the electron plasma frequency, N(r) is the electron density in the inhomogeneous plasma, and e and m denote the charge and mass of the electron, respectively. It is worth noting that electromagnetic waves can propagate in such a plasma only when ω_e< ω. That is to say, the plasma medium acts as a refractive medium when ω_e< ω, where ω(∞)≡ω.
Considering the effect of the plasma on the deflection angle in the weak field limit, we get the expression of deflection angle in the following form
α̂_k=1/2∫_-∞^∞(h_33,k+ h_00,k/1-ω^2_e/ω^2-K_eN_,k/ω^2-ω_e^2)dz,
for k=1,2. The deflection angle with the impact parameter b found in Ref.<cit.> for more detail, can be written as
α̂_k=1/2∫_-∞^∞b/r×(dh_33/dr+1/1-ω^2_e/ω^2dh_00/dr-K_e/ω^2-ω_e^2dN/dr)dz.
The location of the photon is described by b and z in the axially symmetric case, and the magnitude of the radius-vector is written as r=√(b^2+z^2) <cit.>. It is worth noting that a negative value of α̂_b indicates bending of the photon trajectory towards the compact object, and a positive value indicates the opposite.
In the weak gravitational field regime, we can rewrite the metric around the 4D ESTGB black hole as
ds^2=ds_0^2+(R_s/r+q^3/r^3)(dt^2+dr^2),
where ds^2_0 is the flat part of metric, and it has the following form
ds^2_0=-dt^2+dr^2+r^2(dθ^2+sin^2θ dϕ^2).
The components h_αβ can be expressed in the Cartesian frame as
h_00=R_s/r+q^3/r^3, h_ik=h_00n_in_k,
h_33=h_00cos^2χ,
where cosχ=z/√(b^2+z^2) and r=√(b^2+z^2).
By substituting Eq.(<ref>) into Eq.(<ref>), we have the concrete form of the deflection angle in the following expression <cit.>
α̂_b=∫_-∞^∞b/2r(∂_r((R_s/r+q^3/r^3)cos^2χ)
+∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2-K_e/ω^2-ω^2_e∂_rN)dz.
In what follows, we will calculate the integrals about the deflection angle considering the three specific plasma distributions, viz., uniform plasma, singular isothermal sphere (SIS), and non-singular isothermal sphere (NSIS) medium.
§.§ Uniform plasma
In this subsection, we calculate the deflection angle using Eq.(<ref>) for a photon propagating in the 4D ESTGB spacetime surrounded by uniform plasma; it can be expressed as
α̂_uni=α̂_uni1+α̂_uni2+α̂_uni3.
The first term is the influence of the gravitational field of the ESTGB black hole
α̂_uni1=∫_-∞^∞b/2r∂_r(R_s/r^3+q^3/r^5)z^2dz = -R_s/b-2q^3/3b^3.
Note that when q=0 the spacetime reduces to the Schwarzschild spacetime, and we obtain α̂_uni1=-R_s/b.
α̂_uni2=∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2dz
=-(R_s/b+2q^3/b^3)1/1-ω^2_e/ω^2.
Because the last term encodes the influence of the inhomogeneity of the plasma, it vanishes here, since ∂_rN=0 for uniform plasma.
In the relevant literature on weak gravitational lensing, the deflection angle is usually defined to be positive <cit.>. Thus, we have the following expression for the uniform plasma
α̂_uni=R_s/b+2q^3/3b^3+(R_s/b+2q^3/b^3)1/1-ω^2_0/ω^2,
where ω_0=ω_e(∞).
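The uniform-plasma deflection angle is elementary to evaluate; the following Python sketch is our illustration (the function name and the argument w2, standing for ω_0^2/ω^2, are ours), using the parameter values adopted in the figures below.

def alpha_uniform(b, q=-0.5, Rs=2.0, w2=0.5):
    # alpha_uni = Rs/b + 2q^3/(3b^3) + (Rs/b + 2q^3/b^3)/(1 - w2)
    return Rs / b + 2 * q**3 / (3 * b**3) + (Rs / b + 2 * q**3 / b**3) / (1.0 - w2)

print(alpha_uniform(3.0))          # deflection at b = 3
print(alpha_uniform(3.0, w2=0.0))  # vacuum limit: 2Rs/b + 8q^3/(3b^3)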
In Fig.<ref>, we plot the deflection angle α̂_b with respect to the impact parameter b for different values of the magnetic charge q at ω_0^2/ω^2=0.5, and for different values of the plasma parameter at q=-0.5. The deflection angle diminishes as the impact parameter b increases. As can be seen from Fig.<ref>, when b≫ R_s, we can neglect the effect of the magnetic charge on the deflection angle. In addition, it is easy to see from Eq.(<ref>) that the deflection angle becomes very small, or even vanishes, when the impact parameter b is large. Fig.<ref> demonstrates the dependence of the deflection angle on the uniform plasma parameter and on the magnetic charge at b=3. We can see in the left figure that the deflection angle increases rapidly as ω_0^2/ω^2 approaches 1. As the absolute value of the magnetic charge decreases, the deflection angle slightly increases.
§.§ Singular isothermal sphere
In this subsection, we consider the case of an SIS around the 4D ESTGB black hole. The SIS was first introduced in Refs.<cit.> and <cit.> to study the lens systems of galaxies and clusters of galaxies. The density distribution of the SIS is written as
ρ(r)=σ_v^2/2 π r^2,
where σ_v is the one-dimensional velocity dispersion. We can obtain the plasma concentration by making use of Eq.(<ref>) and the following relation
N(r)=ρ(r)/κ m_p,
in which κ is a dimensionless coefficient related to the contribution of dark matter, and m_p is the mass of the proton. The plasma frequency has the expression
ω^2_e=K_eN(r)=K_eσ^2_v/2πκ m_pr^-2.
Using Eq.(<ref>), we can calculate the deflection angle for an SIS. Due to the fact that the first term is the effect of the gravitational field, it has the same expression as Eq.(<ref>)
α̂_sis1=α̂_uni1.
For the other terms, we calculate the integrals and obtain the following results
α̂_sis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz
=-((R_s/b+2q^3/b^3)+(2R_s/3π b+8q^3/5π b^3)ω^2_cR_s^2/ω^2 b^2),
α̂_sis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=ω_c^2R_s^2/2ω^2b^2,
where ω_c^2 is defined as <cit.>
ω_c^2=K_eσ^2/2κ m_p R^2_s.
We obtain the deflection angle about the SIS, which can be written as
α̂_sis=((2R_s/b+8q^3/3b^3)+(-1/2+2R_s/3π b+8q^3/5π b^3)ω_c^2R_s^2/ω^2b^2).
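Eq. (<ref>) for the SIS case translates into the following Python sketch (our illustration; wc2 stands for ω_c^2/ω^2):

from math import pi

def alpha_sis(b, q=-0.5, Rs=2.0, wc2=0.5):
    # alpha_sis = 2Rs/b + 8q^3/(3b^3)
    #             + (-1/2 + 2Rs/(3 pi b) + 8q^3/(5 pi b^3)) * wc2 * Rs^2 / b^2
    grav = 2 * Rs / b + 8 * q**3 / (3 * b**3)
    plasma = -0.5 + 2 * Rs / (3 * pi * b) + 8 * q**3 / (5 * pi * b**3)
    return grav + plasma * wc2 * Rs**2 / b**2

print(alpha_sis(3.0))   # deflection at b = 3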
To simulate the effect of the SIS on the trajectory of light, we show in Fig.<ref> the deflection angle α̂ versus the impact parameter b for different values of the magnetic charge when ω_c^2/ω^2 is set to 0.5, and for different values of the SIS parameter at fixed q=-0.5. It is not hard to see that the deflection angle decreases as the impact parameter increases. Fig.<ref> visualizes the dependence of the deflection angle on the SIS parameter and on the magnetic charge, respectively. It is straightforward to show that the deflection angle diminishes when ω_c^2/ω^2 increases (left figure), whereas the deflection angle increases when the absolute value of the magnetic charge decreases (right figure). This means that the existence of an SIS around the black hole reduces the deflection angle in comparison to the vacuum or uniform-plasma cases.
§.§ Non-singular isothermal sphere
In this subsection, we aim to give the exact expression of the deflection angle of the ESTGB black hole in the presence of the NSIS. The plasma distribution can be expressed as <cit.>
ρ(r)=σ^2_v/2π(r^2+r_c^2),
where r_c is the core radius, and the concentration becomes
N(r)=σ^2/2πκ m_p(r^2+r_c^2).
The corresponding plasma frequency has the following form
ω_e^2=K_eσ^2_v/2πκ m_p(r^2+r_c^2).
Similarly to the last subsection, the first term remains unchanged, and the other terms of Eq.(<ref>) take the expressions
α̂_nsis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz
=-(R_s/b+2q^3/b^3)-(R_s/(π b r_c^2)+b R_s arctan(r_c/√(b^2+r_c^2))/(π r^3_c√(b^2+r_c^2)))
×ω_c^2R^2_s/ω^2-(-1/(b^2 r_c^4) +2/(3 b^4 r_c^2) +arctan(r_c/√(b^2+r_c^2))/(r_c^5√(b^2+r_c^2)))×3 q^3 b R_s^2ω_c^2/(πω^2),
α̂_nsis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=b/2(b^2+r_c^2)^3/2ω_c^2R^2_s/ω^2,
where
ω_c^2=K_eσ^2_v/2κ m_p R^2_s.
One can obtain the following form of the deflection angle by summing all the integrals
α̂_nsis=(2R_s/b+8q^3/(3b^3))+(R_s/(π b r_c^2)-b/(2(b^2+r_c^2)^3/2)+b R_s arctan(r_c/√(b^2+r_c^2))/(π r_c^3√(b^2+r_c^2)))ω_c^2R^2_s/ω^2
+(-1/(b^2 r_c^4) +2/(3 b^4 r_c^2)+arctan(r_c/√(b^2+r_c^2))/(r_c^5√(b^2+r_c^2)))×3 q^3 b R_s^2ω_c^2/(πω^2).
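The NSIS expression can be coded in the same way; the following Python sketch is our illustration and assumes the grouping of fractions written in Eq. (<ref>) above, with A = arctan(r_c/√(b^2+r_c^2)).

from math import pi, atan, sqrt

def alpha_nsis(b, q=-0.5, Rs=2.0, wc2=0.5, rc=3.0):
    c = sqrt(b**2 + rc**2)
    A = atan(rc / c)
    grav = 2 * Rs / b + 8 * q**3 / (3 * b**3)
    term_Rs = (Rs / (pi * b * rc**2) - b / (2 * c**3)
               + b * Rs * A / (pi * rc**3 * c)) * wc2 * Rs**2
    term_q = (-1 / (b**2 * rc**4) + 2 / (3 * b**4 * rc**2)
              + A / (rc**5 * c)) * 3 * q**3 * b * Rs**2 * wc2 / pi
    return grav + term_Rs + term_q

print(alpha_nsis(3.0))   # deflection at b = 3 with r_c = 3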
The variation of the deflection angle α̂_b with the impact parameter b is shown in Fig.<ref>, where the ESTGB-NLED black hole is surrounded by the NSIS medium. From Fig.<ref>, we can conclude that an increase of the impact parameter leads to a decrease of the deflection angle. We can also see from the right panel that the difference in the deflection angle becomes more and more obvious with an increase in the impact parameter for the different values of the NSIS medium. In Fig.<ref>, we plot the dependence of the deflection angle on the NSIS parameter for different magnetic charges (left panel) and on the magnetic charge for different NSIS parameters (right panel). In these two cases we fix b=3 and r_c=3. Comparing Figs.<ref> and <ref> shows that the effect of the NSIS on the deflection angle is similar to that of the SIS.
In the above three subsections, we studied in detail the effect of the different distributions of the plasma and of the magnetic charge on the deflection angle. To compare the effects of the different plasma models directly, i.e., the uniform plasma, SIS, and NSIS media, we study the dependence of the deflection angle on the different parameters. The comparison results are shown in Fig.<ref>, where we fix the corresponding parameters, viz., ω_0^2/ω^2=ω_c^2/ω^2=0.5, impact parameter b=3, and core radius r_c=3. The uniform plasma medium exhibits stronger refraction than the SIS and NSIS models, as shown in Fig.<ref>. It is easy to see that the magnetic charge has a small effect on the deflection angle of the black hole in the different plasma distributions. We also notice from the right figure that when we increase the plasma parameter ω_0^2/ω^2 or ω_c^2/ω^2, the deflection angle in the presence of the SIS or NSIS medium diminishes, whereas the deflection angle in the uniform plasma shows the opposite trend. Finally, the deflection angle decreases with increasing impact parameter for all three models. In a word, the degree of bending can be ordered as α̂_uni > α̂_sis > α̂_nsis.
§ MAGNIFICATION OF IMAGE
In this section, we analyze in detail the magnification of the image for the ESTGB black hole in the presence of the different plasma distributions, using the formulas for the deflection angle derived in the previous section. The lens equation has the form <cit.>
θ D_s=β D_s+α̂_b D_d s,
where D_s is the distance from the observer to the distant light source, and D_d s is the distance from the lens object to the distant light source (see Fig.<ref>). θ denotes the angle of the apparent source image for the observer lens axis, β denotes the angle of the light source with respect to the observer lens axis, and α̂_b is the angle between the apparent source image and light source, i.e., deflection angle. We make use of the relationship between the impact parameter and angle θ, and θ possesses the expression b=D_dθ where D_d is the distance from the lens object to the observer, to rewrite the expression (<ref>), into the form <cit.>
β =θ-D_ds/D_sF(θ)/D_d1/θ,
and
F(θ)=|α̂_b|b=|α̂_b(θ)|D_dθ.
Note that when the light source, lens object, and observer lie on a straight line, the angle β is equal to zero. In such a case, the image forms a ring known as the Einstein ring. The radius of the Einstein ring is R_0=D_dθ_0, where θ_0 denotes the Einstein angle. The Einstein angle in the context of the Schwarzschild black hole can be expressed as <cit.>
θ_0=√(2R_sD_ds/D_dD_s).
The Einstein angle θ_0 is small and can hardly be resolved with modern telescopes. However, we can still detect gravitational lensing owing to the changes in the apparent brightness of the source, namely the magnification of the image brightness. The basic equation for the magnification of the image brightness is expressed as <cit.>
μ_Σ=I_tot/I_*=∑_k|(θ_k/β)(dθ_k/dβ)|, k=1,2,...,s,
where I_tot and I_* refer to the total brightness of the image and unlensed brightness of the pure source, respectively. k is the number of the images and s is the total number of the images.
Next, we study the effect of the different plasma distributions around the ESTGB black hole on the magnification of the images.
§.§ Uniform plasma
We first calculate the Einstein angle θ^pl_0 in the presence of the uniform plasma. Using Eqs.(<ref>) and (<ref>), we obtain
(θ^pl_0)_uni=θ_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^1/2.
Substituting Eq.(<ref>) into Eq.(<ref>), we obtain the magnification of the image, which is given by <cit.>
μ_tot^pl=μ_+^pl+μ_-^pl=x^2+2/x√(x^2+4).
Here μ_+ is the magnification factor of the primary image, which is located on the same side as the light source with respect to the lens object <cit.>
μ_+=1/4[x/√(x^2+4)+√(x^2+4)/x+2],
and μ_- is the magnification factor of the secondary image, which is situated on the opposite side
μ_-=1/4[x/√(x^2+4)+√(x^2+4)/x-2],
where x denotes the dimensionless parameter in the presence of the uniform plasma. It has the following form
x_uni=β/(θ^pl_0)_uni=x_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^-1/2,
with x_0=β/θ_0.
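The magnification formulas above are easy to evaluate directly. The following Python sketch implements μ_±, μ_tot and x_uni; the parameter values mirror those quoted in the text (R_s=2, b=3, q=-0.5, x_0=0.055), but the script itself is only an illustration, not code from the paper.

import numpy as np

# Sketch of the uniform-plasma magnifications; parameter values follow the text.
R_s, b, q, x0 = 2.0, 3.0, -0.5, 0.055

def x_uniform(w_ratio2):
    """Dimensionless source position x = beta/theta_0^pl for plasma ratio w0^2/w^2."""
    bracket = 0.5 * ((1 + 2*q**3/(3*R_s*b**2))
                     + (1 + 2*q**3/(R_s*b**2)) / (1 - w_ratio2))
    return x0 * bracket**(-0.5)

def magnifications(x):
    s = np.sqrt(x**2 + 4.0)
    mu_plus = 0.25 * (x/s + s/x + 2.0)    # primary image
    mu_minus = 0.25 * (x/s + s/x - 2.0)   # secondary image
    mu_tot = (x**2 + 2.0) / (x*s)         # total; equals mu_plus + mu_minus
    return mu_plus, mu_minus, mu_tot

for w2 in (0.0, 0.25, 0.5):
    x = x_uniform(w2)
    mp, mm, mt = magnifications(x)
    print(f"w0^2/w^2={w2:4.2f}  x={x:.4f}  mu_tot={mt:.1f}")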
For a better understanding of the effect of the magnetic charge and plasma on the image magnification, in Fig.<ref> we plot the variation of the total magnification with the magnetic charge for different values of the uniform plasma parameter (left panel), and with the uniform plasma parameter for different values of the magnetic charge (right panel), for fixed R_s=2, b=3 and x_0=0.055. The total magnification increases slightly as the absolute value of the magnetic charge decreases, and reaches its maximum in the Schwarzschild limit. The right panel shows that the total magnification grows rapidly as the uniform plasma density increases; in other words, the presence of uniform plasma generally enhances the magnification. Besides, in Fig.<ref> we also plot the ratios μ_+^pl/μ_+ (lower curves) and μ_-^pl/μ_- (upper curves) for the given parameters q=-0.5, b=3 and R_s=2, for more details on the effect of the plasma on the magnification. Evidently, the magnification ratios grow as the uniform plasma density increases, which is consistent with the fact that the deflection angle increases with ω_0^2/ω^2. In addition, as x increases, the magnification ratio of the secondary image μ_-^pl/μ_- becomes larger, while the magnification ratio of the primary image μ_+^pl/μ_+ tends to unity.
§.§ Singular isothermal sphere
In the previous subsection we calculated the deflection angle for the 4D ESTGB black hole surrounded by uniform plasma. In this subsection, we consider the influence of the SIS medium on the total magnification and on the magnification ratio of the image brightness. The Einstein angle θ^pl_0 in the context of the SIS medium can be expressed as
(θ^pl_0)_sis=θ_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^1/2.
Since the calculation is analogous, the dimensionless parameter x in the presence of the SIS plasma medium has the following form
x_sis=β/(θ^pl_0)_sis=x_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^-1/2,
where x_0=β/θ_0.
Fig.<ref> shows the total magnification as a function of the magnetic charge (left panel) for different values of the SIS parameter, and as a function of the SIS parameter (right panel) for different values of the magnetic charge, with fixed parameters b=3, x_0=0.055 and R_s=2. From Fig.<ref>, we can see that the total magnification decreases gradually as the SIS plasma parameter increases.
Because the plasma density decreases with the radius (dN/dr<0), α̂_sis3 is negative, acting opposite to the gravitational deflection (see Refs.<cit.> and <cit.>). If α̂_sis3 were positive, the total magnification as a function of ω_c^2/ω^2 would show the opposite trend (see Refs.<cit.>). Fig.<ref> shows the magnification ratios of the primary image μ_+^pl/μ_+ (lower curves) and the secondary image μ_-^pl/μ_- (upper curves) for the fixed parameters q=-0.5, b=3 and R_s=2. Because of the effect of the SIS medium, the behavior of the magnification ratio is opposite to that of the uniform plasma.
§.§ Non-Singular isothermal sphere
In this subsection, we focus on the total magnification and the magnification ratio of image brightness for the ESTGB black hole surrounded by the NSIS medium. The Einstein angle θ_0^pl can be written as
(θ^pl_0)_nsis =θ_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2-b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2))
×ω_c^2R_s b/ω^2 +(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^1/2.
The dimensionless parameter x has the form
x_nsis =β/(θ^pl_0)_nsis
=x_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2- b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2))
×ω_c^2R_s b/ω^2+(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^-1/2,
where x_0=β/θ_0.
In Fig.<ref>, we show the total magnification for the case in which the black hole is surrounded by the NSIS medium. The behavior shown in Fig.<ref> is similar to the singular-isothermal-sphere case. The presence of the NSIS medium reduces the total magnification compared with the vacuum case, i.e., ω_c^2/ω^2=0, because α̂_nsis3 is negative. We also plot the magnification ratios of the primary and secondary images for fixed R_s=2, b=3, x_0=0.055 and r_c=3 in Fig.<ref>. It is observed that μ_-^pl/μ_- (upper curves) tends to unity at larger x, while the ratio μ_+^pl/μ_+ (lower curves) stays below unity.
We compare the magnification ratios of the image brightness of the Schwarzschild black hole and the ESTGB black hole in uniform plasma in Fig.<ref>. At large x, the ratio μ_+^pl/μ_+ tends to unity for both the Schwarzschild and the ESTGB black hole, while the ratio μ_-^pl/μ_- of the Schwarzschild black hole tends to a constant, 2.25. This is consistent with the results of Bisnovatyi-Kogan et al.<cit.>. In addition, the magnetic charge has only a slight influence on the magnification ratio of the image.
To compare the effects of the different plasma models on magnification ratio of image brightness, in Fig.<ref> we plot the magnification ratio of the three plasma distributions, i.e., uniform, SIS and NSIS, with the same parameters q=-0.5, b=3, R_s=2, ω_0^2/ω^2=ω_c^2/ω^2=0.5 and r_c=3.
We can see from Fig.<ref> that, as a consequence of the non-uniform plasma distribution around the black hole, the magnification ratio for non-uniform plasma is smaller than that for uniform plasma. This means that a distant observer will perceive a considerable magnification only when the black hole is surrounded by uniform plasma.
§ CONCLUSION AND DISCUSSION
In this work, we discussed the weak gravitational lensing properties of a 4D ESTGB black hole immersed in different plasma distribution models. We studied in detail the effect of the different plasma distributions, i.e., uniform, SIS and NSIS media, and of the magnetic charge on the deflection of light. We found that the deflection angle increases slightly as the absolute value of the magnetic charge decreases; that is, the deflection angle is maximal in the Schwarzschild limit. We showed that the presence of uniform plasma leads to an increase in the deflection angle. However, because the term α̂_sis3 (α̂_nsis3) caused by the plasma inhomogeneity is negative, the deflection angle in the non-uniform plasma medium diminishes slightly as the plasma parameter increases. Moreover, compared with the SIS model, we found that the deflection angle is more sensitive to the parameters b and ω_c^2/ω^2 in the NSIS model. We also investigated the total image magnification due to the weak gravitational lensing effect around a plasma-surrounded black hole. We observed that the total magnification changes in the same way as the deflection angle: for the uniform plasma model the image magnification increases, while for the SIS or NSIS model it decreases. This result is also reflected in the magnification ratio of the image source.
Finally, based on the influence of the three plasma models on the deflection angle and the image magnification, we can qualitatively regard the uniform plasma as a concave lens, and the SIS and NSIS plasma models as convex lenses, in the context of a refractive index n<1.
§ ACKNOWLEDGMENTS
This work was supported partly by the National Natural Science Foundation of China (Grant No. 12065012), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (Grant No. YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006).
99
Akiyama2019
K. Akiyama et al. [Event Horizon Telescope],
Astrophys. J. Lett. 875, L1 (2019).
LIGOScientific:2016aoc
B. P. Abbott et al. [LIGO Scientific and Virgo],
Phys. Rev. Lett. 116, 061102 (2016).
Doneva:2018rou
D. D. Doneva, S. Kiorpelidi, P. G. Nedkova, E. Papantonopoulos and S. S. Yazadjiev,
Phys. Rev. D 98, 104056 (2018).
Doneva:2017bvd
D. D. Doneva and S. S. Yazadjiev,
Phys. Rev. Lett. 120, 131103 (2018).
Heydari-Fard:2016nlj
M. Heydari-Fard, H. Razmi and M. Yousefi,
Int. J. Mod. Phys. D 26, 1750008 (2016).
Canate:2020kla
P. Cañate and S. E. Perez Bergliaffa,
Phys. Rev. D 102, 104038 (2020).
Li:2022jda
Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan,
Chin. J. Phys. 77, 1269-1277 (2022).
Ma:2022gzr
C. Ma, Y. Zhang, Q. Li and Z. W. Lin,
Commun. Theor. Phys. 74, 065402 (2022).
Lin:2022eix
Z. W. Lin, Y. Zhang, Q. Li, C. Ma and P. F. Duan,
Int. J. Theor. Phys. 61, 199 (2022).
Eiroa:2005ag
E. F. Eiroa,
Phys. Rev. D 73, 043002 (2006).
Wei:2011bm
S. W. Wei and Y. X. Liu,
Phys. Rev. D 85, 064044 (2012).
Virbhadra:1999nm
K. S. Virbhadra and G. F. R. Ellis,
Phys. Rev. D 62, 084003 (2000).
Virbhadra:2022iiy
K. S. Virbhadra,
Phys. Rev. D 106, 064038 (2022).
Sereno:2003nd
M. Sereno,
Phys. Rev. D 69, 023002 (2004).
Jusufi:2017vta
K. Jusufi, A. Ovgün and A. Banerjee,
Phys. Rev. D 96, 084036 (2017).
Ovgun:2018oxk
A. Övgün,
Universe 5, 115 (2019).
Li:2020wvn
Z. Li, G. Zhang and A. Övgün,
Phys. Rev. D 101, 124058 (2020).
Fu:2021akc
Q. M. Fu, L. Zhao and Y. X. Liu,
Phys. Rev. D 104, 024033 (2021).
Javed:2020pyz
W. Javed, J. Abbas, Y. Kumaran and A. Övgün,
Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021).
Javed:2021arr
W. Javed, A. Hamza and A. Övgün,
Universe 7, 385 (2021).
Li:2021xhy
Z. Li and J. Jia,
Phys. Rev. D 104, 044061 (2021).
Crisnejo:2019xtp
G. Crisnejo, E. Gallo and J. R. Villanueva,
Phys. Rev. D 100, 044006 (2019).
Crisnejo:2019ril
G. Crisnejo, E. Gallo and K. Jusufi,
Phys. Rev. D 100, 104045 (2019).
Jha:2021eww
S. K. Jha, S. Aziz and A. Rahaman,
Eur. Phys. J. C 82, 106 (2022).
Virbhadra:2002ju
K. S. Virbhadra and G. F. R. Ellis,
Phys. Rev. D 65, 103004 (2002).
Rahvar:2018nhx
S. Rahvar and J. W. Moffat,
Mon. Not. Roy. Astron. Soc. 482, 4514-4518 (2019).
Bozza:2010xqn
V. Bozza,
Gen. Rel. Grav. 42, 2269-2300 (2010).
Virbhadra:2008ws
K. S. Virbhadra,
Phys. Rev. D 79, 083004 (2009).
Chen:2013vja
S. Chen and J. Jing,
Class. Quant. Grav. 30, 175012 (2013).
Ji:2013xua
L. Ji, S. Chen and J. Jing,
JHEP 03 (2014), 089 (2014).
Chen:2015cpa
S. Chen and J. Jing,
JCAP 10, 002 (2015).
Chen:2016hil
S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang,
Phys. Rev. D 95, 104017 (2017).
Zhang:2017vap
R. Zhang, J. Jing and S. Chen,
Phys. Rev. D 95, 064054 (2017).
Abbas:2019olp
G. Abbas, A. Mahmood and M. Zubair,
Chin. Phys. C 44, 095105 (2020).
Abbas:2021whh
G. Abbas, A. Mahmood and M. Zubair,
Phys. Dark Univ. 31, 100750 (2021).
Hensh:2021nsv
S. Hensh, J. Schee, A. Abdujabbarov and Z. Stuchlík,
Eur. Phys. J. Plus 137, 242 (2022).
Synge:1960ueh
J.L. Synge, Relativity: the general theory (1960).
Perlick2000
V. Perlick, Ray optics, Fermat's principle, and applications to general relativity (Springer Science & Business Media, 2000).
Bisnovatyi-Kogan:2008qbk
G. S. Bisnovatyi-Kogan and O. Y. Tsupko,
Grav. Cosmol. 15, 20-27 (2009).
Bisnovatyi-Kogan:2010flt
G. S. Bisnovatyi-Kogan and O. Y. Tsupko,
Mon. Not. Roy. Astron. Soc. 404, 1790-1800 (2010).
Schee:2017hof
J. Schee, Z. Stuchlík, B. Ahmedov, A. Abdujabbarov and B. Toshmatov,
Int. J. Mod. Phys. D 26, 1741011 (2017).
Turimov:2022iff
B. Turimov, Y. Turaev, B. Ahmedov and Z. Stuchlík,
Phys. Dark Univ. 35, 100946 (2022).
Kala:2022uog
S. Kala, H. Nandan and P. Sharma,
Eur. Phys. J. Plus 137, 457 (2022).
Zhang:2022osx
Z. Zhang, H. Yan, M. Guo and B. Chen,
Phys. Rev. D 107, 024027 (2023).
Atamurotov:2021byp
F. Atamurotov, S. Shaymatov and B. Ahmedov,
Galaxies 9, 54 (2021).
Atamurotov:2021qds
F. Atamurotov, A. Abdujabbarov and J. Rayimbaev,
Eur. Phys. J. C 81, 118 (2021).
Babar:2021exh
G. Z. Babar, F. Atamurotov and A. Z. Babar,
Phys. Dark Univ. 32, 100798 (2021).
Babar:2021nst
G. Z. Babar, F. Atamurotov, S. Ul Islam and S. G. Ghosh,
Phys. Rev. D 103, 084057 (2021).
Hensh:2019ipu
S. Hensh, A. Abdujabbarov, J. Schee and Z. Stuchlík,
Eur. Phys. J. C 79, 533 (2019).
Atamurotov:2021hoq
F. Atamurotov, A. Abdujabbarov and W. B. Han,
Phys. Rev. D 104, 084015 (2021).
S1958
S. Chandrasekhar and S. Chandrasekhar, An introduction to the study of stellar structure (Courier Corporation, 1957).
J1987
J. Binney and S. Tremaine, Galactic dynamics (Princeton university press, 2011).
Morozova
V. S. Morozova, B. J. Ahmedov and A. A. Tursunov, Astrophys. Space Sci. 346, 513-520 (2013).
Bisnovatyi-Kogan:2015dxa
G. S. Bisnovatyi-Kogan and O. Y. Tsupko,
Plasma Phys. Rep. 41, 562 (2015).
|
http://arxiv.org/abs/2307.05951v1 | 20230712063439 | Magnetic control of orientational order and intrinsic hydrodynamic instability in bacterial turbulence | [
"Kazusa Beppu",
"Jaakko V. I. Timonen"
] | cond-mat.soft | [
"cond-mat.soft",
"physics.bio-ph"
] |
Department of Applied Physics, Aalto University School of Science, Puumiehenkuja 2, Espoo, 02150, Finland
Department of Applied Physics, Aalto University School of Science, Puumiehenkuja 2, Espoo, 02150, Finland
Highly concentrated active agents tend to exhibit turbulent flows, reminiscent of classical hydrodynamic turbulence, which has attracted considerable attention lately. Controlling the so-called active turbulence has long been a challenge, and the influence of external fields on such chaotic self-organization remains largely unexplored. Here we report on active turbulence of Bacillus subtilis bacteria controlled by a uniform magnetic field via a magnetizable medium based on magnetic nanoparticles. The rod-shaped bacteria act as non-magnetic voids in the otherwise magnetic medium, allowing magnetic torques to be generated on their bodies. This leads to an externally controllable nematic alignment constraint that further controls bacterial turbulence into a nematic state. The nematic orientational ordering in the direction parallel to the magnetic field is accompanied by transverse flows owing to active stress by dipole pushers, which induce undulation of the nematic state. Remarkably, the typical length of the undulation is almost independent of the magnetic field strength. Our theoretical model based on the hydrodynamic equations for suspensions of self-propelled particles predicts the intrinsic length scale of hydrodynamic instability independent of the magnetic field. Our findings suggest that magnetic torques are a powerful approach for controlling both individual agents and their collective states in active systems.
Magnetic control of orientational order and intrinsic hydrodynamic instability in bacterial turbulence
Jaakko V. I. Timonen
August 12, 2023
======================================================================================================
Collections of autonomous motile elements which convert energy locally into mechanical motion, the so-called active matter, tend to display a rich variety of collective motion and self-organization<cit.>. Intriguingly, such active systems often exhibit self-sustained turbulent flows consisting of transient vortices and jets, named active turbulence, reminiscent of classical hydrodynamic turbulence<cit.>. Paradigmatic examples range from biological systems, e.g., swimming bacteria<cit.>, eukaryotic cells<cit.>, and sperm cells<cit.>, to non-biological systems, e.g., synthetic colloidal particles<cit.>. Although it occurs in the low-Reynolds-number regime, active turbulence is self-organized by local energy injection from the constituent elements. Due to its ubiquity, active turbulence has attracted great attention during the last two decades. However, controlling active turbulence has turned out to be challenging. While several efforts have focused on controlling the turbulence through boundary conditions via static geometric walls<cit.>, the highly desired ability to control all active particles in the bulk of the liquid using, e.g., external fields remains largely unexplored and unachieved.
In this article, we show a method for exerting uniform torques on large populations of non-spherical active agents by immersing them in a magnetic liquid medium and generating the torque with an external magnetic field. We demonstrate this with Bacillus subtilis 3610, archetypical bacteria forming active turbulent states. We show that the swimming orientation and nematic ordering can be controlled magnetically not only in dilute but also in dense, turbulent suspensions. We show further that the nematically aligned dense suspension is unstable as the active stresses from the bacteria induce orientational undulation. The length scale of the undulation is independent of the magnetic field strength, providing conclusive experimental evidence of the intrinsic hydrodynamic instability in bacterial turbulence that is further supported by a two-field continuum model.
§ RESULTS
The magnetically controllable bacterial suspensions were prepared by mixing B. subtilis cultured in Terrific Broth (TB) medium with polyethylene glycol (PEG) stabilized ferrofluid (Ferrotec, PBG300). The volumetric concentration ϕ of the ferrofluid was varied from 0.02 to 0.10 (i.e., from 2% to 10%). The presence of the magnetic nanoparticles, stabilizing agent, or dilution of the TB medium did not affect the mean swimming velocity of the bacteria (Fig. S1). Furthermore, as desired, the nanoparticles were observed not to attach to the bacteria (Fig. S2). The suspensions were studied under a uniform magnetic field generated by a horizontal Helmholtz coil (Fig. <ref>a).
§.§ Magnetic control of the alignment of individual bacteria
At low bacterial densities (c_0 ≈ 1–2×10^7 cells/cm^3) and in the absence of a magnetic field, the bacteria swim at speeds of ∼20 in an isotropic manner, as expected. When a uniform magnetic field is applied, the bacteria align with the field in a nematic manner, and the nematic order increases with increasing magnetic field strength (Fig. <ref>b and Supplementary Videos 1 and 2), which can be characterized by the velocity orientation distribution of the tracked bacteria (Fig. <ref>c).
The degree of the nematic alignment can be quantified by using the nematic order parameter
S = ⟨cos2(θ_j(t)-θ_B) ⟩_j,t
where θ_j(t) denotes the orientation of the velocity (v_b) of the jth bacterium at time t, θ_B = π is the direction of the magnetic field, and ⟨·⟩_j,t indicates an ensemble average over all tracked bacteria and over the duration of the tracks (10), measured after the system has reached its new steady state once the magnetic field is turned on. As shown in Fig. <ref>d, the nematic order increases with the magnetic field strength and approaches unity at ϕ = 0.1 already under a modest field strength of 10 mT.
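As an illustration of how S is computed from tracked swimming directions, consider the minimal Python sketch below; the synthetic angle samples are hypothetical and merely demonstrate the expected limits, S≈0 for the isotropic state and S→1 for strong alignment.

import numpy as np

# Minimal sketch of the nematic order parameter of tracked swimming
# directions theta_j(t) (radians) relative to the field direction theta_B.
def nematic_order(theta, theta_B=np.pi):
    """theta: array of velocity orientations pooled over bacteria and time."""
    return np.mean(np.cos(2.0 * (theta - theta_B)))

rng = np.random.default_rng(0)
iso = rng.uniform(0, 2*np.pi, 10_000)            # field off: isotropic
aligned = np.pi + rng.normal(0, 0.2, 10_000)     # strong field: small fluctuations
print(nematic_order(iso), nematic_order(aligned))  # ~0 and ~0.92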
To clarify the mechanism of the magnetic orientation, we analyzed the bacterial body orientations, n_b = (cosθ^',sinθ^'), by using ellipsoidal fits (Fig. <ref>e). The cell body orientation distribution is well fitted by Aexp[-βsin^2(θ^')], where A is a normalization factor and β is a fitting parameter, suggesting that the bacterial body orientations follow the alignment mechanism described by the nematic potential, -sin^2(θ^') (see Materials and Methods for details). This potential can be derived from the interaction of the uniform magnetic field with the effective magnetic moment of a non-magnetic void in the ferrofluid<cit.>. The fitting parameter corresponds to the inverse of the orientational fluctuation, i.e., β=1/⟨δθ^'2⟩, which we found to increase almost linearly with the magnetic field strength (Fig. <ref>f)<cit.>.
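The fitting procedure can be sketched as follows; the synthetic orientations below are rejection-sampled from the nematic Boltzmann weight with an assumed β, which the fit then recovers. This is an illustration of the method, not the actual analysis pipeline.

import numpy as np
from scipy.optimize import curve_fit

# Fit of a body-orientation histogram to A*exp(-beta*sin^2(theta')),
# from which beta = 1/<delta theta'^2> is read off (synthetic data).
def model(theta, A, beta):
    return A * np.exp(-beta * np.sin(theta)**2)

rng = np.random.default_rng(1)
beta_true = 8.0                       # assumed value for the demonstration
th = rng.uniform(-np.pi/2, np.pi/2, 200_000)
keep = rng.uniform(size=th.size) < np.exp(-beta_true*np.sin(th)**2)
hist, edges = np.histogram(th[keep], bins=60, density=True)
centers = 0.5*(edges[:-1] + edges[1:])
(A_fit, beta_fit), _ = curve_fit(model, centers, hist, p0=(1.0, 1.0))
print(beta_fit)   # ~8, i.e. orientational fluctuation <dtheta'^2> ~ 1/8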
At the highest magnetic field strengths and longest durations of observation, the magnetic nanoparticles start to form chains, as expected from dipolar forces (Fig. <ref>b and Supplementary Video 2). However, the chaining occurs much more slowly than the directional change in bacterial orientation. Once the magnetic field is turned off, the nanoparticle chains quickly redisperse into a homogeneous isotropic dispersion.
§.§ Magnetic control of bacterial turbulence
At high bacterial densities (c_0 ≈ 6×10^10 cells/cm^3), active turbulence appeared in the magnetic bacterial suspensions just as in the regular non-magnetic suspension (Figs. <ref>a,b). The addition of the magnetic nanoparticles did not significantly change the collective velocity or the intrinsic vortex structure compared to the control experiments without magnetic nanoparticles (Figs. S3a,b). In contrast to the dilute samples studied in glass capillaries, the dense suspensions were investigated as thin films ∼60 high with a large liquid-air interface to allow the B. subtilis bacteria to access oxygen and maintain their motility (Fig. <ref>a). When a magnetic field was applied, the disordered turbulent state was maintained at low field strengths, while in stronger magnetic fields the bacteria aligned with the direction of the applied field (Fig. <ref>b and Supplementary Video 3). This is in contrast to the dilute system (Fig. <ref>), where the nematic alignment begins to increase even at the lowest magnetic fields (Fig. <ref>d), suggesting that the active turbulent state can resist some degree of external torque.
Peculiarly, additional transverse flows appeared in high magnetic fields (Fig. <ref>c and Supplementary Video 4). The time series of the nematic order parameters for the bacterial orientation and collective velocity are plotted in Fig. <ref>d. When the magnetic field is off, both order parameters are close to zero. However, at high field strengths, the order parameter for the bacterial orientation sharply increases up to about unity, whereas that for the collective velocity drops down to about -0.3. When the field is turned off again, both values approach zero, and in turn, the turbulent state is recovered – demonstrating the ability to switch the system between nematic order and active turbulence magnetically. The tendencies of nematic ordering and transverse flows were found to increase monotonically with the magnetic field strength (Fig. <ref>e).
In order to quantify the ordered structures in the emergent patterns, we define the correlation function of bacterial orientation as follows:
C_n(Δ r) = ⟨⟨cos2(θ( r+Δ r,t)-θ( r,t)) ⟩_ r⟩_t
where Δ r is a distance, and ⟨·⟩_ r,t denotes an ensemble average in space and time (averaged over 10 seconds). To clarify the anisotropic structural details of the orientational ordering, we decompose the correlation into Δ x and Δ y components (Figs. <ref>a,b). Depending on the magnetic field, these correlations display an extension of the correlated area in both the parallel (x) and perpendicular (y) directions. The orientational correlation can be fitted by
G(|Δ r|) = (1 - a)e^-(|Δ r|/l)^b + a
where l, a, and b are adjustable fitting parameters <cit.>. The parameter l denotes the coherence length, corresponding to the average nematic domain length, and is plotted for the directions parallel and perpendicular to the applied field as a function of the magnetic field strength in Fig. <ref>c. While the coherence length in the x direction remains almost constant over the range of magnetic fields, that in the y direction increases gradually.
Importantly, a slight yet distinct wavy profile appears in the x direction. To extract the wavelength of the bending, we define a characteristic length λ as the position of the first local minimum (Figs. <ref>c,h). The length λ_n,x is nearly constant, as is the coherence length l_n,x. The quantity a stands for the degree of order, and b for the width of the distribution of l (Fig. <ref>d). The nematic order increases monotonically in both the x and y directions (Fig. <ref>d, top and bottom), consistent with Fig. <ref>d. On the other hand, the width of the distribution of l increases in the x direction but decreases in the y direction, indicating that the horizontal nematic domains become more prominent as the magnetic field strength grows. Together, these observations suggest that the nematic order over the entire region is enhanced by the magnetic field, but that a wavelike orientational structure with a constant length scale persists along the magnetic field direction in the nematically ordered state.
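A minimal sketch of the coherence-length extraction is given below: the stretched-exponential profile G(|Δ r|) above is fitted to a correlation curve. The curve here is synthetic and the parameter values are invented for illustration only.

import numpy as np
from scipy.optimize import curve_fit

# Sketch of extracting the coherence length l by fitting Eq. (3) to an
# orientation-correlation profile C_n(dx); the profile below is synthetic.
def G(dr, l, a, b):
    return (1.0 - a) * np.exp(-(dr/l)**b) + a

dr = np.linspace(0.0, 150.0, 60)                 # lag (micrometres assumed)
C = G(dr, 30.0, 0.4, 1.2) + 0.01*np.random.default_rng(2).normal(size=dr.size)
(l, a, b), _ = curve_fit(G, dr, C, p0=(10.0, 0.0, 1.0),
                         bounds=([1e-3, -1.0, 0.1], [1e3, 1.0, 4.0]))
print(l, a, b)   # recovers ~(30, 0.4, 1.2)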
To quantify the characteristics of the flow fields, we analyzed the normalized velocity correlation in space defined as
C_v(Δ r) = ⟨⟨ v( r+Δ r,t)· v( r,t) ⟩_ r/⟨ v( r,t)· v( r,t) ⟩_ r⟩_t
where the ensemble average is carried out as above. Figures <ref>e and f show the velocity correlation in the x and y directions, respectively. Since the flow field includes vortical structures of clockwise and anti-clockwise handedness, the correlation function exhibits a local minimum, from which we obtain the characteristic lengths λ_v,x and λ_v,y in both directions, as for λ_n,x. The insets in Figs. <ref>e and f show the B-dependence of the characteristic lengths, which is consistent with that of the bacterial orientation. Notably, the length λ_v,x is comparable to λ_n,x, suggesting that the periodic longitudinal flow is attributable to the periodically undulating orientation field of the bacteria.
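Reading off λ as the first local minimum of a correlation profile is a one-loop operation; a minimal sketch on a toy damped oscillation follows (the period of 110 and decay length of 40 are arbitrary choices, not measured values).

import numpy as np

# The characteristic lengths lambda_{v,x}, lambda_{v,y} are read off as the
# first local minimum of the velocity correlation; a minimal finder:
def first_local_min(dr, C):
    for i in range(1, len(C) - 1):
        if C[i] < C[i-1] and C[i] <= C[i+1]:
            return dr[i]
    return np.nan   # monotonic profile: no vortical length scale

dr = np.linspace(0, 200, 101)
C = np.exp(-dr/40.0) * np.cos(2*np.pi*dr/110.0)   # toy damped oscillation
print(first_local_min(dr, C))   # ~48, the first dip of the damped cosine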
Furthermore, we analyzed the temporal normalized correlation function of velocity defined as follows: C_v(Δ t) = ⟨v̂( r,t+Δ t)·v̂( r,t) ⟩_ r,t, where v̂ is a unit vector of the velocity field, and the ensemble average is taken as above. By fitting it with an exponential function, exp[-Δ t/τ], we can obtain a typical correlation time (Fig. <ref>g). In a weak or intermediate magnetic field, the time τ_v is approximately 1 and corresponds to a typical lifetime of turbulent vortices<cit.>, but in the strong magnetic field, it increases by a factor of two due to the long persistence of the nematic ordered phase.
§.§ Turning the active turbulence on and off magnetically: intrinsic hydrodynamic instability
Since the external magnetic field and torque can be switched on and off quickly, our magnetic approach allows probing transitions between different collective states. Such control is impossible using, e.g., geometric boundaries that are stationary<cit.>. In the case of the nematically aligned dense B. subtilis population, turning off the magnetic field leads to the rapid growth of the underlying minute undulation (Figs. <ref>a-d and Supplementary Video 5).
The analysis of the spatial velocity correlations as a function of time shows that the stripe-like correlation becomes more prominent at short time scales (< 0.5) after the magnetic field is turned off (Fig. <ref>e). At longer time scales (> 1), the full-fledged active turbulence is recovered.
As shown in Refs.<cit.>, swimming bacteria like B. subtilis are pusher-type microswimmers that exert force dipoles on the surrounding fluid<cit.>, and the resulting active stress is known to induce orientational undulation mediated by fluid flows. Such self-amplifying bending deformation is an important feature of extensile active nematic systems<cit.>. Taking our cue from the activity-induced hydrodynamic instability demonstrated in those earlier works, we investigate whether the observed transverse flows stem from the active stress exerted by the pushers, by considering the 2D Stokes equation governing the fluid flow velocity u( r,t)<cit.>:
-μ∇^2 u + ∇ p + α u = -f_0∇· nn
where μ is the viscosity coefficient, the pressure p is the Lagrange multiplier for the incompressibility condition (∇· u = 0), the α term is the effective friction with the substrate, and the right-hand side of the equation represents the active force with the coefficient f_0 (f_0>0 for pushers), determined by the bacterial orientation field n( r,t). We assume the density distribution of the bacteria to be uniform over space even when a magnetic field is applied, owing to the sufficiently high concentration. Accordingly, the strength of the active force, f_0 = qc_0, where q and c_0 are the dipole strength and the number density of the bacteria, respectively, is assumed to be constant. As shown in Figs. <ref>a,b, we calculated the fluid velocity field u( r,t) from the instantaneous orientation field n( r,t) by solving equation (<ref>); the result is similar to the collective velocity v( r,t) obtained from the PIV analysis, suggesting that the emergent transverse flow originates from the active stress. The optimal parameters in equation (<ref>) are found in the manner proposed in ref.<cit.>. We introduce a parameter Q defined as Q(μ/α, f_0/α) = ⟨| u( r,t) - v( r,t)|^2/| v( r,t)|^2 ⟩_ r,t, where ⟨·⟩_ r,t denotes the spatial and temporal average, to quantify the extent to which the calculated flow field u( r,t) coincides with the experimentally obtained v( r,t). The minimum value of Q, averaged over 0.5 after the magnetic field (B = 28.2 mT) is turned off, yields the optimal parameter set (μ/α, f_0/α) = (397^2, 686^2) (Fig. <ref>c).
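The spectral solution of equation (<ref>) used here can be sketched compactly. The script below assumes a periodic square domain and micrometre-second units, takes the quoted optimal ratios μ/α and f_0/α, and applies the incompressibility projection in Fourier space; the grid size, box size and the test orientation field are illustrative choices, not the experimental fields.

import numpy as np

# Sketch of the spectral solution of Eq. (5): given an orientation field n(r),
# solve  -mu*Lap(u) + grad(p) + alpha*u = -f0*div(nn)  with div(u)=0.
N, L = 128, 512.0                      # grid points, box size (um, assumed)
mu_a, f0_a = 397.0, 686.0              # mu/alpha, f0/alpha (quoted values)

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                          # avoid 0/0; the zero mode vanishes anyway

def flow_from_orientation(theta):
    nx, ny = np.cos(theta), np.sin(theta)
    # Fourier transform of -(f0/alpha) * div(nn), component-wise
    gx = -f0_a*1j*(kx*np.fft.fft2(nx*nx) + ky*np.fft.fft2(nx*ny))
    gy = -f0_a*1j*(kx*np.fft.fft2(nx*ny) + ky*np.fft.fft2(ny*ny))
    # incompressible projection, then invert (1 + (mu/alpha) k^2)
    div = (kx*gx + ky*gy)/k2
    ux = np.real(np.fft.ifft2((gx - kx*div)/(1.0 + mu_a*k2)))
    uy = np.real(np.fft.ifft2((gy - ky*div)/(1.0 + mu_a*k2)))
    return ux, uy

# undulating nematic state: n = (cos th, sin th), th = 0.2*sin(2 pi x / 55)
x = np.arange(N)*L/N
theta = 0.2*np.sin(2*np.pi*x/55.0)[:, None]*np.ones((1, N))
ux, uy = flow_from_orientation(theta)
print(np.abs(uy).max())   # transverse flow generated by the bend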
Bacterial turbulence is, in general, known to be well reproduced by single-field models, which are justified as long as the active forces are sufficiently small compared to the viscous ones<cit.>. However, in order to quantitatively illustrate the undulation mechanism of the nematically aligned state by the fluid flows that the active stress induces (Fig. <ref>h and Fig. <ref>d), we describe the system here using a two-field continuum model, with reference to earlier works (Refs.<cit.>). The evolution equation of the bacterial orientation field n( r,t) is given by
∂ n/∂ t + ( u + V_0 n)·∇ n = D_n∇^2 n + ( I - nn)·(γ E+ W)· n + J_B( h· n) h·( I- nn).
On the left-hand side, the bacterial alignment is advected by u + V_0 n, where V_0 is the swimming speed of the bacteria. On the right-hand side, the first term denotes diffusion with coefficient D_n through nematic interactions due to the rod-like shape of the bacteria. The second term reorients the bacterial alignment via the fluid flow, through the solvent strain tensor E = (∇^T u + ∇ u^T)/2 and the vorticity tensor W = (∇^T u - ∇ u^T)/2, with a shape parameter -1≤γ≤1. As discussed for the dilute suspensions (Fig. <ref>), the bacteria align with respect to the magnetic field, and hence we incorporate the nematic magnetic torque, with strength J_B and field unit vector h, in the last term.
To understand the undulation instability and its magnetic controllability, we analyze the linear stability of a nearly aligned state along the magnetic field direction. Since the magnetic field forces the bacteria to align parallel to the x-y plane, we may consider a two-dimensional system, in which the magnetic field direction is set to h=(1, 0), with perturbations only in the y direction: n( r,t) = e_x + n_⊥ (| n_⊥|≪1), where e_x=(1, 0) and e_x· n_⊥ = 0 (Fig. <ref>d). Fourier transformation of the orientation field, n_⊥( r) = ñ( k)e^i k· r + σ t, yields the real part of the growth rate along the x direction (see Materials and Methods for details):
Re[σ] = f_0k^2/2(α + μ k^2)(γ + 1) - D_nk^2 - J_B.
As for the characteristic length of the undulation, the typical wavenumber maximizing the growth rate (<ref>) is obtained by differentiating it with respect to k:
k_c = (-D_n + √(D_nF)/D_nM)^1/2,
where F = f_0(1+γ)/(2α), M = μ/α, and we set γ to 0.9 owing to the rod shape of the bacteria<cit.>. In the experiment, we extract the characteristic length λ_n,x from the local minimum of the velocity correlation at 0.5 after the magnetic field is switched off, which gives λ_n,x = 55.3. This corresponds to 2π/k_c, yielding the estimate D_n = 17.3^2. Using the parameters estimated above, the growth rate is mapped against k and J_B in Figs. <ref>e,f. At small J_B, the presence of a positive regime of σ at long wavelengths implies that a nematically aligned state becomes unstable, resulting in the undulation instability. In contrast, at large J_B, the unstable mode of orientation perturbations disappears, as the growth rate is suppressed by J_B. The transition point above which a nematically aligned state is stable is given by
J_B ≥1/M(√(F) - √(D_n))^2.
J_B indicates the strength of the nematic alignment induced by the magnetic field. Based on the microscopic description (see Materials and Methods for details), J_B is given by J_B = 2Γϵ_1 B^2, where Γ represents the response of the bacterial orientation to the magnetic field and ϵ_1 is a magnetic constant. Given that √(2Γϵ_1/D_θ) (mT^-1), where D_θ is the rotational diffusion coefficient, corresponds to the slope of the inverse orientation fluctuation (0.347 mT^-1) at a 10% ferrofluid concentration in Fig. <ref>f, the value of 2Γϵ_1 can be estimated using D_θ = 0.066^2 obtained from an independent experiment (Fig. S3). Taken together, the transition point B_c is estimated to be B_c ≈ 12 mT, which is approximately the field at which the nematic order parameter measured in the experiment exceeds 0.5 (Fig. <ref>e and Fig. <ref>d). Importantly, our model assumes an instability of a nearly nematically aligned state, which is not exactly the behavior observed experimentally, i.e., the transition from bacterial turbulence to the nematic state. This possible discrepancy motivated us to further examine the transition from the nematic state to bacterial turbulence by programming a temporal change of the magnetic field (Fig. S5). A linear decrease of the field strength from B = 28.2 mT to 0 produced a gradual transition from the aligned state to the turbulent state, with the transition point still at B = 10 ∼ 20 mT, as in the case of Fig. <ref>e.
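The numerical estimates quoted above can be checked directly from the expressions for k_c and the stability threshold; a short verification script follows. Units are assumed to be micrometres and seconds, and the slope 0.347 mT^-1 and D_θ are taken from the text.

import numpy as np

# Numerical check of the quoted estimates (micrometre-second units assumed):
# mu/alpha = 397, f0/alpha = 686, D_n = 17.3, gamma = 0.9, D_theta = 0.066.
M, f0_a, Dn, gamma = 397.0, 686.0, 17.3, 0.9
F = f0_a * (1.0 + gamma) / 2.0

k_c = np.sqrt((-Dn + np.sqrt(Dn * F)) / (Dn * M))   # fastest-growing wavenumber
print("lambda =", 2*np.pi/k_c)                      # ~55, cf. lambda_n,x = 55.3

JB_c = (np.sqrt(F) - np.sqrt(Dn))**2 / M            # stability threshold for J_B
two_Gamma_eps1 = 0.347**2 * 0.066                   # from the slope in Fig. 1f
print("B_c =", np.sqrt(JB_c / two_Gamma_eps1))      # ~12 mT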
§ DISCUSSION AND CONCLUDING REMARKS
In summary, we have demonstrated the ability to control the swimming direction and collective states of non-magnetic bacteria by exerting externally controllable torques on them through a biocompatible and magnetizable liquid medium. In dilute bacterial suspensions, the torque can constrain the alignment of the rod-shaped bacteria and their swimming direction to the direction of the magnetic field, leading to externally tunable nematic ordering. In dense suspensions, the magnetic torque sculpts the isotropic active turbulent state into a nematically ordered state; however, flows perpendicular to the magnetic field also appear. The nematically aligned configuration undulates with a characteristic length scale that is almost independent of the applied magnetic field. We put forward a simple continuum model and show that linear stability analysis leads to a characteristic undulation length independent of the magnetic field, in good agreement with the experimental observation.
Our results suggest that externally controllable torque generation on non-magnetic bacteria via a magnetizable medium is a powerful tool to control both individual bacteria and their collective states. In contrast to other techniques such as optical<cit.> or acoustic tweezers<cit.>, our approach generates uniform torques everywhere in large samples of up to cm scales. We foresee that this will be especially important as large-scale patterns often appear in active systems. In contrast to studies with magnetotactic bacteria<cit.>, in our approach the magnetic nanoparticles are outside the non-magnetic bacteria. This will likely allow generalization to all bacterial species and even other micro-organisms with different motility mechanisms and collective states. Finally, dynamic and non-uniform magnetic fields are foreseen to enable even more advanced spatiotemporal control of both individual bacteria and their collective states via additional translational forces<cit.> and programmable time-dependent magnetic fields and torques<cit.>.
45
ramaswamy Ramaswamy, S. The Mechanics and statistics of active matter. Annu. Rev. Condens. Matt. Phys. 1, 323 (2010).
Vicsek Vicsek, T. & Zafeiris, A. Collective motion. Phys. Rep. 517, 71-140 (2012).
marchetti Marchetti, M. C. et al. Hydrodynamics of soft active matter. Rev. Mod. Phys. 85, 1143 (2013).
Alert Alert, R., Casademunt, J. & Joanny, J.-F. Active turbulence. Annu. Rev. Condens. Matter Phys. 13, 143-170 (2022).
Dombrowski Dombrowski, C. et al. Self-concentration and large-scale coherence in bacterial dynamics. Phys. Rev. Lett. 93, 098103 (2004).
Wensink Wensink, H. H. et al. Meso-scale turbulence in living fluids. Proc. Natl. Acad. Sci. USA 109, 14308-14313 (2012).
Patteson Patteson, A. E., Gopinath, A. & Arratia, P. E. The propagation of active-passive interfaces in bacterial swarms. Nat. Commun. 9, 5373 (2018).
Li Li, H., Shi, X.-q., Huang, M. & Zhang, H. P. Data-driven quantitative modeling of bacterial active nematics. Proc. Natl. Acad. Sci. USA 116, 777-785 (2019).
Peng Peng, Y., Liu, Z. & Cheng, X. Imaging the emergence of bacterial turbulence: Phase diagram and transition kinetics. Sci. Adv. 7, eabd1240 (2021).
aranson Aranson, I. S. Bacterial active matter. Rep. Prog. Phys. 85, 07660 (2022)
Mercader Blanch-Mercader, C. et al. Turbulent dynamics of epithelial cell cultures. Phys. Rev. Lett. 120, 208101 (2018).
Lin Lin, S.-Z. et al. Energetics of mesoscale cell turbulence in two-dimensional monolayers. Commun. Phys. 4, 21 (2021).
Creppy Creppy, A. et al. Turbulence of swarming sperm. Phys. Rev. E 92, 032722 (2015).
Nishiguchi Nishiguchi, D. & Sano, M. Mesoscopic turbulence and local order in Janus particles self-propelling under an ac electric field. Phys. Rev. E 92, 052309 (2015).
Wioland1 Wioland, H. et al. Confinement stabilizes a bacterial suspension into a spiral vortex. Phys. Rev. Lett. 110, 268102 (2013).
Wioland2 Wioland, H., Woodhouse, F. G., Dunkel, J. & Goldstein, R. E. Ferromagnetic and antiferromagnetic order in bacterial vortex lattices. Nat. Phys. 12, 341-345 (2016).
Wioland3 Wioland, H., Lushi, E. & Goldstein, R. E. Directed collective motion of bacteria under channel confinement. New J. Phys. 18, 075002(2016).
Beppu1 Beppu, K. et al. Geometry-driven collective ordering of bacterial vortices. Soft Matt. 13, 5038-5043 (2017).
dogic Wu, K.-T. et al. Transition from turbulent to coherent flows in confined three-dimensional active fluids. Science, 355, eaal1979 (2017).
Nishiguchi2 Nishiguchi, D., Aranson, I. S., Snezhko, A. & Sokolov, A. Engineering bacterial vortex lattice via direct laser lithography. Nat. Commun. 9, 4486 (2018).
hardouin Hardouin, J. et al. Active microfluidic transport in two-dimensional handlebodies. Soft Matt. 16, 9230-9241 (2020).
Beppu2 Beppu, K. et al. Edge current and pairing order transition in chiral bacterial vortices. Proc. Natl. Acad. Sci. USA 118, e2107461118 (2021).
Wang Wang, S. et al. Magnetic manipulation and assembly of nonmagnetic colloidal rods in a ferrofluid. Langmuir 37, 1429-1437 (2021).
Cvetko Cvetko, M., Ambrožič, M. & Kralj, S. Memory effects in randomly perturbed systems exhibiting continuous symmetry breaking. Liq. Cryst. 36, 33–41 (2009).
Ramaswamy2 Simha, R. A. & Ramaswamy, S. Hydrodynamic fluctuations and instabilities in ordered suspensions of self-propelled particles. Phys. Rev. Lett. 89, 058101 (2002).
Shelley Saintillan, D. & Shelley, M. J. Instabilities and pattern formation in active particle suspensions: kinetic theory and continuum simulations. Phys. Rev. Lett. 100, 178103 (2008).
Zhou Zhou, S., Sokolov, A., Lavrentovich, O. D. & Aranson, I. S. Living liquid crystals. Proc. Natl. Acad. Sci. USA 111, 1265-1270 (2014).
Genkin Genkin, M. M., Sokolov, A., Lavrentovich, O. D. & Aranson, I. S. Topological defects in a living nematic ensnare swimming bacteria. Phys. Rev. X 7, 011029 (2017).
Turiv Turiv, T. et al. Polar jets of swimming bacteria condensed by a patterned liquid crystal. Nat. Phys. 16, 481–487 (2020).
Drescher Drescher, K., Dunkel, J., Cisneros, L. H. & Goldstein, R. E. Fluid dynamics and noise in bacterial cell–cell and cell–surface scattering. Proc. Natl. Acad. Sci. USA 108, 10940-10945 (2011).
Chandrakar Chandrakar, P. et al. Confinement controls the bend instability of three-dimensional active liquid crystals. Phys. Rev. Lett. 125, 257801 (2020).
Reinken Reinken, H., Klapp, S. H. L., Bär, M. & Heidenreich, S. Derivation of a hydrodynamic theory for mesoscale dynamics in microswimmer suspensions. Phys. Rev. E 97, 022613 (2018).
Wolgemuth Wolgemuth, C. W. Collective swimming and the dynamics of bacterial turbulence. Biophys. J. 95, 1564-1574 (2008).
Reinken2 Reinken, H., Heidenreich, S., Bär, M. & Klapp, S. H. L. Anisotropic mesoscale turbulence and pattern formation in microswimmer suspensions induced by orienting external fields. New J. Phys. 21, 013037 (2019).
clement Vincenti, B. et al. Magnetotactic bacteria in a droplet self-assemble into a rotary motor. Nat. Commun. 10, 5082 (2019).
Rasmussen Rasmussen, M. B., Oddershede, L. B. & Siegumfeldt, H. Appl. Environ. Microbiol. 74, 2441-2446 (2008).
Takatori Takatori, S. C., Dier, R. D., Vermant, J. & Brady, J. F. Acoustic trapping of active matter. Nat. Commun. 7, 10694 (2016).
Jaakko Timonen, J. V. I. et al. Trapping, manipulation, and crystallization of live cells using magnetofluidic tweezers. Nanoscale Horiz. 2, 50-54 (2017).
Jaakko3 Reyes Garza, R. et al. Magnetic Quincke rollers with tunable single particle dynamics and collective states. Sci. Adv. 9, eadh2522 (2023).
Janosi Jánosi, I. M., Kessler, J. O. & Horváth, V. K. Onset of bioconvection in suspensions of Bacillus subtilis. Phys. Rev. E 58, 022613 (1998).
Jaakko2 Cherian, T. et al. Electroferrofluids with nonequilibrium voltage-controlled magnetism, diffuse interfaces, and patterns. Sci. Adv. 7, eabi8990 (2021).
Ershov Ershov, D. et al. TrackMate 7: integrating state-of-the-art segmentation algorithms into tracking pipelines. Nat. Methods. 19, 829–832 (2022).
Schindelin Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods. 9, 676-682 (2012).
Thielicke Thielicke, W. & Stamhuis, E. J. PIVLab – Towards user-friendly, affordable and accurate digital particle image velocimetry in MATLAB. J. Open Res. Softw. 2, e30 (2014).
Rezakhaniha Rezakhaniha, R. et al. Experimental investigation of collagen waviness and orientation in the arterial adventitia using confocal laser scanning microscopy. Biomech. Model. Mechanobiol. 11, 461-473 (2012).
Peruani Peruani, F., Deutsch, A. & Bär, M. A mean-field theory for self-propelled particles interacting by velocity alignment mechanisms. Eur. Phys. J. Spec. Top. 157, 111-122 (2008).
§ MATERIALS AND METHODS
§.§ Preparation of bacterial suspensions in magnetizable media
Bacillus subtilis strain 3610 (Bacillus Genetic Stock Center, Original code: NCIB3610, BGSCID: 3A1) was grown by inoculating a single bacterial colony into 5 of Lysogeny broth (LB) medium (created by dissolving a 25 LB broth base powder (Invitrogen, 12795027) in 1 MilliQ water and autoclaving) in a 50 Falcon tube (Falcon, 352070) and incubating it overnight on an orbital shaker (Grant-bio, PSU-10i) inside an incubator (Memmert, IN55) at 37. The next day, 100 of the culture was added to 20 of fresh Terrific Broth (TB) medium (Gibco, 11632139) and further incubated on the orbital shaker in the incubator at 37 at 160 RPM for ca. 3 hours. After the optical density of the culture, measured at 600 nm in a plastic cuvette with 10 mm path length using a VIS spectrometer (Thermo Scientific, GENESYS 30), reached ca. 0.4, the culture was diluted ca. 10 to 40 times in TB medium to create a dilute bacterial suspension. The dense bacterial suspensions were created by centrifuging (Fisher Scientific, GT R1 Centrifuge) the bacterial suspension (obtained after 3 hours of incubation) at 3000 RPM at room temperature for 5 minutes to increase the density of the suspension to ∼ 20% v/v, at which bacterial turbulence can be observed. The bacterial concentration was estimated from the optical density measured at 600, using the relationship that OD = 1.0 at 600 corresponds to c_0 ≈ 7×10^8 cells/cm^3<cit.>. Finally, the dilute and concentrated bacterial suspensions were mixed with a biocompatible ferrofluid (Ferrotec, PBG300) at the desired proportion: 2%, 5%, or 10%. According to the specifications and physical properties of PBG300 provided by Ferrotec, the viscosity at 27 is 3, and hence we expect the viscosity of the suspension with ferrofluid to be ca. 1 even at a 10% ferrofluid concentration in the absence of magnetic fields, which is almost the same as that of the TB medium.
§.§ Sample preparation for microscopy
The dilute bacterial samples were imaged inside rectangular, 50 mm long glass capillaries 4 mm wide and 0.2 mm deep (with wall thickness 0.2 mm, CM scientific, 3524-050). For the dense bacterial samples, a well was constructed on a regular microscopy glass slide using a thermoplastic ionomer film (DuPont Surlyn, 60) that was attached to the glass slide by heating to 130 on a hotplate. After filling the well, a 500 thick spacer with one 2 diameter well (Invitrogen P18174, 10276972) was placed around the well and covered from the top with a glass coverslip to prevent the suspension from evaporating. Since B. subtilis bacteria require abundant oxygen, bacterial motility is maintained only within a few tens of microns of the water-air interface, which allows us to consider the system quasi-two-dimensional.
§.§ Microscopy under a uniform magnetic field
Imaging of the samples under a uniform horizontal magnetic field was done using a setup described earlier with small modifications<cit.>. Briefly, the uniform horizontal field was created using an electromagnet coil pair (GMW 11801523 and 11801524) with a 50 mm gap. The coils were driven with a DC power supply (BK Precision 9205). Microscopy imaging of the sample was done using a custom-made microscope setup consisting of a 20 × objective lens with a numerical aperture of 0.45 (Nikon, TU Plan Fluor), an Epi-Illuminator Module (Thorlabs, CSE2200), and a 0.5 × Camera Tube (Thorlabs, WFA4102) equipped with a CMOS camera (XIMEA, MC050MG-SY-UB). The sample was illuminated from below in transmitted light configuration using a light-emitting diode light source (Thorlabs, MNWHL4) with a collimator which is diffused right before it hits the sample. The images were acquired at 30 frames per second with an exposure time of 33.
§.§ Data processing and analysis
The trajectories of individual swimming bacteria were tracked with Particle Tracking Velocimetry (PTV) analysis using the TrackMate plugin <cit.> in Fiji<cit.>. Prior to PTV analysis, raw images from the experiments were post-processed by subtracting the background, enhancing the contrast, smoothing with a median filter, and inverting the image intensity histogram. Immotile bacteria, such as those adsorbed on the glass capillary surface, were eliminated by accepting, among trajectories tracked for over 2, only those that showed ballistic motion, with mean square displacement in the regime of 1 proportional to t^c with c greater than 1.5. In addition, to reduce noise in the velocity determination, the accepted trajectories were smoothed by taking the difference between two data points along the trajectory separated by 10 frames (corresponding to the typical time for a bacterium to move one body length), instead of using neighboring data points. These analyses, with the exception of the tracking, were conducted in Matlab (MathWorks) using custom-made scripts.
The velocity field of the bacterial collective motion was obtained using Particle Image Velocimetry (PIV) analysis done using the PIVLab toolbox<cit.> in Matlab. The Wiener2 denoise filter was used in the image pre-processing, and the interrogation window size was chosen to be 16 × 16 pixels^2 (5.52 ×5.52^2), corresponding to the typical body length of the studied bacteria. To reduce the noise, the acquired velocity field was further smoothed by averaging over 1, which corresponds to the typical lifetime of turbulent vortices (Fig. <ref>g).
The local orientation of the bacteria was obtained by using OrientationJ plugin<cit.> in ImageJ that detects the direction of the largest eigenvector of the structure tensor of the image. We set the local window size of the structure tensor to 15 × 15 pixels (corresponding to 5). The orientation field was further coarse-grained and reduced to the same matrix size as that of the PIV velocity field after the convolution with half of the PIV window size.
The flow velocity field u in equation (<ref>) was computed by a spectral method that first solves the equation in Fourier space and then transforms back to real space.
§ THEORETICAL MODEL
§.§ Orientational dynamics of bacteria in a ferrofluid
Bacteria suspended in a ferrofluid under an applied magnetic field create rod-shaped voids in which magnetic moments anti-parallel to the field are induced<cit.>. The resulting interactions nematically reorient the bacteria along the magnetic field. The orientational dynamics of a bacterium in a dilute suspension can be described by
d θ/dt = -Γ∂ U_m/∂θ + √(2D_θ)η(t).
The first term is the magnetic torque, with the response coefficient Γ of the bacterial orientation to the magnetic field and the dimensionless magnetic potential U_m = ϵ_1 B^2sin^2θ + ϵ_2 B^2, where ϵ_1,2 are magnetic constants dependent on the magnetic susceptibility, the permeability of vacuum, and the volume of the bacterium<cit.>. The second term is white Gaussian noise with rotational diffusion coefficient D_θ, satisfying ⟨η(t)⟩=0 and ⟨η(t)η(t^')⟩=δ(t-t^'), where δ(t-t^') is the Dirac delta function. Note that tumbling occurs at most once during the observation time of 10, and its effect on this time scale is small enough to be neglected here for simplicity. Following the conventional procedure, under the assumption that the distribution of bacterial orientations is spatially uniform<cit.>, one obtains the Fokker-Planck equation for the probability distribution of the bacterial heading θ:
∂ P/∂ t = D_θ∂^2 P/∂θ^2 + ∂/∂θ(Γ∂ U_m/∂θ P).
The solution in the steady state can be written as
P(θ) = Aexp(-Γ/D_θϵ_1 B^2sin^2θ)
where A denotes a normalization factor. This function is fitted to the probability distribution in Fig. <ref>e. Since it can be interpreted as a Gaussian around θ=0, as shown in Fig. <ref>f, we analyzed the orientation fluctuation by comparing e^-θ^2/(2⟨δθ^2⟩) with equation (<ref>). Thus, 1/⟨δθ^2⟩ corresponds to 2Γϵ_1 B^2/D_θ.
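The Langevin dynamics above and its stationary distribution can be cross-checked with a simple Euler-Maruyama simulation; the alignment strength κ = 2Γϵ_1B^2/D_θ used below is an assumed value chosen for the demonstration, not a measured one.

import numpy as np

# Euler-Maruyama simulation of the orientational Langevin equation against
# its stationary solution P(theta) ~ exp(-(kappa/2)*sin^2(theta)).
rng = np.random.default_rng(3)
D_th, kappa = 0.066, 5.0          # rad^2/s (assumed units); kappa assumed
dt, nsteps, nwalk = 2e-3, 50_000, 500

theta = rng.uniform(-np.pi, np.pi, nwalk)
for _ in range(nsteps):
    drift = -0.5*kappa*D_th*np.sin(2.0*theta)      # -Gamma dU_m/dtheta
    theta += drift*dt + np.sqrt(2*D_th*dt)*rng.normal(size=nwalk)

S = np.mean(np.cos(2.0*theta))    # nematic order of the stationary ensemble
print(S)                          # ~0.53 for kappa = 5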
To estimate the rotational diffusion coefficient D_θ, we tracked swimming trajectories in a dilute bacterial suspension with a 10% ferrofluid concentration using PTV analysis. Here, owing to the difficulty of tracking the bacterial polarity for a long time, we use the velocity unit vector instead of the bacterial orientation, which is justified by the fact that polarity and velocity are almost identical in the absence of interactions with other cells. We analyzed the mean square angular displacement, defined as ⟨(d(Δ t)-d(0))^2⟩, where d(Δ t) denotes the unit vector of the swimming velocity a time Δ t after the initial condition d(0) (Fig. S4). The ensemble average ⟨·⟩ was taken over all bacteria that were tracked for over 10 and exhibited straight trajectories with mean curvature radii above 700, to exclude bacteria showing clearly curved trajectories due to their chirality and hydrodynamic interactions with interfaces. Fitting with 2(1-exp(-D_θΔ t)) yields the estimate D_θ = 0.066^2<cit.>.
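A minimal sketch of this fit follows, applied to synthetic data generated with the quoted D_θ; the real analysis of course uses the tracked trajectories.

import numpy as np
from scipy.optimize import curve_fit

# Fit of the mean-square angular displacement to 2*(1 - exp(-D_theta*dt)),
# as used above to estimate D_theta (synthetic data for illustration).
def msad_model(dt, D):
    return 2.0*(1.0 - np.exp(-D*dt))

dt = np.linspace(0.1, 10.0, 50)
msad = msad_model(dt, 0.066) + 0.005*np.random.default_rng(4).normal(size=dt.size)
(D_fit,), _ = curve_fit(msad_model, dt, msad, p0=(0.1,))
print(D_fit)   # ~0.066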
§.§ Linear stability analysis
To reveal the undulation instability, we analyze the linear stability of a uniformly aligned state along the magnetic field, characterized by the vector J_B = J_B h, with the coefficient J_B related to the magnetic field strength and the direction h = (1, 0). We consider perturbations of the orientation field in the y direction: n( r,t) = e_x + n_⊥ (| n_⊥|≪1), where e_x=(1, 0) and e_x· n_⊥ = 0. We also define perturbations of the fluid flow velocity and pressure, u( r) = u^' (| u^'|≪1) and p( r) = η (η≪1), respectively. Substituting these variables into equations (<ref>) and (<ref>) and retaining first-order terms in the infinitesimal quantities, equations (<ref>) and (<ref>) reduce to
-μ∇^2 u^' + ∇η + α u^' = -f_0∇· ( n_⊥ e_x + e_x n_⊥),
∂ n_⊥/∂ t + V_0 e_x·∇ n_⊥ = D_n∇^2 n_⊥ + ( I - e_x e_x)·(γ E^'+ W^')· e_x - J_B n_⊥.
Upon the incompressibility condition (∇· u^' = 0), Fourier transformation of the fluid flow velocity, u^'( r) = ũ( k)e^i k· r, gives the solution of the fluid flow velocity in Fourier space:
ũ( k) = -if_0/α + μ k^2( I - k k/k^2)·( n_⊥ e_x + e_x n_⊥)
where k = | k|. We take the direction of k to be at an angle φ, i.e., k = (kcosφ, ksinφ). Substituting the velocity (<ref>) into equation (<ref>) and Fourier transforming the orientation field, n_⊥( r) = ñ( k)e^i k· r + σ t, gives the growth rate:
σ = f_0k^2/2(α + μ k^2)cos2φ(γcos2φ + 1) - D_nk^2 - J_B - ikV_0cosφ.
The real part of the growth rate in the x direction (φ = 0) reads as equation (<ref>). In addition, the real part of the growth rate in the y direction (φ = π/2) is negative for all wavenumbers. This means that as long as the system is in a nearly aligned state parallel to the magnetic field, the alignment is stable in the transverse direction, consistent with the increase of the characteristic lengths in the transverse direction beyond the transition point of the magnetic field (Fig. <ref>c).
§.§ Comparison of magnetic torques between microscopic and continuum descriptions
To derive the specific expression for the coefficient of the magnetic torque in equation (<ref>), we write down the evolution equation of the angle θ, defined by n(r,t) = (cosθ, sinθ), keeping only the magnetic torque:
[ -sinθ; cosθ ]d θ/dt = J_Bcosθ[ sin^2θ; -sinθcosθ ],
d θ/dt = -J_Bcosθsinθ.
Equation (<ref>) readily reads
d θ/dt = -2Γϵ_1 B^2cosθsinθ,
thus giving J_B = 2Γϵ_1 B^2.
§ DATA AVAILABILITY
All the raw data in the main text can be found in Zenodo. The link will be added after the manuscript is accepted.
§ CODE AVAILABILITY
The custom-made codes used for the analysis of the experiment and the numerical calculation are available from the corresponding author upon request.
§ ACKNOWLEDGEMENTS
We thank F. Sohrabi for technical support in bacterial culture, T. Kärki and L. Laiho for establishing protocols for bacterial culture, and C. Rigoni for building the Helmholtz coil setup. J.V.I.T acknowledges funding from ERC (803937). This work was carried out under the Academy of Finland Center of Excellence Program (2022-2029) in Life-Inspired Hybrid Materials (LIBER), project number (346112). K.B. acknowledges support from the Overseas Postdoctoral Fellowship of the Uehara Memorial Foundation.
§ AUTHOR CONTRIBUTIONS
K.B. and J.V.I.T. designed the project and wrote the manuscript. K.B. performed the experiments, the data analysis, and the theoretical analysis.
§ COMPETING INTERESTS
The authors declare no competing interests.
§ SUPPLEMENTARY VIDEOS
Video 1. Swimming bacteria in a dilute bacterial suspension with a 10% ferrofluid concentration under an application of the magnetic field B = 8.0 mT. The local trajectories of tracked bacteria for a duration of 10 are displayed. Scale bar, 100.
Video 2. Swimming bacteria in a dilute bacterial suspension with a 10% ferrofluid concentration under an application of the magnetic field B = 28.2 mT. The local trajectories of tracked bacteria for a duration of 10 are displayed. Scale bar, 100.
Video 3. The dense bacterial suspension with a 10% ferrofluid concentration under an application of the magnetic field B = 28.2 mT. (Left) The bright-field movie. (Right) The color map of the orientation field overlaid on the bright-field images. Scale bars, 100.
Video 4. The vorticity map with velocity streamlines obtained from Video 3. Scale bar, 100.
Video 5. The zoom-in of Supplementary Video 3. This video is played at 1/2 speed of real-time. Scale bar, 20.
|
http://arxiv.org/abs/2307.04391v1 | 20230710075459 | Vehicle Detection in 6G Systems with OTFS Modulation | [
"Pavel Karpovich",
"Tomasz P. Zielinski"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT",
"H.1.1"
] |
Accepted for Konferencja Radiokomunikacji i Teleinformatyki KRiT-2023, Krakow 2023 (author's version)
VEHICLE DETECTION IN 6G SYSTEMS WITH OTFS MODULATION
Pavel Karpovich ^1,2;
Tomasz P. Zielinski^2;
^1 Institute of Telecommunications AGH, Krakow mailto:[email protected],[email protected]
^2 Nokia Solutions and Networks, Krakow, mailto:[email protected]
The recently introduced orthogonal time frequency space modulation (OTFSM) is more robust to large narrow-band Doppler frequency shifts than the orthogonal frequency division multiplexing (OFDM) used in the 5G standard. In this paper, it is shown how the telecommunication OTFSM-based signal with random padding can be used successfully in the 6G standard for the detection of high-speed vehicles. Two approaches for detecting targets during the random-padded OTFS-based transmission are compared in the paper.
5G, 6G, OFDM, OTFSM, radar.
§ INTRODUCTION
In the last few years, the scientific community's attention has focused on next-generation 6G communication. Many publications discuss which applications will drive the 6G network and which technologies should be included in the 6G standard to satisfy their requirements <cit.> <cit.>. Among the large number of proposals, some recur most often, such as terahertz waves and integrated sensing and communication (ISAC) <cit.> <cit.>. This paper addresses the problem of adding radar functionality to future communication systems that will use higher-frequency carriers and support high-mobility users.
Using the terahertz band is challenging: even relatively slow objects can generate very high Doppler frequency shifts. The strong Doppler effect limits the use of the orthogonal frequency division multiplexing (OFDM) waveform, which is at present the de facto standard waveform in telecommunication systems (e.g. DVB-T2, Wi-Fi, LTE, 5G <cit.>). OFDM is based on the assumptions that the linear convolution of the signal with the channel impulse response can be replaced by a circular convolution, and that the channel impulse response is time-invariant or almost time-invariant. This allows very fast and simple channel impulse response estimation. In a strong-Doppler environment the assumption of a constant channel impulse response is no longer valid, since every channel coefficient keeps rotating in the complex plane due to the Doppler effect. Using OFDM in such conditions leads to errors in channel estimation and equalization, and eventually to inter-carrier interference (ICI) and, subsequently, errors in bit detection.
Increasing the sub-carrier spacing (SCS) in OFDM helps to deal with strong Doppler frequency shifts. However, this also increases the OFDM cyclic-prefix overhead and reduces transmission efficiency <cit.>. To eliminate the above-mentioned disadvantage of OFDM, orthogonal time frequency and space (OTFS) modulation was recently introduced in <cit.>. Due to its unique features it is seriously considered as one of the possible 6G waveforms <cit.>.
In this article, simulation results for an ISAC system using the OTFS waveform are shown. We start with a description of the OTFS waveform, present the delay-Doppler domain used in OTFS, and discuss the different pilot configurations exploited in it. Next, we introduce the ISAC system using the OTFS waveform. Finally, in the experimental part, we show simulation results for the radar part of the discussed RP-OTFS-based ISAC system.
In <cit.>, simulation results for the communication part of the RP-OTFS transmission system were presented, while this paper addresses the simulation of the radar part only. Practical verification of the general RP-OTFS-based transmission and sensing concept was already presented in <cit.>.
§ ORTHOGONAL TIME FREQUENCY AND SPACE
The concept of OTFS is shown in figure <ref> <cit.> <cit.>. In contrast to OFDM, OTFS is a two-dimensional modulation technique. In OTFS the modulation process looks as follows. First, modulated IQ/QAM symbols are placed into the elements of the matrix A in figure <ref>, i.e. on the grid in the delay-Doppler (DD) domain. Then, the inverse Zak transform (an inverse Fourier transform over the Doppler axis) <cit.> is used to transform the data from the DD domain to the fast time–slow time (TT) domain. Finally, the obtained samples are reshaped from a matrix into a vector. The use of the DD grid for data modulation makes the OTFS waveform attractive for ISAC, since it is the "native" domain for radars; a minimal sketch of this chain is given below.
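The following numpy sketch illustrates the modulation chain just described; the grid orientation (delay along rows, Doppler along columns), the normalization, and the absence of windowing and CP handling are simplifying assumptions of ours, not details from the paper.

```python
import numpy as np

def otfs_modulate(A):
    """Inverse Zak transform of an M x N delay-Doppler grid A of QAM
    symbols: an IFFT along the Doppler axis gives the fast/slow-time
    (TT) matrix, which is then reshaped into the transmit vector."""
    M, N = A.shape
    X_tt = np.fft.ifft(A, axis=1)           # DD -> TT (inverse Zak)
    return X_tt.reshape(M * N, order='F')   # fast time runs first

def otfs_demodulate(x, M, N):
    """Zak transform back to the delay-Doppler grid: reshape into the
    TT matrix and take an FFT along the slow-time (Doppler) axis."""
    X_tt = x.reshape((M, N), order='F')
    return np.fft.fft(X_tt, axis=1)

# round-trip check on random 4-QAM symbols, using the 64x256 grid
# from the experimental part of the paper
A = (2 * np.random.randint(0, 2, (64, 256)) - 1 +
     1j * (2 * np.random.randint(0, 2, (64, 256)) - 1))
assert np.allclose(A, otfs_demodulate(otfs_modulate(A), 64, 256))
```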
§.§ The delay-Doppler grid
The names of the DD grid directions reflect their physical meaning. The delay direction (the first D in DD) consists of adjacent samples from the time domain; the Δt between samples is small. This direction is suitable for detecting small time shifts in the observed signal. For example, in a multi-path propagation environment, the delay differences between paths are not very large and can be estimated in the delay direction of the DD grid. However, the delay direction is not suitable for observing long-time processes like the Doppler effect, because Doppler frequencies are usually very small and require more time for estimation. In the Doppler direction (the second D in DD) only every Mth sample from the time domain is used, which allows long-time signal changes to be estimated using FFTs of small size.
Parameters of the DD grid should be chosen taking into account that the transmitted OTFS-modulated waveform will be used both for digital data transmission and for moving-vehicle detection. Since sources of multi-path reflections can be treated as "radar targets", the OTFS DD grid should fulfil both telecommunication and radar requirements. The DD grid has two parameters: M, the number of samples in the delay (fast-time) direction, and N, the number of samples in the Doppler direction. Looking at figure <ref>, we can say that in the Doppler direction the signal is effectively decimated by M. Hence, taking the Nyquist theorem into account, the maximum Doppler offset that can be estimated with such a DD grid is f_d max = ±f_s/(2M), where f_s is the sampling rate. The resolution in the Doppler direction depends on N: increasing N while keeping f_s and M constant increases the FFT length and hence the Doppler resolution. The resolution in the delay direction depends only on f_s. By choosing f_s, M, N and the carrier frequency f_c, one can optimize both the OTFS-based radar and the digital transmission.
For example, let us choose the DD grid parameters for radar detection of many moving cars (reflections from stationary objects are not of interest here). For a maximum car speed of 60 m/s, a carrier frequency f_c = 52.6 GHz (the maximum carrier frequency for 5G FR2) and a sampling rate f_s = 50 MHz, the maximum Doppler frequency shift is about 21 kHz, which corresponds to a maximum M of 1190. Then, by fixing M = 1024 and changing N, one obtains different velocity-estimation resolutions, ranging from 9 m/s (for N = 8) to 0.1 m/s (for N = 512), where N = 8 and N = 512 are exemplary values.
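The quoted Doppler numbers can be reproduced in a few lines; this is a sketch, and the two-way Doppler formula f_d = 2 v f_c / c is our assumption about the convention used.

```python
c, f_c, f_s, v = 3.0e8, 52.6e9, 50e6, 60.0   # SI units

f_d = 2 * v * f_c / c    # two-way Doppler shift: ~21.0 kHz, as quoted
M_max = f_s / (2 * f_d)  # Nyquist limit on M (= 1190 if f_d is rounded to 21 kHz)
print(f"f_d = {f_d / 1e3:.1f} kHz, M_max = {M_max:.0f}")

# Doppler bin width of the DD grid for fixed M and the exemplary N values
M = 1024
for N in (8, 512):
    print(f"N = {N}: Doppler bin = {f_s / (M * N):.0f} Hz")
```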
§.§ Pilots configurations
Like OFDM, OTFS uses pilots for estimation of the channel impulse response (CIR), but their configurations differ. Here we discuss two pilot-placement strategies, shown in figure <ref>: a zero-padded one (ZP-OTFS) and a random-padded one (RP-OTFS). In both configurations the DD matrix A is divided into two parts: the data zone and the pilot zone. Every carrier in the DD grid is assigned either to the pilot zone or to the data zone, never to both at the same time.
§.§.§ ZP-OTFS
In ZP-OTFS, the pilot takes the form of a rectangular zone of the DD matrix A, shown in figure <ref>, which is filled with zeros and has only one non-zero carrier at its center. We call this non-zero carrier the pilot pulse. In ZP-OTFS, the length of the pilot zone in the delay direction is twice the length of the channel impulse response. In the Doppler direction the pilot zone usually occupies all cells, as shown in figure <ref>.
Because zeros surround the pilot pulse, channel estimation is very simple in ZP-OTFS. There is also no interference between the pilot and data zones, and no inter-symbol interference between ZP-OTFS symbols. In ZP-OTFS the channel impulse response is estimated by dividing every cell of the received pilot zone by the known transmitted pilot pulse (a threshold should be introduced here so that reflection-free, noise-only cells of the pilot zone are discarded). The main disadvantage of ZP-OTFS is low energy efficiency, because the pilot zone is very sparse.
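A sketch of this thresholded-division estimator follows; the function and variable names are ours, and indexing of the pilot zone within the full DD grid is omitted.

```python
import numpy as np

def zp_otfs_channel_estimate(Y_pilot, pilot_pulse, threshold):
    """DD-domain channel estimate from a received ZP-OTFS pilot zone:
    divide each cell by the known pilot pulse, then zero out cells
    whose magnitude is below the noise threshold, keeping only
    genuine reflections."""
    H = Y_pilot / pilot_pulse
    H[np.abs(Y_pilot) < threshold] = 0.0
    return H
```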
§.§.§ RP-OTFS
The recently introduced RP-OTFS <cit.> <cit.> is designed to correct the deficiencies of ZP-OTFS. Here the pilot zone is filled with short OFDM symbols carrying random data, treated as pilots — see figure <ref>. In ZP-OTFS, discussed earlier, the data zone is generated in the DD domain and transformed to the fast/slow-time domain by the inverse Zak transform. In RP-OTFS, by contrast, the pilot OFDM symbols are inserted directly into the fast/slow-time grid (without the inverse Zak transform). The absence of zeros in the pilot zone increases the signal-to-noise ratio (SNR) and makes RP-OTFS more efficient than ZP-OTFS in CIR estimation, which is very important for both communication and radar.
The CIR estimation begins with conventional OFDM channel estimation, with the only difference that the whole OFDM symbol is treated as a pilot. Once all momentary CIR estimates have been found from all OFDM symbols (having both the transmitted and the received pilots, one can easily estimate the CIR taps), the matrix of CIR taps is transformed to the DD domain by the Zak transform, i.e. by performing an FFT over the rows of the CIR matrix. Note that in RP-OTFS the Zak transform is applied to the CIR estimates, not to the time samples of the OFDM symbols from which the CIR was calculated. A minimal sketch of this chain is given below.
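The sketch below makes simplifying assumptions of ours: least-squares estimation over whole pilot symbols, CPs already stripped, and L x N pilot matrices holding one OFDM symbol per column.

```python
import numpy as np

def rp_otfs_cir_to_dd(rx_pilots, tx_pilots):
    """RP-OTFS channel estimation: per-symbol OFDM channel estimates
    are converted to momentary CIR taps, and the Zak transform (FFT
    over the rows, i.e. along slow time) maps the CIR matrix to the
    delay-Doppler domain."""
    H_f = np.fft.fft(rx_pilots, axis=0) / np.fft.fft(tx_pilots, axis=0)
    cir = np.fft.ifft(H_f, axis=0)   # column n: CIR during pilot symbol n
    return np.fft.fft(cir, axis=1)   # slow time -> Doppler (Zak transform)
```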
RP-OTFS has two disadvantages. Firstly, the cyclic prefix (CP) of the OFDM-based pilot should be as long as the OFDM symbol itself, and this CP overhead reduces the achievable bit rate. Secondly, the CIR is assumed to be quasi time-invariant, so long OFDM pilots cannot be used on channels with very high Doppler frequencies.
§ INTEGRATED SENSING AND COMMUNICATION (ISAC)
In ISAC <cit.> <cit.>, the communication processing is usually the same as in a conventional system. In this paper we concentrate on the specifics of RP-OTFS radar processing, since the efficiency of the RP-OTFS-based communication sub-system has already been tested <cit.> <cit.>. Two approaches to target detection are analyzed: correlation-based and pilot-based.
The first, correlation-based method originates from classical radar processing, in which a cross-ambiguity function (CAF) is used <cit.>: the transmitted reference signal (known, re-modulated in the receiver, or acquired by a dedicated reference antenna) is shifted in time and frequency and correlated with the received surveillance signal. The problem with the correlation-based radar approach is that it is usually hard to find weak reflections from small moving objects against the background of strong reflections from buildings (the radar clutter problem) <cit.>. A common batched evaluation of the CAF is sketched below.
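The following sketch shows one common batched implementation of the CAF; the circular delay handling and the choice of batch size are simplifications of ours.

```python
import numpy as np

def cross_ambiguity(surv, ref, n_delays, M):
    """Batched cross-ambiguity function: for each trial delay, multiply
    the surveillance signal by the conjugated, delayed reference, sum
    within length-M batches (fast time) and FFT over the batches to
    resolve Doppler."""
    K = len(surv) // M                      # number of Doppler bins
    caf = np.empty((n_delays, K), dtype=complex)
    for d in range(n_delays):
        prod = surv[:K * M] * np.conj(np.roll(ref, d)[:K * M])
        caf[d] = np.fft.fft(prod.reshape(K, M).sum(axis=1))
    return caf                              # |caf| is the DD radar map
```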
In the second, pilot-based approach to vehicle detection, the transmitted pilots, known in the receiver, are used for CIR estimation <cit.>. For reflections coming from moving vehicles, some CIR taps are complex-valued numbers that oscillate in time at the Doppler frequency shift caused by the reflecting object's motion. Here, radar targets are treated as sources of multi-path propagation. By analyzing the CIR we can retrieve information about the signal reflections and about the reflecting objects.
The pilot-based ISAC system requires undistorted CIR estimates for the extraction of Doppler frequency shifts. As mentioned in the introduction, high-Doppler objects cannot be handled by OFDM; this also limits the applicability of pilot-based radars that use OFDM-based pilots.
§ EXPERIMENTAL PART
In the experimental part we simulated the radar performance of the discussed RP-OTFS-based ISAC system. The parameters of the applied OTFS-based signal were as follows: grid size in the delay and Doppler directions 64x256 (MxN), pilot-zone length L = 16 (the meaning of L is explained in fig. <ref>), modulation 4-QAM, carrier frequency 4 GHz and bandwidth 20 MHz. In the simulations we used different target velocities in order to test the system performance under different conditions.
Delay-Doppler (distance-velocity) radar maps for a target moving at about 139 m/s (500 km/h), calculated with both tested radar approaches (correlation-based and pilot-based), are shown in figure <ref>. In both methods an integration/observation time of 100 ms was used. The input signal had a signal-to-noise ratio (SNR) of 0 dB. In both cases one can clearly see sharp peaks in the delay-Doppler (distance-velocity) matrix that correspond to the parameters of the moving target. However, the CAF contains two additional, lower peaks, generated by the CP of the pilot part of the RP-OTFS waveform. Since in the pilot-based approach the CP is eliminated from the signal-processing chain, such peaks are absent from the DD map of this method. For the correlation-based radar the mean level of the background side-lobes surrounding the detection peak is about -30 dB, while for the pilot-based radar it is about -40 dB.
Figures <ref> and <ref> show processing-gain charts for both discussed RP-OTFS-based radars, i.e. the root-mean-square (RMS) value of the method's noise floor (visible in figure <ref>), expressed in decibels, as a function of the input signal-to-noise ratio (SNR). The simulated maximum vehicle speed (v_m) was 50 km/h (13.9 m/s) or 500 km/h (139 m/s), and the integration/observation time (T_i) varied from 10 ms to 200 ms. In figure <ref> the two tested RP-OTFS-based radars are compared: the pilot-based version outperforms the correlation-based one in DD detection peak height, i.e. in noise robustness.
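The noise-floor figure of merit used in these charts can be computed as sketched below; the guard-cell handling around the main peak is our assumption, not a detail from the paper.

```python
import numpy as np

def noise_floor_db(dd_map, guard=2):
    """RMS level of a delay-Doppler map outside the detection peak,
    in dB relative to the peak."""
    mag = np.abs(dd_map)
    i, j = np.unravel_index(np.argmax(mag), mag.shape)
    mask = np.ones_like(mag, dtype=bool)
    mask[max(i - guard, 0):i + guard + 1,
         max(j - guard, 0):j + guard + 1] = False   # exclude the peak
    rms = np.sqrt(np.mean(mag[mask] ** 2))
    return 20.0 * np.log10(rms / mag[i, j])
```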
§ DISCUSSION
The main limiting factor of the correlation-based RP-OTFS radar is the high level of CAF side-lobes, resulting in a significantly lower output SNR compared with the pilot-based radar. Figures <ref> and <ref> confirm the quantitative conclusions that can be drawn from figure <ref>.
As mentioned before, in the development of the discussed RP-OTFS-based ISAC system we assumed that the channel impulse response is quasi time-invariant within the pilot zone. For high-mobility Doppler channels this assumption is fulfilled only approximately, which limits the maximum processing gain of the presented pilot-based RP-OTFS radar. The consequences of this drawback grow with velocity, as visible in fig. <ref>, and the same effect is observed when the pilot-zone length is increased. Nevertheless, the obtained results confirm that the pilot-based radar outperforms the correlation-based one in terms of noise robustness.
§ CONCLUSION
Two moving-vehicle detection approaches based on the RP-OTFS ISAC system were compared in this paper. The main limiting factor of the correlation-based radar method is the high level of CAF side-lobes, apart from the existence of two additional peaks in the CAF caused by the repetition of the pilot samples. Detecting targets with a low radar cross-section against a strong background signal, the so-called clutter (e.g. the direct-path signal), is very challenging here. The presence of many ghost peaks in the delay-Doppler (distance-velocity) map makes the subsequent processing steps of this method very challenging.
In turn, the pilot-based RP-OTFS radar is characterized by a lower level of side-lobes in the delay-Doppler map and has no extra peaks caused by the repeated pilot samples. However, this approach is sensitive to the quality of the channel-impulse-response estimate. In order to minimize the error of the CIR estimate, and consequently the error of moving-object detection, the pilot zone should be kept as short as possible.
99
6G_harsh
H. Tataria et al., "6G Wireless Systems: Vision, Requirements, Challenges, Insights, Opportunities," Proc. IEEE, vol. 109, no. 7, pp. 1166-1199, July 2021.
6G_vision
W. Saad, M. Bennis and M. Chen, "A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems," IEEE Network, vol. 34, no. 3, pp. 134-142, May/June 2020.
isac1
F. Liu et al., “Integrated Sensing and Communications: Toward Dual-Functional Wireless Networks for 6G and Beyond,” IEEE J. on Selected Areas in Comm., vol. 40, no. 6, pp. 1728-1767, June 2022.
isac2
Z. Wei et al., “Integrated Sensing and Communication Signals Towards 5G-A and 6G: A Survey,” IEEE Internet of Things Journal, early access, 2023.
ofdm_numerology
J. Flores de Valgas, J. F. Monserrat, H. Arslan, "Flexible Numerology in 5G NR: Interference Quantification and Proper Selection Depending on the Scenario", Mobile Information Systems, vol. 2021, Article ID 6651326, 9 pages, 2021.
otfs1
R. Hadani et al., "Orthogonal Time Frequency Space Modulation," 2017 IEEE Wireless Comm. and Networking Conf. (WCNC), San Francisco, CA, USA, 2017, pp. 1-6, 2017.
otfs2
Z. Wei et al., “Orthogonal Time-Frequency Space Modulation: A Promising Next-Generation Waveform,” IEEE Wireless Comm., vol. 28, iss. 4, pp. 136-144, 2021.
my_rp1
P. Karpovich and T. P. Zielinski, "Random-Padded OTFS Modulation for Joint Communication and Radar/Sensing Systems," 2022 23rd Int. Radar Symp. (IRS), pp. 104-109, Gdansk 2022.
my_rp2
P. Karpovich et al., “Field Tests of a Random-Padded OTFSM Waveform in a Joint Sensing and Communication System,” IEEE ICC Int. Communications Conf., Rome 2023.
zak
H. Bolcskei and F. Hlawatsch, "Discrete Zak transforms, polyphase transforms, and applications," in IEEE Trans. on Signal Processing, vol. 45, no. 4, pp. 851-866, April 1997.
radar
M.A. Richards, “Fundamentals of Radar Signal Processing,” McGraw-Hill Education, 2014.
my_dvbt2
P. Karpovich et al., "Practical Results of Drone Detection by Passive Coherent DVB-T2 Radar," 21st Int. Radar Symp. (IRS), pp. 77-81, Warsaw 2020.
ofdm_base_radar
M. Braun et al., "Parametrization of joint OFDM-based radar and comm. systems for vehicular applications," 2009 IEEE Int. Symp. on Personal, Indoor & Mobile Radio Comm., pp. 3020-3024, Tokyo 2009.
|